News AMD MI300X posts fastest ever Geekbench 6 OpenCL score — 19% faster than RTX 4090, and only twelve times as expensive

Admin

Administrator
Staff member
  • Like
Reactions: softechie
Apr 8, 2024
You mention in the title that the GPU is 12x more expensive, yet there's no factual information anywhere in the article stating its price in comparison to the 4090.
Out of curiosity, what pricing information are you referring to?
 

KnightShadey

Reputable
Sep 16, 2020
You mention in the title that the GPU is 12x more expensive, yet there's no factual information anywhere in the article stating its price in comparison to the 4090.
Out of curiosity, what pricing information are you referring to?

Yeah, it's a mediocre article, written a couple of days after others commented on the price difference, and realistically it gets that difference wrong. 🙄

Even using THG's own numbers from this article;
https://www.tomshardware.com/tech-i...ce-nvidias-h100-has-peaked-beyond-dollar40000

The MI300X's usual base price is $15K, but M$ is getting them for $10K.

Now, for the RTX 4090 on Amazon, a quick look finds a PNY card as the cheapest I can see, at $1,739:
https://www.amazon.com/geforce-rtx-4090/s?k=geforce+rtx+4090


Which makes the price difference $15K / $1.7K ≈ 8.8x, not the 10x of the original articles nor the 12x of this one. 🧐
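For what it's worth, the ratio math is easy to sanity-check. A quick sketch, using only the prices quoted in this thread (not official pricing):

```python
# Quick sanity check of the price ratios discussed above.
# Prices are the figures quoted in this thread, not official pricing.
mi300x_typical = 15_000   # reported typical MI300X price (USD)
mi300x_msft = 10_000      # reported price M$ is said to be paying (USD)
rtx4090_cheapest = 1_739  # cheapest RTX 4090 (PNY) spotted on Amazon (USD)

print(f"{mi300x_typical / rtx4090_cheapest:.1f}x")  # ~8.6x at the $15K price
print(f"{mi300x_msft / rtx4090_cheapest:.2f}x")     # ~5.75x at the $10K price
```

Using the exact $1,739 figure rather than the rounded $1.7K gives ~8.6x, slightly below 8.8x, but either way it's well short of 10x or 12x.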
 

KnightShadey

Reputable
Sep 16, 2020
A quick Google search suggests MI300X costs around $20,000. 🤷‍♂️

Really? Unlikely.

Seems like THG should've stuck with the source in their own article rather than the Reddit post Google surfaces claiming $20K, especially given the 2+ day lag behind the original breaking of this result in other articles that do quote a price rather than asking the reader to google it:
https://www.techradar.com/pro/amd-p...ortgage-to-buy-amds-fastest-gpu-ever-produced

BTW, Google also quotes the $10K number from THG's article for the M$ price, which would make the ratio even smaller, at 5.75x compared to the current cheapest RTX 4090 on Amazon.

Considering that there is a legacy GPU buyer's guide with pricing (a descendant of the early one Cleeve, Pauldh, and I started in the forum), it seems like at least the latest 4090 pricing should've been readily on hand. 🤔
 
Last edited:
  • Like
Reactions: Makaveli

atmapuri

Distinguished
Sep 26, 2011
Aren't all RTX cards from Nvidia LHR? Low Hash Rate, meaning OpenCL performance is reduced by 10x? We always use AMD because of that. I don't understand what Geekbench is measuring on the 4090.
 
Really? Unlikely.

Seems like THG should've stuck with the source in their own article rather than the Reddit post Google surfaces claiming $20K, especially given the 2+ day lag behind the original breaking of this result in other articles that do quote a price rather than asking the reader to google it:
https://www.techradar.com/pro/amd-p...ortgage-to-buy-amds-fastest-gpu-ever-produced

BTW, Google also quotes the $10K number from THG's article for the M$ price, which would make the ratio even smaller, at 5.75x compared to the current cheapest RTX 4090 on Amazon.

Considering that there is a legacy GPU buyer's guide with pricing (a descendant of the early one Cleeve, Pauldh, and I started in the forum), it seems like at least the latest 4090 pricing should've been readily on hand. 🤔
You're rather missing the point here, which isn't the price of the MI300X compared to RTX 4090 or anything else, but instead is the fact that running Geekbench 6 OpenCL benchmarks on a single MI300X is silly. I've updated the "twelve times" to "eight times" though, which obviously completely changes... nothing at all as far as the rest of the text is concerned.
 
  • Like
Reactions: Makaveli
Aren't all RTX cards from Nvidia LHR? Low Hash Rate, meaning OpenCL performance is reduced by 10x? We always use AMD because of that. I don't understand what Geekbench is measuring on the 4090.
Nvidia LHR specifically detected Ethereum hashing and attempted to limit performance. It did not impact (AFAIK) any other hashing algorithms and was not at all universal to OpenCL. Later software implementations were able to work around the hashrate limiter, but then Ethereum ended proof of work mining and went proof of stake and so none of it matters. Ada Lovelace GPUs (RTX 40-series) do not have any LHR stuff in the drivers or firmware, because by the time they came out, GPU mining was horribly inefficient and thus no one was buying 4090 etc. to do crypto. Or at least, no one smart.
 

DS426

Prominent
May 15, 2024
It would have been enough to say that the MI300X is "several times as expensive" as the 4090, as there's simply no arguing with that when no specific multiple is asserted. Providing a specific multiple implies to me that the author wants to communicate that the MI300X offers lower value, which clearly isn't the case, since a consumer card is being weighed against an enterprise product, and, as the article itself admits, the comparison doesn't account for AI performance or basically any other particular performance metric.

That said, going on the pricing history of the MI250 and what I can tell is realistic pricing for buyers that aren't hyperscalers like M$ and such, $15K is probably pretty common, even if there are several cases of contracts coming in closer to $12K or so.
 

Deleted member 2731765

Guest
However, despite being one of the fastest GPUs on the Geekbench 6 charts, the AMD GPU's score does not reflect its real performance and shows why it's a terrible idea to benchmark datacenter AI GPUs using consumer grade OpenCL applications (which is what Geekbench 6 is).

Don't use Geekbench 6 OpenCL as a measuring stick for enterprize-grade hardware, in other words.

First of all, no one said GB OpenCL is the perfect benchmark for testing server grade hardware.

And, obviously, the score won't reflect the card's real-world performance, but that was never the point of this benchmark, which was run by an independent testing group. We all know this.

They just wanted to see the card's score in Geekbench's OpenCL benchmark. Nobody in their right mind would use this benchmark as a measuring stick to judge the MI300X accelerator's performance, and that wasn't the motive for the test either.

No, this was simply a check of the accelerator's OpenCL score. That's all.

Also, comparing the MI300X with the RTX 4090 is obviously silly and unfair, and makes no sense, since they are totally different products, differ vastly in hardware and compute specs, and target entirely different markets.

FWIW, the benchmark already has a few Nvidia Quadro and other server cards in the ranked listing. This isn't the first time a server-level GPU has been spotted, though.

19% faster than RTX 4090, and only twelve times as expensive

Funny how you use the word "only", as if the MI300X is some cheap and affordable consumer product. We are talking about a $10-15K USD ballpark here. Poor choice of words for the article heading, btw.

And as I mentioned before, comparing the MI300X with the RTX 4090 is just downright silly.
 
Last edited by a moderator:
  • Like
Reactions: KnightShadey

KnightShadey

Reputable
Sep 16, 2020
You're rather missing the point here, which isn't the price of the MI300X compared to RTX 4090 or anything else, but instead is the fact that running Geekbench 6 OpenCL benchmarks on a single MI300X is silly. I've updated the "twelve times" to "eight times" though, which obviously completely changes... nothing at all as far as the rest of the text is concerned.

I understand the absurdity of not only the comparison but also the chosen benchmark.
I was being clear to the original person asking, and then questioning the "google says" follow-up.

[rant]

However, as someone with a history in journalism, and with a family member still in a biz that's being undermined by sloppy 'news', the defense of the omitted price, or of the improper ratio, is a miss, and not on me or any other reader. I think you're missing that point.

I realize the titles are picked by AI based on content, and that it's done for expediency & cost, but considering this article was already 2 days after the fact, there was no need to move fast and break things instead of just supporting the title with the body and showing how the ratio was derived. It shouldn't be up to the reader to assume the basis for a 'NEWS' article.

It's already tough enough with the AMD vs Intel blood-feuds in the comments section, with claims and counter-claims with or without sources; NEWS should be obvious as to its veracity.

[/rant]

I say it as constructive criticism, not baseless bashing, especially of a site I spent over a decade contributing to and improving in the early 2000s along with many others, including those mentioned previously.

It's nice you updated it, and I respect that, because the reality is that the world (and the various scrapers/bots/etc.) does look to THG as a semi-credible source, and what is written here will be parroted and repeated long after the fact.

THAT, to me, was The Point of my reply.
 

cheesecake1116

Distinguished
May 20, 2014
Good read. And the testing is a bit more relevant, because no one is running Geekbench on this.

Hey, when you have the chance to get a top score you take it :cool:
But yes, in the article we looked at FluidX3D performance, performance in an in-house FP64 benchmark, LLM inference performance, as well as ran our microbenchmark suite on it.
 

KnightShadey

Reputable
Sep 16, 2020
Hey, when you have the chance to get a top score you take it :cool:
But yes, in the article we looked at FluidX3D performance, performance in an in-house FP64 benchmark, LLM inference performance, as well as ran our microbenchmark suite on it.
But will it RuN qUaKe? 🤪 Challenging without ROPs, but... 😉

Thanks for the entertaining testing, and so many other articles, Eh! 🤠🤙

PS: is there a high-resolution version of the core-to-core latency chart? The low-res one, as you suspected, is a bit unreadable beyond the 120ish to 190ish range, with the 116 ns & 202 ns values appearing, from a quick look, to be uncommon outliers.
Also, the cup-of-coffee motivation for CGP sounds very familiar. 🤣
 
Last edited:
  • Like
Reactions: bit_user
Hey, when you have the chance to get a top score you take it :cool:
But yes, in the article we looked at FluidX3D performance, performance in an in-house FP64 benchmark, LLM inference performance, as well as ran our microbenchmark suite on it.
Cheers, I believe we have a separate article looking at the other testing in the works. I'm not surprised the results are from your testing. LOL
 
  • Like
Reactions: bit_user
Interesting how that mistake made it past spell check and, presumably, at least one editor before publishing.
Probably my fault. We say "you win a prize," and somehow that ended up as "enterprize" in the text. What's curious to me is that the supposed built-in spell-checker of our CMS seems to have stopped working. Bleh.
 
  • Like
Reactions: bit_user

atmapuri

Distinguished
Sep 26, 2011
>Nvidia LHR specifically detected Ethereum hashing

Well, then this detection must be insanely over-reaching, because all OpenCL kernels executed on the latest RTX cards run slower than on a GTX 1080 from 2016. Maybe there are some RTX cards that don't have LHR, but I wouldn't know about it. The OpenCL timings on LHR RTX cards vary widely, by 10x up and down from run to run.
 

bit_user

Titan
Ambassador
It would have been enough to say that the MI300X is "several times as expensive" as the 4090, as there's simply no arguing with that when no specific multiple is asserted.
The phrase "order of magnitude" roughly covers the range.

Providing a specific multiple implies to me that the author wants to communicate that the MI300X offers lower value,
It's just a cheap tactic to draw readers' attention. IMO, the article does a fair job of clearing up the real disparity in performance & value.
 

bit_user

Titan
Ambassador
First of all, no one said GB OpenCL is the perfect benchmark for testing server grade hardware.
Indeed, OpenCL traditionally has issues with "performance portability" across substantially different platforms.

comparing the MI300X with the RTX 4090 is obviously silly and unfair, and makes no sense, since they are totally different products, differ vastly in hardware and compute specs, and target entirely different markets.
Something that hasn't gotten much attention (including none in the article!) is just how lopsided the performance comparison between the MI300X and RTX 4090 is on FP64 (which is critical for HPC applications).

GPU        Peak FP64 TFLOPS (vector)   Peak FP64 TFLOPS (tensor)
RTX 4090   1.3                         -
MI300X     81.7                        163.4

Furthermore, the MI300X is listed as having Infinity Fabric Bandwidth of up to 128 GB/s, spread across 8 links. It also has PCIe 5.0 x16, while the only connectivity provided by the RTX 4090 is its PCIe 4.0 x16 link.
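To put that lopsidedness in numbers, here's a quick sketch using only the peak TFLOPS figures quoted above (theoretical peaks, not measured throughput):

```python
# Ratio of peak FP64 throughput, MI300X vs. RTX 4090,
# using the peak TFLOPS figures quoted above (theoretical, not measured).
rtx4090_fp64_vec = 1.3    # RTX 4090 peak FP64 vector TFLOPS
mi300x_fp64_vec = 81.7    # MI300X peak FP64 vector TFLOPS
mi300x_fp64_mat = 163.4   # MI300X peak FP64 matrix/tensor TFLOPS

print(f"vector: {mi300x_fp64_vec / rtx4090_fp64_vec:.0f}x")  # ~63x
print(f"matrix: {mi300x_fp64_mat / rtx4090_fp64_vec:.0f}x")  # ~126x
```

So even at the "only eight times the price" figure, the MI300X delivers on the order of 60x the peak FP64 vector throughput, which is exactly the kind of disparity a consumer-grade OpenCL benchmark never surfaces.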

FWIW, the benchmark already has a few Nvidia Quadro and other server cards in the ranked listing. This isn't the first time a server-level GPU has been spotted, though.
Quadro is workstation, not server. The server products used to feature the Tesla branding, but Nvidia has since dropped both Quadro and Tesla. So, one easy way to tell is that the server products are passively-cooled. Workstation products have fans (usually blowers).

Funny how you use the word "only", as if the MI300X is some cheap and affordable consumer product.
I think "only" was being used ironically, in that context, which is something I've seen/heard done on occasion. As we know, the RTX 4090 itself is already priced nearly outside the consumer bracket.
 
Last edited:

bit_user

Titan
Ambassador
The only reason to buy an AMD product on enterprise side is... you cannot wait for an Nvidia product.
Sadly, I think there's some truth to this point. I'm sure a lot of AMD's MI300X orders have come from AI customers who simply can't tolerate Nvidia's waiting list. I'm pretty sure orders of Blackwell are booked well into next year!

The fact remains that MI300X outperforms H200, in many respects. So, at least AMD finally has a (potentially) viable alternative on the market, unlike the situation we had for so long.
 
  • Like
Reactions: Amdlova