News AVX-512 Makes Ryzen 9 7950X Geekbench 5 Results Look Good — Too Good

Arbie

Distinguished
Oct 8, 2007
208
65
18,760
Disregarding the Crypto / AVX-512 results you show, Zen 4 still has the overall performance gains that AMD was touting - right? If so, you should point that out. Otherwise the reader has to wonder, or go find out, whether AMD used Geekbench.

You have a fair technical point re @Benchleaks results, but if read quickly your article comes across as challenging all the claims so far.
 
  • Like
Reactions: prtskg

JamesJones44

Reputable
Jan 22, 2021
867
809
5,760
An important point not made here: this means Geekbench results for Intel chips with AVX-512 were also inappropriately highly ranked compared to real-world performance and compared to previous AMD chips, and this has been going on for years.

right? Right?

Yes, except for Alder Lake, which doesn't enable AVX-512 by default and has it fully disabled on the latest steppings.
 

quadibloc2

Reputable
Oct 17, 2019
7
5
4,515
It is true that, at present, not a lot of programs can use AVX-512 instructions. However, now that a (soon-to-be) widely available processor has them, won't that number increase? What puzzles me is that, since the new Ryzen chips use a 256-bit floating-point unit for AVX-512 instructions, I thought the performance uplift of AVX-512 over the former 256-bit vectors would be very limited.
The Geekbench results don't reflect the performance increase users will see right away, but the situation is likely to improve; otherwise, it would be like saying of the original Ryzens that having eight cores was meaningless because real software didn't try to use more than four. That was true, but it changed quickly.
Of course, Intel chips are likely to have AVX-512 again soon, and their implementations may be superior to AMD's. (Isn't competition wonderful?)
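Worth adding: even with a 256-bit datapath, AVX-512 as an instruction set brings per-lane mask registers and twice the architectural vector registers, which can cut instruction counts. Here's a minimal intrinsics sketch of that difference; the function names are just illustrative (compile with -mavx512f):

```cpp
#include <immintrin.h>

// AVX-512: a predicated add is one instruction driven by a mask register;
// lanes where the mask bit is 0 simply keep the value of 'a'.
__m512 masked_add_avx512(__m512 a, __m512 b, __mmask16 k) {
    return _mm512_mask_add_ps(a, k, a, b);
}

// AVX2: the same effect takes a full add plus a separate blend, with the
// mask occupying a regular vector register.
__m256 masked_add_avx2(__m256 a, __m256 b, __m256 mask) {
    __m256 sum = _mm256_add_ps(a, b);
    return _mm256_blendv_ps(a, sum, mask);
}
```

So even a double-pumped implementation can win on instruction count and register pressure, just not on raw width.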
 
Aug 31, 2022
1
0
10
This is a really odd article. Where was it when Intel alone had AVX-512? Why does it matter more now that AMD does it, when Intel has done it since 2016? While this may not be the case here, it really seems like a post inspired by the Intel PR machine.
 

shady28

Distinguished
Jan 29, 2007
447
322
19,090
This is a really odd article. Where was it when Intel alone had AVX-512? Why does it matter more now that AMD does it, when Intel has done it since 2016? While this may not be the case here, it really seems like a post inspired by the Intel PR machine.

It's in the article below regarding AVX on Rocket Lake.

Maybe you should just delete that post.

"In a nutshell, you shouldn't trust Geekbench 5's overall scores as an accurate measure of Rocket Lake's performance, and there's a technical reason why. We've encountered strange phenomenons with Geekbench 5, where its use of AVX-512 can widely skew the results in the encryption subtest. "

https://www.tomshardware.com/news/i9-11900k-geekbench-4
 
Aug 31, 2022
1
1
10
Intel hampered AVX-512 adoption in three ways:

1. They put it in a small selection of mostly server CPUs, so it was hard to get practice writing code.

2. Each generation supports a subtly different instruction set (see the probing sketch below).

3. The entire CPU slows down if any AVX-512 instructions are being executed.

I hope AMD standardizes on a single version across all Zen 4 chips, and ideally it runs without slowing other programs. Then AVX-512 can get more widespread acceptance.
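On point 2: an AVX-512 binary can't assume anything from a single flag, since each subset (F, VL, BW, DQ, CD, and so on) has its own CPUID bit. A minimal probing sketch using the GCC/Clang __builtin_cpu_supports builtin:

```cpp
#include <cstdio>

int main() {
    // Every AVX-512 subset carries its own CPUID flag, so each one must
    // be checked separately before dispatching to code that uses it.
    std::printf("avx512f:  %d\n", __builtin_cpu_supports("avx512f"));
    std::printf("avx512vl: %d\n", __builtin_cpu_supports("avx512vl"));
    std::printf("avx512bw: %d\n", __builtin_cpu_supports("avx512bw"));
    std::printf("avx512dq: %d\n", __builtin_cpu_supports("avx512dq"));
    std::printf("avx512cd: %d\n", __builtin_cpu_supports("avx512cd"));
    return 0;
}
```

If Zen 4 ships one consistent superset across the whole range, that dispatch logic gets much simpler.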
 
  • Like
Reactions: Thunder64

cfbcfb

Reputable
Jan 17, 2020
96
58
4,610
Ah. I had noted the odd use of the Geekbench results, since that's not one of the two or three commonly used CPU benchmarks.

Now we know why. Apparently the desire to bury Intel intruded well into the land of make-believe?

Shame, because the story was already good, but from the start you could easily see the push-pull between the "honest" 15% alleged performance gains and the "complete boolsheet" 40% that some of the marketing folks wanted to push.
 
  • Like
Reactions: KyaraM

PiranhaTech

Reputable
Mar 20, 2021
136
86
4,660
I saw a few AVX-512 benchmarks vs CUDA, and the two were pretty neck-and-neck. If AVX-512 was faster, it wasn't significantly faster. There are always edge cases in software, so there are probably Apple-like benchmarks for AVX-512 out there. There also seems to be hope for PS3 emulation.

For a gamer, you probably have a GPU already, and GPUs are designed for games. Now, you might be able to offload work from the GPU to AVX-512, but that's another code path and another layer of complexity.

I hope AMD standardizes on a single version across all Zen 4 chips, and ideally it runs without slowing other programs. Then AVX-512 can get more widespread acceptance.
It sounds like AMD's implementation doesn't slow down other programs, or at least maybe not as much as Intel's.

One thing I remember about SIMD in the past was that to use it, the CPU had to flush the pipeline. Sometimes it was faster to just not use SIMD for that reason, but that was, I think, based on Intel NetBurst, which had a 20-stage pipeline. No idea if they ever fixed that.
 

SunMaster

Respectable
Apr 19, 2022
220
200
1,960
Perhaps this was needed to remove the hyperinflated importance of AVX-512, which has favoured Intel for a long time.

Funny how the coin has flipped with regard to AVX-512, which Intel's current desktop CPUs are no longer capable of.
 
Intel hampered AVX-512 adoption in three ways:

1. They put it in a small selection of mostly server CPUs, so it was hard to get practice writing code.

2. Each generation supports a subtly different instruction set.

3. The entire CPU slows down if any AVX-512 instructions are being executed.

I hope AMD standardizes on a single version across all Zen 4 chips, and ideally it runs without slowing other programs. Then AVX-512 can get more widespread acceptance.
Considering AMD went full steam with their chiplet approach, all their CPUs use the exact same silicon, so it stands to reason that their AVX-512 is the same across the range. Also, it doesn't use dedicated 512-bit silicon but actually uses two existing 256-bit pipelines in parallel. It's probably slower than Intel's, but at the same time it doesn't cause clock-speed drops, so it should be competitive.
 
  • Like
Reactions: Nick_C
It's probably slower than Intel's, but at the same time it doesn't cause clock-speed drops, so it should be competitive.
But wasn't that purely because of the much higher power draw of AVX-512? If you could provide enough power and cooling, you didn't have to use an AVX offset on the early generations that had it, and in later generations the AVX offset wasn't even a thing anymore; the CPUs could handle the added power and heat from AVX.

I doubt that physics will change for AMD; AVX will suck up a ton of power for AMD as well. Zen 3 loses a lot of clocks just running normal workloads on all cores, not even AVX. Now Zen 4 has a much higher TDP, and that might be there just for AVX. We will have to see.
 

KyaraM

Admirable
An important point not made here: this means Geekbench results for Intel chips with AVX-512 were also inappropriately highly ranked compared to real-world performance and compared to previous AMD chips, and this has been going on for years.

right? Right?
The article quite literally points that out, so what's the point? Unless you didn't even read the article...

Unfortunately, this has been an ongoing issue with some subtests — we saw this exact issue with the Core i9-11900's Geekbench results. But with the introduction of AVX-512 on newer processors — including Intel's 11th Gen Rocket Lake platform, these wide performance deviations between workloads have become much more prominent — making general performance estimations difficult with some synthetic benchmarks.
Alder Lake doesn't have AVX-512 enabled, btw, so don't even think about commenting on those.
 

Alpha_Lyrae

Reputable
Nov 13, 2021
28
26
4,560
This article is completely misleading, as we don't know how GB5 weights the scores, but given the 2217 single and 24396 multi, it's safe to say that crypto is not highly weighted in the overall score.

Integer and floating point have the highest weights in the score calculation, so this is just garbage-level "reporting." I'd label this as misinformed opinion.
 

KyaraM

Admirable
This article is completely misleading, as we don't know how GB5 weights the scores, but given the 2217 single and 24396 multi, it's safe to say that crypto is not highly weighted in the overall score.

Integer and floating point have the highest weights in the score calculation, so this is just garbage-level "reporting." I'd label this as misinformed opinion.
Integer 65%
Floating point 30%
Crypto 5%
https://www.rtings.com/laptop/tests/performance/geekbench-5
I calculated the results manually and this seems to be correct; at least, it's a perfect fit between the scores listed here and what I calculated.

However, those 5% can and do have a big impact. Using the scores listed in the article, if we swap out the 7950X's crypto score for the 5950X's, we land at 2070 rounded up instead of 2217. If we instead apply the same 23% increase the other scores saw between the 5950X and 7950X to the crypto score, we land at around 2116 points for the 7950X; the rest would be AVX-512, if we assume a linear increase when the instruction set is disabled. That is a pretty huge difference of 101 points from AVX-512 alone, and not that overwhelmingly better than the 2040 points AMD reported for the 12900K in their own test, especially considering how many 12900K results reach far, far higher scores than that. It does demonstrate the impact of just one outlier, though, even if it's a lowly weighted one.
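To make that arithmetic concrete, here's a minimal sketch of the weighted-sum calculation described above; the subscores are hypothetical stand-ins, not the actual 7950X or 5950X numbers:

```cpp
#include <cstdio>

// Weighted combination using the 65/30/5 split cited above (per rtings.com).
double gb5_overall(double integer, double fp, double crypto) {
    return 0.65 * integer + 0.30 * fp + 0.05 * crypto;
}

int main() {
    // Same integer/FP subscores; only the crypto subscore changes, from an
    // AVX-512-inflated value to a plausible non-AVX-512 one.
    std::printf("inflated crypto: %.0f\n", gb5_overall(2100, 2200, 4700)); // ~2260
    std::printf("modest crypto:   %.0f\n", gb5_overall(2100, 2200, 1750)); // ~2113
    return 0;
}
```

Even at a 5% weight, a roughly 3x swing in one subscore moves the overall result by about 150 points, the same order as the gap calculated above.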
 
  • Like
Reactions: shady28

salgado18

Distinguished
Feb 12, 2007
981
439
19,370
But wasn't that purely because of the much higher power draw of AVX-512? If you could provide enough power and cooling, you didn't have to use an AVX offset on the early generations that had it, and in later generations the AVX offset wasn't even a thing anymore; the CPUs could handle the added power and heat from AVX.

I doubt that physics will change for AMD; AVX will suck up a ton of power for AMD as well. Zen 3 loses a lot of clocks just running normal workloads on all cores, not even AVX. Now Zen 4 has a much higher TDP, and that might be there just for AVX. We will have to see.
It's definitely not just for AVX, as all the clocks (base and boost) increased a lot. But the new thermal limits will probably let it keep high clocks during AVX, also thanks to the improved efficiency of the newer node.

I believe Intel didn't like AVX-512 and removed it because their CPUs are too power-hungry for it, and they couldn't just add it to their E-cores. Let reviews tell the whole story, but if that's the case for both, it's a win for AMD.
 
Sep 3, 2022
1
0
10
A new Geekbench 5 benchmark shows very impressive scores for the Ryzen 9 7950X, especially as compared to its predecessor — but there's more to the story.

AVX-512 Makes Ryzen 9 7950X Geekbench 5 Results Look Good — Too Good : Read more
Why does this article come across as if something is wrong with the 7950X performance claims, of which Geekbench 5 was a small part?
This mentality of fixing benchmarks risks UserBenchmark-style tweaking, where AMD isn't allowed high scores, only Intel is, and they recommend i3 quad-core upgrades. Perhaps the headline "Too Good" was intended to attract fanbois seeking salve for their butt-hurt outrage about that i9 gaming benchmark?

Geekbench could use a geometric mean for their single number, rather than addition, to reduce the sensitivity to outlier values. But looking at the broken-down numbers, they're in line with AMD's claims for IPC, ST, and MT performance gains; Zen already performed well at AES, so improvements in real-world crypto are an achievement, and the fact that AVX-512 is the reason is useful info. A balanced article would remind people which other processors benefited from Geekbench scores.
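On the geometric mean point, a quick sketch of why it tames outliers; the three section scores here are hypothetical, with one AVX-512-inflated outlier:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Three made-up section scores; the last one is the inflated outlier.
    const double scores[3] = {2100, 2200, 4700};

    double arith = 0.0, logsum = 0.0;
    for (double v : scores) {
        arith  += v / 3.0;            // additive (arithmetic) mean
        logsum += std::log(v) / 3.0;  // log-domain sum for the geometric mean
    }

    // The outlier drags the additive mean up much harder than the
    // geometric one.
    std::printf("arithmetic mean: %.0f\n", arith);             // ~3000
    std::printf("geometric mean:  %.0f\n", std::exp(logsum));  // ~2790
    return 0;
}
```

A weighted version of the same idea would keep the integer and FP emphasis while still damping a single runaway subtest.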

Choice of benchmark is not easy: if you use ones that show the performance improvements provided by new processors and roll them up into a single score, there will always be unfairness. Enabling SAM/ReBAR is needed in GPU benchmarks to push hardware and software vendors into supporting beneficial features. The significance of benchmarks shows where the current creator-biased MT benchmarks slant CPUs towards embarrassingly parallel, cache-light workloads. We've seen Intel apparently tune their CPU design for those workloads, making a mockery of "Efficiency cores" as they've been pushed up the power wall and are NOT about power savings but about scoring points in the common MT benchmarks.
 

steve15180

Distinguished
Dec 31, 2007
40
25
18,535
I've never understood why anyone gives Geekbench any credit. They have a history of tuning the benchmark to get the results they think are "right." Does anyone remember how Geekbench treated the first Ryzen processors? After AMD swept the top spots in the benchmark, they announced that they gave too much weight to "multi-core workloads." As a result, Intel was back on top a few months later. They put AVX-512 support in when Intel put it in, knowing that AMD did not have support, and now someone is crying that the AMD results make the AMD chip look "too good." Why were the Intel chips not listed as "too good" when it was put in? Better yet, why not just drop Geekbench altogether, since the results are far from accurate in showing "real world" performance?
 

KyaraM

Admirable
I've never understood why anyone gives Geekbench any credit. They have a history of tuning the benchmark to get the results they think are "right." Does anyone remember how Geekbench treated the first Ryzen processors? After AMD swept the top spots in the benchmark, they announced that they gave too much weight to "multi-core workloads." As a result, Intel was back on top a few months later. They put AVX-512 support in when Intel put it in, knowing that AMD did not have support, and now someone is crying that the AMD results make the AMD chip look "too good." Why were the Intel chips not listed as "too good" when it was put in? Better yet, why not just drop Geekbench altogether, since the results are far from accurate in showing "real world" performance?
With comments like this I always wonder if people even read the articles, because this one literally alludes to that and links to an article where exactly that was mentioned and explained:

https://www.tomshardware.com/news/i9-11900k-geekbench-4

In a nutshell, you shouldn't trust Geekbench 5's overall scores as an accurate measure of Rocket Lake's performance, and there's a technical reason why. We've encountered strange phenomenons with Geekbench 5, where its use of AVX-512 can widely skew the results in the encryption subtest. In turn, this inflates Rocket Lake's overall Geekbench 5 scores against all other processors that don't support AVX-512. This can lead to an inaccurate picture that makes Rocket Lake appear better in relation to AMD's competing chips, not to mention Intel's previous-gen models.

And I found similar arguments on other websites and in hardware magazines, so it's not like nobody ever complained when it was done in Intel's favor. But I guess that's too much to grasp for the fanboys, who I'm sure were all over it mocking it endlessly back then, and now celebrate...