News The Zen 5 Gaming postmortem: Larger generational gains than many reported, game-boosting Windows Update tested, Ryzen 5 7600X3D gaming benchmarks, too

bourgeoisdude

Distinguished
Dec 15, 2005
1,246
38
19,320
Interesting. I would be interested to understand why your results are so different from, say, Hardware Unboxed's. Are the non-canned benchmark sections of the games that much different? This really is an interesting topic.
 

Pierce2623

Prominent
Dec 3, 2023
485
368
560
In this article I’m seeing a clear 10%+ improvement in frame rates from pure CPU. Those are good single-core gains in 2024. Arrow Lake will fall in the same area but with slightly higher power use, I imagine.
 

jxdking

Prominent
Mar 6, 2023
8
5
515
What memory kit did you use in the benchmark?
There would be a big difference between 6000 CL30 and 6000 CL36.
 
Interesting. I would be interested to understand why your results are so different from, say, Hardware Unboxed's. Are the non-canned benchmark sections of the games that much different? This really is an interesting topic.
Game selection and testing areas can and will make a difference. Even in games where we might overlap HWUB, if we test a different area, or the same area using a different path, that will result in changes in performance.

We can't speak to precisely how HWUB tests (I haven't looked for their latest data), but using Expo as noted has a non-trivial impact on performance, and the same goes for things like Windows updates, game drivers, etc. There are also differences in motherboards, memory kits, and graphics cards; even the exact same model of GPU (e.g. an Asus ROG Strix 4090 OC) can vary by maybe 1~2 percent across individual units.

It's relatively easy to cause up to a ~5% swing in overall performance differences by changing the games, settings, and benchmark sequences used for testing. And that's fine, that's why places test with multiple games and different places use different test suites. It's all potentially useful data.
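To illustrate the point (purely hypothetical numbers, not the article's data): the aggregate gain is typically a geometric mean of per-game uplifts, so which titles make the cut moves the headline figure quite a bit. A minimal sketch:

```python
# Hypothetical per-game uplifts (1.02 = +2%); these are invented values, not measured data.
from math import prod

uplifts = {
    "game_a": 1.02,
    "game_b": 1.04,
    "game_c": 1.15,  # a heavily CPU-bound title
    "game_d": 1.01,
    "game_e": 1.12,
}

def geomean_gain(selection):
    """Geometric mean uplift across the selected games, returned as a percentage."""
    vals = [uplifts[g] for g in selection]
    return (prod(vals) ** (1 / len(vals)) - 1) * 100

# The same two CPUs, two different test suites, noticeably different headline numbers.
print(f"Suite 1: {geomean_gain(['game_a', 'game_b', 'game_d']):.1f}%")  # roughly 2%
print(f"Suite 2: {geomean_gain(['game_b', 'game_c', 'game_e']):.1f}%")  # roughly 10%
```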
 

vijosef

Upstanding
Feb 26, 2024
111
113
260
The fonts on the charts are unnecessarily tiny.
They are illegible.

The name of each processor could be written on the right side of the axis, inside the bar. That would free up space to make the text larger.
 
  • Like
Reactions: P1nky and NinoPino

BoredErica

Distinguished
Aug 17, 2007
155
8
18,685
Normally CPU reviews are done without overclocked RAM, but in the CPU hierarchy it looks like overclocked RAM is used. So are tests run twice, then?
 
Oh, this is some interesting new information I didn't have before: "Notably, Intel does not do this." in the context of "recommending" using XMP. I had the solid impression Intel always bundled fast RAM in their review kits, but it seems that's not the case? If so, kudos to them on this specific aspect.

And, obviously, thanks for the great data and additional information around the performance characteristics of your testing. I always like it when reviewers can step away from the "enthusiast" edge a bit and get things closer to how an OEM system would actually ship. Now we just need for all reviewers to also test with TDP enforced limits on Intel like most OEM systems ship with (looking at you, Dell).

Regards.
 

Jagar123

Prominent
Dec 28, 2022
73
102
710
Great write up. I appreciate the additional wealth of testing you put into this. That took a lot of work I am sure.

Your numbers are still higher than others' numbers, though. Maybe you were blessed with exceedingly good silicon?

For gaming though, Zen 5 still doesn't seem like a great generational uplift compared to past gen on gen Ryzen releases (barring the Zen+ release of course).

I will continue to hold out for the 9800X3D part and Arrow Lake before I decide what to buy. I imagine the 7800X3D may still be the best bang for the buck option at that time, but we'll see.
 
  • Like
Reactions: Heiro78
I really appreciate the extensive writeup regarding testing practices and the direct comparisons between parts. This sort of information is imperative for understanding results and while some of us may already have known the potential impact having it spelled out is for the best.
 
  • Like
Reactions: Heiro78

Pierce2623

Prominent
Dec 3, 2023
485
368
560
Oh, this is some interesting new information I didn't have before: "Notably, Intel does not do this." in the context of "recommending" using XMP. I had the solid impression Intel always bundled fast RAM in their review kits, but it seems that's not the case? If so, kudos to them on this specific aspect.

And, obviously, thanks for the great data and additional information around the performance characteristics of your testing. I always like it when reviewers can step away from the "enthusiast" edge a bit and get things closer to how an OEM system would actually ship. Now we just need for all reviewers to also test with TDP enforced limits on Intel like most OEM systems ship with (looking at you, Dell).

Regards.
Intel sends them a tray CPU and a “reviewer’s guide” explaining that if they paint Intel in a negative light there will be no more samples. That’s the review kit. There’s no RAM involved. Most reviewers just test Intel with faster RAM so they don’t lose access to review samples.
 
  • Like
Reactions: anoldnewb
Mar 10, 2020
420
385
5,070
It's relatively easy to cause up to a ~5% swing in overall performance differences by changing the games, settings, and benchmark sequences used for testing. And that's fine, that's why places test with multiple games and different places use different test suites. It's all potentially useful data.
Up to a point, it is possible to prove anything with the choice of benchmarks. Want to make something look bad? Choose X. Want to make it look good? Choose Y and tweak the settings a little.
The answer(?) is to look at as many reviews as possible and note the differences, look for the outliers, look for the reviews that reflect your use case, and don’t believe any single review, no matter how reputable the outlet may be.
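As a hedged illustration of that advice (invented numbers and hypothetical outlet names, not real review data):

```python
# Compare the same claimed gen-on-gen uplift across several hypothetical outlets and flag outliers.
from statistics import median

gains = {"outlet_a": 3.5, "outlet_b": 5.0, "outlet_c": 4.2, "outlet_d": 11.0}  # % uplift, invented

mid = median(gains.values())
for outlet, gain in gains.items():
    note = "  <-- outlier, check the methodology and game selection" if abs(gain - mid) > 3.0 else ""
    print(f"{outlet}: {gain:.1f}%{note}")
print(f"median across outlets: {mid:.1f}%")
```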
 
Interesting. I would be interested to understand why your results are so different from, say, Hardware Unboxed's. Are the non-canned benchmark sections of the games that much different? This really is an interesting topic.
In addition to Jarred's explanation above:

Paul explained this in the text of this update (HUB uses DDR5-6000 CL30-38-38-96 DRAM for all of their AM5 testing):
When we engage Expo memory overclocking for both chips those gains drop to 6.6% and 7.8%, respectively, because overclocked DDR5-6000 Expo memory is the sweet spot for both chips. This at least partly erases Zen 5’s memory speed advantage (Zen 5 supports DDR5-5600, while Zen 4 only supports DDR5-5200), with a 2.7% loss on the 9700X and a 2.2% loss on the 9600X.

Most reviewers test with Expo overclocked memory as the default stock configuration, which is partially the result of AMD’s somewhat misleading marketing practices. We test at true stock memory settings because AMD does not officially cover memory overclocking under its warranty — it is not the official spec — yet the company uses overclocked memory for its marketing materials and encourages reviewers to test with overclocked memory - even the comparison benchmarks in the reviewer's guides use overclocked memory. Notably, Intel does not do this.
and also this portion (HUB centers every CPU game test on a heavily CPU-bound section of each game):
Here are the results when we split out in-game versus built-in benchmarks into their own categories, and a measurement with both types of benchmarks combined in the first column.

The second column in the table covers the percentage gain for eight built-in benchmarks only, while the third column shows performance with only the 11 in-game benchmarks. Here we can see that the built-in benchmarks yield from 2.1 to 3.5 percentage points of higher generational performance than the in-game tests. Using only built-in benchmarks, we see a gain of only 5.6% and 7.7% for the 9700X and 9600X, respectively, after memory overclocking. This is at least some uplift when testing the chips with overclocked memory, but AMD’s chips still lose up to three percentage points of generational gains compared to performance at true stock memory settings. It also implies that Zen 5 is memory-bound in some regards.
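For anyone tripping over the percent vs. percentage-point wording above, here is a minimal worked sketch with purely illustrative FPS averages (not the article's measurements):

```python
# Illustrative FPS averages only; not data from the review.
def pct_gain(new_fps, old_fps):
    """Generational gain of the new chip over the old one, in percent."""
    return (new_fps / old_fps - 1) * 100

stock_gain = pct_gain(110.0, 100.0)  # e.g. 10.0% with both chips at stock memory settings
expo_gain = pct_gain(113.0, 106.0)   # shrinks to ~6.6% once both chips run EXPO DDR5-6000

# The gap between the two gains is expressed in percentage points, not percent.
print(f"stock: {stock_gain:.1f}%, EXPO: {expo_gain:.1f}%, "
      f"loss: {stock_gain - expo_gain:.1f} percentage points")
```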
 

DavidLejdar

Respectable
Sep 11, 2022
284
178
1,860
Certainly some interesting topics there. Since there is talk about the particularities of testing, would it be an idea to take the top-performing CPUs from both vendors and pair them with the most-owned GPU classes?

For example, according to the Steam survey, owners of the RTX 2060/3060/4060 make up about 12% of all users, plus the Ti versions. So: benchmark an RTX 4060 with a set of CPUs. And if a few older CPUs are thrown in, it could even be checked how large the difference is when pairing the RTX 4060 with current-gen CPUs versus, say, Intel's 9th-gen flagship.

There is likely already some data from benchmarking GPUs, presumably with some fixed CPU. With more data, there could be a data set for comparing the FPS of CPUs in circumstances that are more common for most gamers.

And it would seem a nice data set to read from when deciding whether a higher-tier CPU upgrade is worth it, i.e. how many FPS one gains by spending $150 more on the CPU.
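A rough sketch of that value calculation, using entirely invented FPS and price figures:

```python
# Back-of-the-envelope "is the pricier CPU worth it?" math; example numbers are invented.
def fps_per_extra_dollar(fps_cheap, fps_pricey, price_cheap, price_pricey):
    """Average FPS gained per extra dollar spent on the faster CPU."""
    return (fps_pricey - fps_cheap) / (price_pricey - price_cheap)

# Hypothetical: a $229 CPU averaging 142 FPS vs. a $379 CPU averaging 155 FPS on the same RTX 4060.
print(f"{fps_per_extra_dollar(142, 155, 229, 379):.2f} FPS per extra dollar")  # ~0.09
```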

On a somewhat different topic: I was planning to sell my 7600X cheap once I upgrade. But I wouldn't want to sell a diminished product, in case it was affected by the Expo issue (on AM5). Is there a way to test whether the CPU is still at 100% with regard to the mentioned issue?
Great write up. I appreciate the additional wealth of testing you put into this. That took a lot of work I am sure.

Your numbers are still higher than others' numbers, though. Maybe you were blessed with exceedingly good silicon?

For gaming though, Zen 5 still doesn't seem like a great generational uplift compared to past gen on gen Ryzen releases (barring the Zen+ release of course).

I will continue to hold out for the 9800X3D part and Arrow Lake before I decide what to buy. I imagine the 7800X3D may still be the best bang for the buck option at that time, but we'll see.
I don't know about generational uplift. But I upgraded to Zen 5 from an Intel 4th gen DDR3 rig, and I can't complain. :)

And of the two, I will likely upgrade to a 9800X3D. If it pushes performance on the productivity side a bit, that's quite an argument on its own, even if the FPS difference versus the 7800X3D may not look like much. And with the "gen to next gen and class" upgrade I'd be doing there, even if the 9800X3D only gains 5% over the prior gen, that's still 5% on top of the other performance gains.
 
There is likely already some data from benchmarking GPUs, presumably with some fixed CPU. And if a few older CPUs are thrown in, it could even be checked how large the difference is when pairing the RTX 4060 with current-gen CPUs versus, say, Intel's 9th-gen flagship.
RTX 2080 is ~RTX 4060 in performance:
https://www.tomshardware.com/pc-components/gpus/cpu-vs-gpu-upgrade-benchmarks-testing
 

Mattzun

Reputable
Oct 7, 2021
101
155
4,760
I'd still love to see whether there is a difference between these processors at 1440p on a 4070 Super or at 4K on a 4090.

I'm expecting that there is no real difference for average FPS, but there might be a noticeable difference on 1 percent lows.
 

jlake3

Distinguished
Jul 9, 2014
138
202
18,960
Certainly some interesting topics there. Since there is talk about the particularities of testing, would it be an idea to take the top-performing CPUs from both vendors and pair them with the most-owned GPU classes?

For example, according to the Steam survey, owners of the RTX 2060/3060/4060 make up about 12% of all users, plus the Ti versions. So: benchmark an RTX 4060 with a set of CPUs. And if a few older CPUs are thrown in, it could even be checked how large the difference is when pairing the RTX 4060 with current-gen CPUs versus, say, Intel's 9th-gen flagship.

There is likely already some data from benchmarking GPUs, presumably with some fixed CPU. With more data, there could be a data set for comparing the FPS of CPUs in circumstances that are more common for most gamers.

And it would seem a nice data set to read from when deciding whether a higher-tier CPU upgrade is worth it, i.e. how many FPS one gains by spending $150 more on the CPU.
If you have a CPU comparison with as little bottlenecking as possible and a GPU comparison with as little bottlenecking as possible, you can sort of guesstimate a balanced system by finding the intersection of the two. You can also judge whether an upgrade would be warranted by looking at what each component scores when tested in isolation and whether one would bottleneck the other.
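In effect, that guesstimate amounts to taking the lower of the two isolated results; a minimal sketch with placeholder numbers:

```python
# Placeholder numbers only: estimate delivered FPS as the lower of what the CPU can feed
# (from a CPU-limited test) and what the GPU can render (from a GPU-limited test).
def estimated_fps(cpu_capable_fps, gpu_capable_fps):
    return min(cpu_capable_fps, gpu_capable_fps)

cpu_fps = 170.0  # hypothetical: CPU measured with a 4090 at 1080p to minimize the GPU limit
gpu_fps = 105.0  # hypothetical: GPU measured with a top CPU at the target resolution/settings
bottleneck = "GPU" if gpu_fps < cpu_fps else "CPU"
print(f"Estimated system FPS: {estimated_fps(cpu_fps, gpu_fps):.0f} (bottleneck: {bottleneck})")
```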
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
What am I missing? The chart shows a 6.5% performance increase between 7700x and 9700x. How is this larger than others have reported? Looks very similar to TPU results.