News Leak indicates AMD Ryzen 9000X3D series CPU gaming performance will disappoint

There's a meaningful benefit, just not one worthy of a single-gen upgrade. Not sure why you keep repeating that.
I keep saying it because it's the truth. Try looking at a broader dataset before drawing your conclusions:

https://www.techspot.com/review/2888-ryzen-9700-vs-7700-windows-24h2/
What about the Windows 24H2 build? Across the 43 games tested, the 9700X was, on average, just 2% faster – 1% less than what we found in our day-one review.

So after all the excitement and claims of 5-8%, even 9% gains from AMD, the reality seems to be closer to 2%, a single-digit margin. This is essentially what most tech media outlets around the world reported, except for a few outliers who clearly had issues with their testing.
 
I keep saying it because it's the truth. Try looking at a broader dataset before drawing your conclusions:

https://www.techspot.com/review/2888-ryzen-9700-vs-7700-windows-24h2/
First, how many of those titles are GPU-bottlenecked, though? We don't know, because they didn't include any slower or faster CPUs that would help us see how close to the GPU limit the 7700X was already running. The ones that are GPU-bottlenecked drag down the average. I'm fine with using a narrower game selection, if it avoids titles that are GPU-bottlenecked. If you're trying to judge the CPU on technical merits or predict performance in future games, that would be a better comparison.

Second, you're making a categorical statement based on data which excludes EXPO and PBO.
 
This is misleading on a couple of points. First of all, Haswell didn't speed up everything by double digits. AnandTech said some things improved by as little as "low single digits". Where they got the bigger gains was surely from things that could utilize AVX2.

Second, you're fond of touting the i7-4790K, but it's basically just a factory-overclocked i7-4770K. And if you're counting that, then Skylake only improved over it by 5%. I'd argue we should count it, because that's what Intel did instead of a proper desktop release of Broadwell. If you count neither Broadwell nor Haswell Refresh, then you're skipping a generation and measuring the performance difference across two generations.

This revisionist history is something of a new turn for you.
Generations are meaningless; time to market is what matters. Zen 5 came two years after Zen 4 and the results are... well, not great. Zen 5 didn't improve everything by double digits over Zen 4 either.

Fact of the matter is that we got bigger year-on-year performance increases (at similar prices, obviously) back in the Intel 4-core era than we do nowadays. Especially when it comes to MT performance: since AMD decided to go from $199 for 6 cores to $299, we basically got less MT performance at the same price from Zen 2 to Zen 3, lol.

Of course the internet, being extremely objective as usual, is going to complain repeatedly about the Intel 4-core era but completely skip over the 7 years of 6-core chips we've been getting from AMD, at ever-increasing prices to boot.

Frankly I don't mind it; since people keep buying these utter garbage 6- and 8-core AMD chips, I can enjoy better prices for those big Intel chips with twice the number of cores. It's a win for me, so keep at it, lads.
 
Lower res is always harder on the CPU; that's how it works: more frames, more work. You hit 4K and in most, not all, games you push the bottleneck to the GPU. HalfCharlie isn't totally correct, but they aren't completely wrong either in regard to higher-res gaming on older CPUs. I ran an Ivy Bridge 8-core Xeon OC'd to 4.3GHz until the 7950X came out, and I was gaming at 4K at 90-120 fps in most games. People really don't realize how much you can shift the bottleneck to the GPU if you push the settings and resolution high enough. Unless you're truly competitive, 90 fps+ is a completely playable/enjoyable experience.
Shifting the bottleneck to the GPU doesn't help you with anything. Neither does playing at 4K. If your CPU can't produce the frames you want at 1080p, it won't produce them at any resolution with any graphics card.
 
First, how many of those titles are GPU-bottlenecked, though? We don't know, because they didn't include any slower or faster CPUs that would help us see how close to the GPU limit the 7700X was already running. The ones that are GPU-bottlenecked drag down the average. I'm fine with using a narrower game selection, if it avoids titles that are GPU-bottlenecked. If you're trying to judge the CPU on technical merits or predict performance in future games, that would be a better comparison.
Generally speaking, they don't use GPU-bottlenecked titles. As it is, they purposely pick a broad range, so even if there were a few, it wouldn't mess up the average; hence the conclusion pointing out that their CPU review (a more focused 13 or so games) was only 3% apart, so basically margin of error between the two tests.

The Tom's piece you cited talked about using canned benchmarks versus custom, and when you look at the canned ones, they show higher performance than the custom runs. You can see the situation first-hand in CP2077 and Hitman, since they have built-in and custom results for each. HUB/TechSpot use custom benchmarks in every case where they've identified canned benchmarks as being GPU-heavy.
Second, you're making a categorical statement based on data which excludes EXPO and PBO.
They always use DDR5-6000 CL30 for AMD testing, but rarely list the system specs on the comparison pieces. I've never quite figured out why they don't, but I suppose one can always reference the corresponding CPU review.

As for PBO, it of course has its place, but unfortunately nobody tends to actually test CPUs against each other that way. TPU is the closest it gets, but they don't do PBO retests, which makes comparing between parts hard.
 
Does it matter if a CPU achieves 230 fps or 245 fps in a gaming benchmark? What difference does it make in terms of gaming experience?
In the old days, upgrading from a Pentium 60 to a Pentium 120, or from a 386 to a 486 CPU, could make all the difference between a game being playable or not playable.
Nowadays, virtually all CPUs from the last 4-5 (or even more) years are good enough to deliver more than playable framerates in virtually all triple-A games.
A CPU being 10% faster or slower in typical fps-based gaming benchmarks is just marketing and of absolutely no relevance to real-world users.
There are much more important metrics relevant to the gaming experience than just framerates (e.g. microstuttering).
 
Fact of the matter is that we got bigger year-on-year performance increases (at similar prices, obviously) back in the Intel 4-core era than we do nowadays.
No, even launch-day performance had the R9 9950X delivering a 35.3% geomean improvement over the R9 7950X. Show me where Intel did that back in the quad-core era.
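For anyone wondering what "geomean" means there: it's just the geometric mean of the per-test speedup ratios, which keeps one outlier benchmark from dominating the average. A quick Python sketch with made-up scores (not the actual review data), just to show the arithmetic:

import math

# Hypothetical per-benchmark scores (NOT real review data): new CPU vs. old CPU.
new_scores = [231, 118, 54.2, 1450]
old_scores = [176, 92, 41.0, 1020]

# Per-test speedup ratios.
ratios = [n / o for n, o in zip(new_scores, old_scores)]

# Geometric mean = nth root of the product of the ratios.
geomean = math.prod(ratios) ** (1 / len(ratios))

print(f"geomean uplift: {(geomean - 1) * 100:.1f}%")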

Of course the internet, being extremely objective as usual, is going to complain repeatedly about the Intel 4-core era but completely skip over the 7 years of 6-core chips we've been getting from AMD, at ever-increasing prices to boot.
Nobody is defending AMD CPUs from that era! It's not discussed because everyone is in agreement about how bad AMD was before the Zen turnaround.
 
Shifting the bottleneck to the GPU doesn't help you with anything. Neither does playing at 4K. If your CPU can't produce the frames you want at 1080p, it won't produce them at any resolution with any graphics card.
I don't disagree with some of that statement. If a CPU/GPU combo can't push out 120 fps at 1080p, it won't do it at 4K. But if you think creating a GPU bottleneck won't allow older CPUs to handle 4K just fine because they are older, you would be mistaken. Whatever the CPU can run at 720p (or less), it can do at 4K, assuming the GPU is up to it. When I was running my OC'd E5-1680 v2 with my RTX 2080 Ti, I hit over 100 fps at 4K in more games than not, and when I didn't, I was still frequently above 90 fps (or had to use DLSS if the GPU couldn't keep up the pace I wanted). And when using DLSS I could easily see where the bottleneck shifted due to base resolution: I'd no longer get fps increases when I turned things down; they would just flatline. It's extremely easy to test.

Point being, IF you do your in-game/control panel settings correctly, you can absolutely tune the GPU into a bottleneck while maintaining a decent frame rate. I might turn texture quality all the way up at 4K while turning down shadows, draw distance, polygon count, etc., for example, but at the end of the day, more times than not I could find settings that kept me in the 90-100+ fps range with tolerable to decent 1% lows. And again, this was right up till the 7950X/RTX 40 series dropped, so not forever ago, but my old Xeon was a ten-year-old CPU at that point, still punching in the high double to low triple digits... I am not saying a faster CPU wouldn't increase those FPS some, especially the 1% lows, or that running a 30 or 40 series GPU might have been counterproductive to frame rates due to driver overhead, but what I am saying is CPUs are good for much longer than many people realize, particularly new builders. But again, that is assuming you take the time to tune all of your settings and keep bloatware to a minimum to ensure your system fully utilizes its hardware.
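If it helps to picture it, the rough mental model I use (completely made-up, hypothetical fps numbers) is that your frame rate lands at whichever side is slower: the CPU preparing frames or the GPU rendering them at a given resolution.

# Rough mental model, made-up numbers: the slower of CPU and GPU sets the frame rate.
cpu_fps = 110                                      # what an older CPU can prepare; roughly resolution-independent
gpu_fps = {"1080p": 240, "1440p": 160, "4K": 95}   # what the GPU can render at each resolution

for res, gpu in gpu_fps.items():
    effective = min(cpu_fps, gpu)
    limiter = "CPU" if cpu_fps < gpu else "GPU"
    print(f"{res}: ~{effective} fps ({limiter}-limited)")

Turn settings down and the GPU numbers go up, but the CPU line stays put, which is exactly the flatline I was describing.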
 
Generally speaking, they don't use GPU-bottlenecked titles.
A few of them are. It's hard to find other reviews of theirs which use some of the titles, but I found a couple of titles which also maxed out on the i9-14900K and 7800X3D.

As it is, they purposely pick a broad range, so even if there were a few, it wouldn't mess up the average
A broad range is actually worse if it includes GPU-limited titles.

hence the conclusion pointing out that their CPU review (a more focused 13 or so games) was only 3% apart, so basically margin of error between the two tests.
That references their day-one review, which is problematic as we all know.

The Tom's piece you cited talked about using canned benchmarks versus custom
The graph I quoted used a mix of 11 custom scenes + 8 built-in ones. Here's a graph including only the custom ones:

[chart: custom benchmark scenes only]

 
Does it matter if a CPU achieves 230 fps or 245 fps in a gaming benchmark? What difference does it make in terms of gaming experience?
Not if you're only playing that game and never any other games which might be more CPU-bottlenecked. Because newer games tend to be more CPU-intensive than older ones, people buying a CPU they might like to keep for 3-5 years would be well advised to focus on getting one that's not already significantly bottlenecked.

I think it's also common for people to upgrade their GPU more frequently than their CPU. So, even if your system is initially GPU-bottlenecked, it might not always be.
 
Post the slide, Anton

https://videocardz.com/newz/msi-leaks-ryzen-9000x3d-2-to-13-higher-gaming-performance-than-7000x3d
[image: leaked MSI slide 1 - Ryzen 9000X3D gaming performance]

[image: leaked MSI slide 2]


PERFORMANCE IS EXPECTED TO BE BETTER ON PR SAMPLES AND RETAIL CHIPS

I was about to say that, but I see you beat me to it. This is an engineering sample set to lower clock frequencies. This in no way represents the actual product at full performance.
On another note, I did hear that the 9000X3D parts may actually support overclocking this time, so, if true, they may be fun chips to play around with.
 
No, even launch-day performance had the R9 9950X delivering a 35.3% geomean improvement over the R9 7950X. Show me where Intel did that back in the quad-core era.


Nobody is defending AMD CPUs from that era! It's not discussed because everyone is in agreement about how bad AMD was before the Zen turnaround.

Man, I miss the Athlon XP days….
 
Calling a 2-13% increase disappointing, in light of the utter failure of Intel's 200S line regarding gaming and power efficiency, is a bit... awkward, don't you think?

Intel's iterative bread and butter until AMD came along with Zen was what? An IPC gain of 2-4% per generation during their never-ending "hegemony" of 4c/4t and 4c/8t CPUs, beginning with the Core 2 Quad Q6600 and ending with the 7700K and the arrival of the Zen CPUs, where the 1700X and the 8700K broke the ice.

Now an up-to-double-digit generational increase isn't enough? Come on, Tom's Hardware, are you for real? 😒

This!

Imagine the competitor having regressions thanks to their chips blowing up, and then claiming 2% to 12% "sucks".
Almost looks like paid actors trying to save Intel's face.
 
Intel had a double-digit performance increase in almost every gen, if not every gen (Kaby Lake being the exception), from the Core 2 Quad era till Skylake. Even their Haswell Refresh had a bigger gain than Zen 4 to Zen 5.
As someone who has been reading the news and following releases very closely every year: this is a lie.

There were only a very few releases with more than a 10% performance lift.
The best one was when they switched from the NetBurst to the Core architecture.
The rest, especially during the horrible 14nm++++++ fiasco, averaged something like 4% to 6% per generation,
including some regressions or trade-offs (fewer cores vs. higher frequency in the 11th generation).

Generations are meaningless; time to market is what matters. Zen 5 came two years after Zen 4 and the results are... well, not great. Zen 5 didn't improve everything by double digits over Zen 4 either.

Fact of the matter is that we got bigger year-on-year performance increases (at similar prices, obviously) back in the Intel 4-core era than we do nowadays. Especially when it comes to MT performance: since AMD decided to go from $199 for 6 cores to $299, we basically got less MT performance at the same price from Zen 2 to Zen 3, lol.

Of course the internet, being extremely objective as usual, is going to complain repeatedly about the Intel 4-core era but completely skip over the 7 years of 6-core chips we've been getting from AMD, at ever-increasing prices to boot.

Frankly I don't mind it; since people keep buying these utter garbage 6- and 8-core AMD chips, I can enjoy better prices for those big Intel chips with twice the number of cores. It's a win for me, so keep at it, lads.


If my memory serves me correctly, the lowest performance uplift was from Zen 1 to Zen+, which was mostly about fixing some latency issues and RAM compatibility issues; Zen+ just barely clocked higher.
 
Intel had a double-digit performance increase in almost every gen, if not every gen (Kaby Lake being the exception), from the Core 2 Quad era till Skylake. Even their Haswell Refresh had a bigger gain than Zen 4 to Zen 5.
Erm... Nope, not clock for clock. There was a big jump from Core to Core i because of the integrated memory controller, and another big jump from first gen to Sandy Bridge because of the new core... But then, clock for clock, Haswell was pretty much the last time there were significant IPC gains for quite a while, if you take into account the security mitigations for Meltdown and Spectre - the following generations pretty much only gained back the performance lost to those software-only mitigations up until 10th gen, with single-digit per-core IPC improvements from one generation to the next... Granted, Intel went from 4c/8t in 7th gen to 10c/20t in 10th gen, but then there was the "waste of sand" 11th gen, and only 12th gen actually got better IPC again; then 14th gen gained nothing over 13th. So my guess is, "Zen 5 IPC gains are nothing to write home about FOR AMD" - for Intel, it's an incredible gen-on-gen performance improvement.
 
That references their day-one review, which is problematic as we all know.
Not particularly problematic if the difference is a 1% performance increase between the two CPUs using 23H2 vs 24H2.
The graph I quoted used a mix of 11 custom scenes + 8 built-in ones. Here's a graph including only the custom ones:
The point I was making was about the difference between canned and custom within the same title, though. That's why I specifically mentioned the two titles that they listed both for.

Also, you linked the 99th-percentile chart instead of the FPS one, not that it particularly matters.
A broad range is actually worse if it includes GPU-limited titles.
Not if it's, as I said, a few titles; given that they had 45 in the test, that's less than 10%, so it won't skew the results in any meaningful manner. Testing a limited number of games, which could easily amplify the significant differences, would be a much bigger problem than having some results which don't show much difference.
A few of them are. It's hard to find other reviews of theirs which use some of the titles, but I found a couple of titles which also maxed out on the i9-14900K and 7800X3D.
They haven't done much testing with 24H2 due to the cascading hardware releases and the fact that it only officially launched this month. They did a recent update which hasn't landed on the website, so it's only on HUB, but the titles in that one are definitely not GPU-limited (it's a smaller set, 12-14 or something of that sort).

Tom's is still the outlier on Zen 5 gaming performance no matter how you slice it. It's just not as bad as when the original testing didn't include equal memory speeds. There are only two quality outlets with 24H2 gaming results that I'm aware of (KitGuru unfortunately has very limited game testing), and of those two I tend to lean more towards the one which focuses on gaming, has done extensive game testing, and hasn't seen the numbers between the two CPUs shift in any meaningful way. Hopefully by the time Arrow Lake launches everyone will have gone through the arduous task of retesting everything with 24H2 and we'll have more results to compare.
 
The slide is clearly GPU-bottlenecked, with the exception of Far Cry. The differences will be bigger than just 2%. Wukong, for example, is running at 60 fps; clearly that's a GPU limit.
FC6 already favors Zen 5 over Zen 4 to the tune of ~11% according to KitGuru (4080) and ~15% according to Tom's (4090) when comparing the 9700X to the 7700X, so I don't really think you're right about that title being indicative of performance.
 
Lower res is always harder on the CPU; that's how it works: more frames, more work. You hit 4K and in most, not all, games you push the bottleneck to the GPU. HalfCharlie isn't totally correct, but they aren't completely wrong either in regard to higher-res gaming on older CPUs. I ran an Ivy Bridge 8-core Xeon OC'd to 4.3GHz until the 7950X came out, and I was gaming at 4K at 90-120 fps in most games. People really don't realize how much you can shift the bottleneck to the GPU if you push the settings and resolution high enough. Unless you're truly competitive, 90 fps+ is a completely playable/enjoyable experience.
That is kinda my point. Of course, in modern titles at 4K or with RT on, the bottleneck will mostly be on the GPU and the CPU impact is minimal. But the case in point is that when a CPU limitation is there, you can benefit quite a bit from a better gaming CPU. I've seen streams where people with a 14900K and a 4090 running DLSS at 4K still showed CPU bottlenecking. It surely depends on the game's code, and although it's less common than a GPU bottleneck, it is still present.
 
Also, you linked the 99th-percentile chart instead of the FPS one, not that it particularly matters.
Okay, here:

[chart: average FPS, 9700X vs. 7700X, EXPO and non-EXPO]


So, that's 5.6% (EXPO) or 8.5% (non-EXPO) for 9700X vs. 7700X. All while still retaining the 9700X's stock TDP. We know gaming performance doesn't benefit much from a higher TDP, but a further couple % would be expected.

Not if it's, as I said, a few titles; given that they had 45 in the test, that's less than 10%, so it won't skew the results in any meaningful manner. Testing a limited number of games, which could easily amplify the significant differences, would be a much bigger problem than having some results which don't show much difference.
You don't actually know how GPU-limited those games are. You just pulled that 10% number out of nowhere. It's also not a binary thing, whether a game is GPU-limited. As you get closer to being GPU-limited, the amount of improvement from a faster CPU usually becomes less, but it's usually not like you just suddenly hit a ceiling.
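To make that concrete, here's a toy example with entirely made-up numbers (a real game approaches its GPU limit gradually rather than hitting a hard cap, but the direction of the effect on the average is the same):

# Toy example, made-up numbers: a GPU ceiling compresses the measured CPU uplift.
# Each tuple: (old CPU fps uncapped, new CPU fps uncapped, GPU ceiling fps)
games = [
    (150, 165, 400),   # far from the GPU limit: the full 10% uplift shows
    (150, 165, 160),   # close to the limit: only part of the uplift shows
    (150, 165, 150),   # fully GPU-bound: 0% measured uplift
]

uplifts = []
for old_cpu, new_cpu, gpu_cap in games:
    old = min(old_cpu, gpu_cap)
    new = min(new_cpu, gpu_cap)
    uplifts.append(new / old - 1)

print([f"{u:.1%}" for u in uplifts])                    # per-game measured uplift
print(f"average: {sum(uplifts) / len(uplifts):.1%}")    # the dragged-down average

Same 10% faster CPU in every game, but the average across the three comes out at about 5.6%.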
 
Intel had a double-digit performance increase in almost every gen, if not every gen (Kaby Lake being the exception), from the Core 2 Quad era till Skylake. Even their Haswell Refresh had a bigger gain than Zen 4 to Zen 5.
Look, being a fan is one thing, but could you please make some sense? Intel has double-digit performance uplifts every gen while AMD always has pathetic generational uplifts, yet somehow, after a decade, they still trade blows in performance, and Zen 5 still wins over 14th gen in quite a few tests. Does that make sense at all?
 
I don't disagree with some of that statement. If a CPU/GPU combo can't push out 120 fps at 1080p, it won't do it at 4K. But if you think creating a GPU bottleneck won't allow older CPUs to handle 4K just fine because they are older, you would be mistaken. Whatever the CPU can run at 720p (or less), it can do at 4K, assuming the GPU is up to it. When I was running my OC'd E5-1680 v2 with my RTX 2080 Ti, I hit over 100 fps at 4K in more games than not, and when I didn't, I was still frequently above 90 fps (or had to use DLSS if the GPU couldn't keep up the pace I wanted). And when using DLSS I could easily see where the bottleneck shifted due to base resolution: I'd no longer get fps increases when I turned things down; they would just flatline. It's extremely easy to test.

Point being, IF you do your in-game/control panel settings correctly, you can absolutely tune the GPU into a bottleneck while maintaining a decent frame rate. I might turn texture quality all the way up at 4K while turning down shadows, draw distance, polygon count, etc., for example, but at the end of the day, more times than not I could find settings that kept me in the 90-100+ fps range with tolerable to decent 1% lows. And again, this was right up till the 7950X/RTX 40 series dropped, so not forever ago, but my old Xeon was a ten-year-old CPU at that point, still punching in the high double to low triple digits... I am not saying a faster CPU wouldn't increase those FPS some, especially the 1% lows, or that running a 30 or 40 series GPU might have been counterproductive to frame rates due to driver overhead, but what I am saying is CPUs are good for much longer than many people realize, particularly new builders. But again, that is assuming you take the time to tune all of your settings and keep bloatware to a minimum to ensure your system fully utilizes its hardware.
JayzTwoCents did this in a video presented by Phil.
 
Erm... Nope, not clock for clock. There was a big jump from Core to Core i because of the integrated memory controller, and another big jump from first gen to Sandy Bridge because of the new core... But then, clock for clock, Haswell was pretty much the last time there were significant IPC gains for quite a while, if you take into account the security mitigations for Meltdown and Spectre - the following generations pretty much only gained back the performance lost to those software-only mitigations up until 10th gen, with single-digit per-core IPC improvements from one generation to the next... Granted, Intel went from 4c/8t in 7th gen to 10c/20t in 10th gen, but then there was the "waste of sand" 11th gen, and only 12th gen actually got better IPC again; then 14th gen gained nothing over 13th. So my guess is, "Zen 5 IPC gains are nothing to write home about FOR AMD" - for Intel, it's an incredible gen-on-gen performance improvement.
I'm quoting you but I'm replying to all 3 doubters.

Who cares about clock for clock? That's just moving the goalposts. Performance is performance; where it comes from is irrelevant. Intel in their quad-core days gave a bigger performance increase than AMD did from 2017 to 2020 at iso prices. The reduction in the number of cores at a given price that they pulled with Zen 3 kinda sealed the deal for them.
 
Look, being a fan is one thing, but could you please make some sense? Intel has double-digit performance uplifts every gen while AMD always has pathetic generational uplifts, yet somehow, after a decade, they still trade blows in performance, and Zen 5 still wins over 14th gen in quite a few tests. Does that make sense at all?
They are not trading blows, though; that's just you ultra-coping. Zen 5 is trading blows with 2021 Intel (12700K vs. 9700X, 12600K vs. 7600X). To think they are trading blows is completely delusional when an i7 from 2022 completely and utterly desecrates the newest R7. But yeah, they are trading blows, lol.