News Leak indicates AMD Ryzen 9000X3D series CPU gaming performance will disappoint

In Tom's original benchmarks they had both the 7700X and 9700X with RAM running at their official supported speeds (5200 and 5600 respectively). Remember that running anything beyond official specifications is considered an overclock and isn't guaranteed to even work, so even running the 7700X with 5600 RAM would be overclocking the RAM and giving the 7700X an advantage relative to the 9700X. Even running both CPUs at the same overclocked RAM speed gives the 7700X an advantage due to the higher overclock percentage: at 6000 the 7700X has a ~15% RAM overclock (6000/5200) vs a ~7% overclock (6000/5600) for the 9700X. To get the same percentage overclock you would need the 9700X's RAM running at 6400. However, people would once again say that isn't fair, as the 9700X then has more bandwidth. This is why running them stock vs stock is IMO the best way to do this: it gives an out-of-the-box idea of the performance difference you can expect.
There's a reason most places test with DDR5-6000 on AMD and have since Zen 4 launch. Here's what the Tom's Zen 5 update had to say about it:
Most reviewers test with Expo overclocked memory as the default stock configuration, which is partially the result of AMD’s somewhat misleading marketing practices. We test at true stock memory settings because AMD does not officially cover memory overclocking under its warranty — it is not the official spec — yet the company uses overclocked memory for its marketing materials and encourages reviewers to test with overclocked memory - even the comparison benchmarks in the reviewer's guides use overclocked memory.
So sure, you can claim testing default memory speeds shows what to expect, but the bottom line is that DDR5-6000 is used by AMD everywhere, and this recommendation didn't change with Zen 5.
Even with the stock-vs-stock retest, the 9700X was 8% faster than the 7700X in gaming.
Tom's is the outlier on Zen 5 testing with regard to gaming. If nobody else's testing backs up these numbers, that suggests the game list used isn't a very good representation.
 
The last ISA extension that was going to be "the next big thing" was AVX-512. But, we all know what happened there. Intel flipped the chessboard and all the pieces went flying.
A big barrier for ISA adoption when it comes to gaming is backwards compatibility. People still complain today when games release requiring AVX2. In response, a handful have had patches that remove the requirement, and in some cases there have been third-party workarounds. I can't imagine how long it would take for something like AVX512 to be used even if it had been on every CPU since RKL.
 
There's a reason most places test with DDR5-6000 on AMD and have since Zen 4 launch.
I understand why places do that; however, you are by no means guaranteed to get that RAM performance. On the forums just a couple of weeks ago, someone had gotten a 7600 with 6000 MT/s RAM and couldn't get it to work at that speed. They were able to run 5600 MT/s but no more. The simple explanation is that their CPU doesn't have as good an IMC. Not surprising, since the 7600 is a binned 7600X, which is a binned 7700X, so the odds of getting an IMC able to run 15% beyond actual support are lessened. Hence why testing stock vs stock is IMO still the best idea, with the RAM OC performance included as well: you are never guaranteed an IMC that can support more than stock, so the stock numbers are still required.
 
A big barrier for ISA adoption when it comes to gaming is backwards compatibility. People still complain today when games release requiring AVX2. In response, a handful have had patches that remove the requirement, and in some cases there have been third-party workarounds.
You can have multiple code paths, where one exists for backward compatibility and another is used to take advantage of some new ISA extension.
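To make that concrete, here's a minimal sketch of runtime dispatch in C using the GCC/Clang __builtin_cpu_supports builtin (the toy sum function and all the names are mine, not from any shipping engine): one binary carries both a baseline and an AVX2 path, and picks once at startup.

```c
#include <immintrin.h>
#include <stdio.h>

/* Baseline path: runs on any x86-64 CPU. */
static float sum_scalar(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* AVX2 path: 8 floats per iteration. The target attribute lets
   GCC/Clang emit AVX2 code for this one function without building
   the whole program with -mavx2. */
__attribute__((target("avx2")))
static float sum_avx2(const float *a, int n) {
    __m256 acc = _mm256_setzero_ps();
    int i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3]
            + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; i++)  /* leftover elements */
        s += a[i];
    return s;
}

int main(void) {
    float data[100];
    for (int i = 0; i < 100; i++)
        data[i] = 1.0f;

    /* Pick the widest path this CPU actually supports, once. */
    float (*sum)(const float *, int) =
        __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;

    printf("sum = %.1f\n", sum(data, 100));
    return 0;
}
```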

I can't imagine how long it would take for something like AVX512 to be used even if it had been on every CPU since RKL.
It might even be used in some supporting libraries, today! On Linux, some media libraries (ffmpeg, AV1 decoders, etc.), libm (math) and glibc (C runtime library) all have special-case optimizations that can utilize AVX-512, if present. I'm sure there are some game or physics engines with similar runtime feature-detection and code dispatch.
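For instance, glibc does its selection through GNU IFUNC: the dynamic linker runs a resolver once at load time and binds the symbol to whichever variant fits the CPU. Here's a minimal sketch of the mechanism, assuming GCC on Linux/ELF (the hello functions are made up for illustration):

```c
#include <stdio.h>

static void hello_avx512(void) { puts("AVX-512 path"); }
static void hello_generic(void) { puts("generic path"); }

/* Resolver: runs once at load time, before main(). glibc selects
   its AVX-512 memcpy/strlen variants through this same mechanism. */
static void (*resolve_hello(void))(void) {
    __builtin_cpu_init();  /* required this early, before constructors run */
    return __builtin_cpu_supports("avx512f") ? hello_avx512
                                             : hello_generic;
}

/* Callers just call hello(); the binding is already done. */
void hello(void) __attribute__((ifunc("resolve_hello")));

int main(void) {
    hello();
    return 0;
}
```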
 
You can have multiple code paths, where one exists for backward compatibility and another is used to take advantage of some new ISA extension.
That mostly depends on what regressions might be on the table. Lately it has seemed like game engines are leveraging AVX/AVX2 more intrinsically, so it's been a hard requirement. AVX arrived with SNB (AMD added it the same year) and AVX2 with HSW (AMD a couple of years later), so 2011 for AVX and 2013/2015 for AVX2. Most current AAA-type games are releasing with minimums around 6th-8th Gen Intel, which makes sense as to why they're more comfortable making AVX/AVX2 a requirement, even though we're still talking 8+ year old hardware.

Even though you can use multiple code paths, there's still the question of how much of your audience has access. Console hardware also dictates a lot when it comes to any engines being used across multiple platforms now. If only a minority of your users have access to a new ISA it's unlikely development time will be spent. That doesn't mean that some libraries used won't have the capability (I imagine this is exactly why AVX requirements in the past were bypassed), just that it won't be leveraged in any fashion that improves things from the end-user side.

That tends to be why newer ISA usage pops up in community-driven things rather than products sold at retail (AVX512 in emulation has been pretty interesting).
 
Console hardware also dictates a lot when it comes to any engines being used across multiple platforms now. If only a minority of your users have access to a new ISA it's unlikely development time will be spent.
Next-gen consoles are almost certainly going to have AVX-512.

That tends to be why newer ISA usage pops up in community-driven things rather than products sold at retail (AVX512 in emulation has been pretty interesting).
Has anyone managed to run PS4 or PS5 games on generic PCs, yet?
 
Next-gen consoles are almost certainly going to have AVX-512.
I think this will depend entirely on Sony and how much they want to optimize the die size. With the PS5 they cut down the FPU to save space, so it's not impossible they'd cut out AVX512 if they didn't have plans for it. Microsoft didn't really touch the CPU core with the Xbox Series, but with silicon costs being what they are, it's entirely possible they'd do the same now.
Has anyone managed to run PS4 or PS5 games on generic PCs, yet?
I don't think there's been any broad game support, but that might be because Sony has become more open about bringing exclusives to PC. The ongoing joke is that PS4 emulation might as well be called a Bloodborne emulator.
 
That's true. When you buy a CPU you want it to be as future-proof as possible. Nonetheless, in my opinion, there is no indication that any of the current or recent CPU generations will in any way limit anyone's gaming experience anytime soon.

Yep... so many people stress over their CPU purchase and I just don't see why. Buy the best CPU you can afford and call it a day. Hardware is so good now that whatever you get is going to last a long time. Personally I prefer to have the best of both worlds... so I went with a 16 core CPU.

I may get 5 fewer fps in some random game than the 7800X3D, but I can encode video a lot faster. 🤣

You might have a point if a new disruptive technology were on the horizon and all CPUs without it would simply fail. Do you remember the MMX or SSE extensions? They enabled much smoother media playback (among other things). Or when MPEG-2 became hardware accelerated? All of a sudden, playing DVD movies on the computer became a thing.

Actually I do remember. I had the Pentium 75 MHz back in 1997... and I remember when the Pentium 200 MHz WITH MMX came out... and yes... I upgraded. 🤣 Those were the days when hardware was obsolete before you got it home from Circuit City... today that just isn't the case.
 
I think this will depend entirely on Sony and how much they want to optimize the die size. With the PS5 they cut down the FPU to save space, so it's not impossible they'd cut out AVX512 if they didn't have plans for it. Microsoft didn't really touch the CPU core with the Xbox Series, but with silicon costs being what they are, it's entirely possible they'd do the same now.
At worst, they'll use the laptop version of Zen 5, with split execution of 512-bit operands. With AI being all the rage, I doubt Sony will want to give up on AVX-512. Some developers will want it for VNNI. There are other things that make AVX-512 better than AVX2, which is why it was a net win for Zen 4.
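For a sense of what VNNI buys: its core instruction fuses the multiply, the add of four adjacent byte products, and the accumulate that int8 inference kernels spend their time in. A rough sketch using the intrinsic (assumes a VNNI-capable CPU, n a multiple of 64, and a build with -mavx512f -mavx512vnni; the function name is mine):

```c
#include <immintrin.h>
#include <stdint.h>

/* int8 dot product. _mm512_dpbusd_epi32 multiplies 64 unsigned bytes
   from a with 64 signed bytes from b, sums each group of 4 adjacent
   products, and accumulates into 16 int32 lanes -- one instruction
   for what takes a multi-instruction chain under AVX2. */
int32_t dot_u8s8(const uint8_t *a, const int8_t *b, int n) {
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);
        __m512i vb = _mm512_loadu_si512(b + i);
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc);  /* sum the 16 lanes */
}
```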

The ongoing joke is that PS4 emulation might as well be called a Bloodborne emulator.
Huh? This reference is lost on me.
 
Those were the days when hardware was obsolete before you got it home from Circuit City... today that just isn't the case.
Try telling that to someone who bought an RTX 3090 just before the RTX 4090 was announced!

Also, I've got to say that Raptor Lake was quite an upgrade over Alder Lake, performance-wise. I have an Alder Lake i9 at work and, until all this degradation nonsense blew up, I really lamented that it wasn't a Raptor Lake.
 
Huh? This reference is lost on me.
It's the one game Sony has refused to port despite massive fan demand (there isn't even a native PS5 version).
At worst, they'll use the laptop version of Zen 5, with split execution of 512-bit operands. With AI being all the rage, I doubt Sony will want to give up on AVX-512. Some developers will want it for VNNI.
It's going to be hard to say whether they see this as a worthwhile benefit or not. They've added hardware to the PS5 Pro for upscaling, but it's not clear whether it's part of the GPU or something separate like an NPU. If it's the latter they may evolve that and cut AVX512 for the space savings.
 
It's going to be hard to say whether they see this as a worthwhile benefit or not.
If I gambled, I'd put money on it.

They've added hardware to the PS5 Pro for upscaling, but it's not clear whether it's part of the GPU or something separate like an NPU.
If they upgraded the GPU to RDNA3, then they're probably just using WMMA. Even though Sony has deep pockets, custom design work costs a lot of money and the PS5 Pro isn't going to sell that many units. So all of the IP in it should be off-the-shelf, maybe with tweaks.

A counter-argument to that is if they want to use it as a development vehicle for PS6, in which case they might've deemed it worth putting an XDNA engine in there.

If it's the latter they may evolve that and cut AVX512 for the space savings.
You think even their "half" implementation uses that much space? I think I read the registers are still 512 bits, but most of the execution pipelines should be the same width you'd need for AVX2.
 
A counter-argument to that is if they want to use it as a development vehicle for PS6, in which case they might've deemed it worth putting an XDNA engine in there.
This was basically the stance the folks at Digital Foundry were taking when they went over what PS5 Pro details they had.
You think even their "half" implementation uses that much space? I think I read the registers are still 512 bits, but most of the execution pipelines should be the same width you'd need for AVX2.
Nope, I don't, but the FPU cut in Zen 2 was a relatively tiny space savings. I'm guessing it came about because minimizing die area was part of the contract, so the engineers were cutting anything they could. So it's one of those things where, unless it's part of what they want to do design-wise, it's ripe for removal.
 
Star Citizen is a very physics-heavy game built on a huge open world, and it is starting to roll out server meshing to make the world basically limitless. It is very heavily CPU bottlenecked, even at 4K. Any and all CPU changes can be felt, and the game engine scales well across 8+ cores. My 7800X3D has typical framerates of 40-70 fps in cities at 1440p with a CPU load of typically 50-90%. The 14900K is actually a little choppier and slower unless you install a large custom cooling loop and do an all-core overclock to over 5.5 GHz with really fast 7000+ MT/s RAM. Any changes to the CPU are welcome. If the 9800X3D is overclockable, I'll buy it just for that feature alone. If it adds 200-300 MHz, I'll be very happy with the added performance. Hopefully the new floating-point pipelines add some IPC gains as well.

The main reason Starfield gives better performance is that it has a ton of loading screens. It's not a truly open-world game like Star Citizen. See the tech trailer below. For Star Citizen, optimization would mainly mean reducing entity count and render distance, but that's just a band-aid because CPUs haven't caught up yet. Bring on better CPUs and more immersion, please!

View: https://www.youtube.com/watch?v=nWm_OhIKms8
Running a 14900K at 5.5 GHz is an all-core underclock, not an overclock; it runs 5.7 GHz all-core right out of the box. 7000 MT/s RAM isn't particularly fast for RPL either; I'm running 7200 on Alder Lake. Saying it requires a large custom cooling loop is, let's say, just not true. Even a cheap air cooler can handle it perfectly fine in gaming with zero thermal throttling. I've played TLOU (the heaviest game that exists right now) on a completely stock 14900K (5.7 GHz all-core) at 720p and temps were ~70°C with a single-tower air cooler. A custom loop is definitely not even close to being required, so please let's stop spreading misinformation. In fact, I'm fairly certain your 7800X3D will hit higher temps with a big AIO in TLOU.


This is the most comprehensive test I've found for Star Citizen, and it looks like even a 13900K with DDR4 is as fast as or faster than a 7800X3D. With DDR5 it's clearly ahead.

View: https://www.youtube.com/watch?v=wSQFrXiKUpM
 
Nope, I don't, but the FPU cut in Zen 2 was a relatively tiny space savings. I'm guessing it came about because minimizing die area was part of the contract, so the engineers were cutting anything they could. So it's one of those things where, unless it's part of what they want to do design-wise, it's ripe for removal.
I know, but I'll still bet you Internet Points that PS6 will have either AVX-512 or AVX10/256.
 
I know, but I'll still bet you Internet Points that PS6 will have either AVX-512 or AVX10/256.
If the rumors about Microsoft skipping a refresh and going straight to the next generation are true, we might get a preview there. Of course, there are also rumors that Microsoft's less performant console would be a handheld, and I can't imagine they'd do two fully custom SoCs. I don't see any way they'd have a handheld with AVX512 and not include it on the more powerful standalone, so it also might not indicate a thing 🤣
 
  1. With what power limits?
  2. Out of the box means no under-volting.
  3. Running what software? This was in reference to Star Citizen, which @gggplaya said was physics-heavy.
It doesn't matter what the power limits are; even with the stock 250 W limit it's not dropping clocks, because no game uses that much power. Star Citizen isn't particularly heavy. By heavy I mean using as many cores as available and pushing very high power draw. The heaviest game as of right now is TLOU, and it hits 200 W on a 14900K with no undervolt, still sitting around 70°C with a single-tower air cooler. I've got a video on my channel if you want to see it.

People underestimate how easy these Intel chips are to cool. Big dies and good thermal transfer mean 200 watts is trivial even for small coolers to handle on such a chip. Saying it needs a custom loop to run a game while underclocked is just insanity.
 
This is a good watch that explains the numbers with a bit more insight, covering the same things that came to my mind looking at the slides:

View: https://www.youtube.com/watch?v=l0n81AWpkAs


Regards.
His numbers in Jedi Survivor are due to him running stock memory. A 7800X3D with properly tuned memory gets a very stable 90 fps in the very same scene he is testing. It makes no sense to care about gaming performance while running XMP auto memory. It's just bad.
 
His numbers in Jedi Survivor are due to him running stock memory. A 7800X3D with properly tuned memory gets a very stable 90 fps in the very same scene he is testing. It makes no sense to care about gaming performance while running XMP auto memory. It's just bad.
Please enlighten me: how is your comment related to the video in Fran’s post?

Dan Owen’s videos seem to be balanced: he criticises where due and praises where appropriate. That he took the time to give a possible reason for the numbers presented is praiseworthy. Note, he doesn’t say these numbers are marvellous, but to wait and see.
Highlighting the 4090 grinding at 60 fps at 1080p with the CPU in the low 20%… the bottleneck isn’t the CPU, and his own testing had a similar result, though he doesn’t state the hardware he used.

There are lies, damned lies and statistics. Unless you know exactly what has been done to generate the data, it is pretty meaningless; it is only the trust you have in reviewers that gives the reviews any meaning.
Taking a broad view across many reviews can lessen the potential biases. Each reviewer has his/her own viewpoint; you might be lucky and hit upon a sensible average, but you might hit upon HUB, who have milked the criticism of AMD 9000 to the point the udders are dry. I guess the cows are getting fresh grass in time for the X3D variants.
 
Please enlighten me: how is your comment related to the video in Fran’s post?

Dan Owen’s videos seem to be balanced: he criticises where due and praises where appropriate. That he took the time to give a possible reason for the numbers presented is praiseworthy. Note, he doesn’t say these numbers are marvellous, but to wait and see.
Highlighting the 4090 grinding at 60 fps at 1080p with the CPU in the low 20%… the bottleneck isn’t the CPU, and his own testing had a similar result, though he doesn’t state the hardware he used.

There are lies, damned lies and statistics. Unless you know exactly what has been done to generate the data, it is pretty meaningless; it is only the trust you have in reviewers that gives the reviews any meaning.
Taking a broad view across many reviews can lessen the potential biases. Each reviewer has his/her own viewpoint; you might be lucky and hit upon a sensible average, but you might hit upon HUB, who have milked the criticism of AMD 9000 to the point the udders are dry. I guess the cows are getting fresh grass in time for the X3D variants.
Want to add that claiming XMP/EXPO is bad (at least for reviews) is about as dumb as it gets. XMP or EXPO below the absolute top-clocked RAM for the generation is pretty much guaranteed to work, and if it doesn't you can RMA the sticks (except at the very highest clocks, where the IMC might contribute more). Meanwhile, "knowing what you are doing" can still land you a pair of RAM so marginal it won't run any faster than its XMP spec. So what's the point of a reviewer testing a CPU for gaming with hand-tuned timings that who knows whether you can achieve?
 
Want to add that claiming XMP/EXPO is bad (at least for reviews) is about as dumb as it gets. XMP or EXPO below the absolute top-clocked RAM for the generation is pretty much guaranteed to work, and if it doesn't you can RMA the sticks (except at the very highest clocks, where the IMC might contribute more). Meanwhile, "knowing what you are doing" can still land you a pair of RAM so marginal it won't run any faster than its XMP spec. So what's the point of a reviewer testing a CPU for gaming with hand-tuned timings that who knows whether you can achieve?
AMD and Intel have their sweet spots; testing at that point (6000 CL30 for AMD 9000) is going to be an accurate reflection of most builds. I don’t know the Intel numbers.
 
AMD and Intel have their sweet spots; testing at that point (6000 CL30 for AMD 9000) is going to be an accurate reflection of most builds. I don’t know the Intel numbers.
Yeah, I agree on that. Using some reasonably priced, not-too-exotic RAM with its EXPO/XMP profile for the CL and speed is about the best you can do for a review and comparison, maybe alongside a run at the CPU's officially supported memory speed (say, DDR5-5600 CL40) to see how much RAM clock/latency affects a given game or not.
 
7800X3D vs 9800X3D in CB R23 (+18% ST, +28% MT) shows that we can expect a double-digit improvement in gaming for CPU-limited games, mainly because they decreased the area covered by the 3D-stacked memory, leading to easier-to-cool CPU cores and thus more headroom for higher clocks.

Also, if we compare the 9700X to the 9800X3D in CB R23 (-2.2% ST, +6.5% MT), it shows the X3D now has a higher TDP available, but we still see some limit on the max Vcore applied to the X3D, leading to slightly lower single-core boost.
 