News: Leak indicates AMD Ryzen 9000X3D series CPU gaming performance will disappoint

Not if you're only ever going to play that game and no other games that might be more CPU-bottlenecked. Because newer games tend to be more CPU-intensive than older ones, people buying a CPU they'd like to keep for 3-5 years would be well advised to pick one that isn't already significantly bottlenecked.

I think it's also common for people to upgrade their GPU more frequently than their CPU. So, even if your system is initially GPU-bottlenecked, it might not always be.
Are there ANY games at all with barely playable fps rates (not GPU-bottlenecked, of course) when played on a contemporary CPU? I am not aware of any such game where swapping one modern CPU for another would make a difference in playability.
 
Man, it's not that hard to figure out. If CPU A gets 150 fps and CPU B gets 100 fps today, there will come a point when CPU A gets 70 fps and CPU B gets 50. At that point you'll need to upgrade CPU B, but you can keep using CPU A.
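To put rough numbers on that claim, here's a minimal sketch (Python), assuming, purely for illustration, that games' CPU demands grow about 15% per year; the growth rate is made up, only the fixed 1.5x gap comes from the post:

```python
# Toy illustration: two CPUs with a fixed 1.5x performance gap,
# assuming (hypothetically) game CPU demands grow ~15% per year.
def fps_over_time(fps_today, demand_growth=1.15, years=6):
    return [fps_today / demand_growth**y for y in range(years + 1)]

cpu_a = fps_over_time(150)  # the faster CPU today
cpu_b = fps_over_time(100)  # the slower CPU today

for year, (a, b) in enumerate(zip(cpu_a, cpu_b)):
    note = "  <- CPU B now below 60 fps" if b < 60 else ""
    print(f"year {year}: CPU A {a:6.1f} fps, CPU B {b:6.1f} fps{note}")
```

Under those assumptions, CPU B dips below 60 fps around year 4 while CPU A is still in the mid-80s; the ratio never changes, but the playability threshold is crossed at different times.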
 
That's true. When you buy a CPU, you want it to be as future-proof as possible. Nonetheless, in my opinion, there is no indication that any of the current or recent CPU generations will in any way limit anyone's gaming experience anytime soon.
You might have a point if a new disruptive technology were on the horizon and all CPUs lacking that technology would simply fall behind. Do you remember the MMX or SSE extensions? They enabled much smoother media playback (and other things). Or when MPEG-2 became hardware-accelerated? All of a sudden, playing DVD movies on the computer became a thing. Those were changes that made a real difference.
On the other hand, in order to "feel" that 10% performance gain over the last generation, you would probably have to wait another 4-5 years, until games get published that really demand that much performance to run halfway smoothly.
 
They are not trading blows though, that's just you ultracoping. Zen 5 is trading blows with 2021 Intel (12700K vs 9700X, 12600K vs 7600X). To think they are trading blows is completely delusional when an i7 from 2022 completely and utterly desecrates the newest R7. But yeah, they are trading blows, lol.
And here comes more nonsense. So how about R9 vs i9? And the R7 X3D topping the chart in gaming? To think that Zen 4 vs RPL, and Zen 5 vs RPL Refresh/ARL (pending), is not trading blows is delusional. Heck, even Intel's own marketing material for ARL seems to think they are trading blows; obviously Intel themselves are delusional.
 
If by trading blows you mean Intel chips are faster in ST and up to 50% faster in MT in most segments, then yeah, they are trading blows.

Literally their brand-new 2024 Zen 5 R7 is slower in MT performance than 2021's i7. What the heck you are talking about is beyond me, man.
 
If by "faster" you mean only the mid-range chips, with far fewer cores/threads on one side, then you are beyond delusional. A sane person comparing generations compares the top-of-the-line chips, not the ones cut down for whatever vendor reason. According to Tom's, even with the unlimited (degradation-prone) power profile of the 13900K, its multithreaded performance vs the 7950X is literally trading blows. Of course, you can still be delusional and think the i7 and R7 represent the whole generation.

https://www.tomshardware.com/news/amd-ryzen-9-7950x-vs-intel-core-i9-13900k
 
So you are saying that they are losing because they choose to be losing, and therefore they are not losing, because if they decided not to lose, they wouldn't?

Yeah right, okay, but they are losing; until they decide not to, they are competing with 3-year-old Intel in most segments.
 
12th Gen actually got better IPC again, then 13th; 14th gained nothing over 13th.
Most of the performance increase from 12th to 13th Gen came from clock speed and more E-cores. Overall, the IPC increase from Golden Cove to Raptor Cove was at best 3%, and more often than not it wasn't even that much.

The lowest performance uplift was from Zen 1 to Zen 2
There was actually a decent performance uplift from Zen 1 to Zen 2. Gaming-wise, the 3700X was about 11% faster overall than the 1800X, and relative CPU performance was closer to a 20% gain. Zen 1 to Zen+ was the smallest step, with about 3% IPC but a 3-10% performance uplift overall.
 
So you are saying that they are losing because they choose to be losing, and therefore they are not losing [...]?
Don't derail, chief; you are literally humiliating Intel. Face the gen-on-gen, CPU-architecture comparison, not some randomly chosen single SKUs. The same argument can be made that Intel needs to pump that many more cores and that much more power into each segment or else it would lose on all fronts; that's why they play core count vs pricing in the mid-to-low tier (aka what AMD is planning to do with RDNA4 vs Nvidia). It is child's logic: compare top SKUs at stock for the generational performance/efficiency comparison, and compare cost/performance according to the marketing segmentation. Don't derail with strange comparisons.

It is as simple as what they can do vs what they decided to put in each segment. Keep being delusional and you won't convince anyone, except maybe yourself that you won an argument.
 
So if one company is losing majorly in, let's say, 9 out of 10 segments and only competes in 1 segment, your conclusion is that they are competing, just because that company is called AMD?

Intel doesn't need to use that much more power; we've proven that multiple times. The 13700K absolutely rips the brand-new 9700X while using less power.
 
It's just not as bad as when the original testing didn't include equal memory speeds.
In Tom's original benchmarks, both the 7700X and 9700X had RAM running at their officially supported speeds (5200 and 5600 respectively). Remember that running anything beyond official specifications is considered an overclock and isn't guaranteed to even work; so running the 7700X with 5600 RAM would be overclocking the RAM and giving the 7700X an advantage relative to the 9700X.

Even running both CPUs at the same overclocked RAM speed gives the 7700X an advantage due to the higher overclock percentage. For example, at 6000 the 7700X has a 15% RAM overclock vs a 7% overclock for the 9700X. To have the same percentage overclock, you would need the 9700X's RAM running at 6400. However, people would once again say that isn't fair, as the 9700X then has more bandwidth.

This is why running them at stock vs stock is IMO the best way to do this, as it gives an out-of-the-box idea of the performance difference you can expect. Even in the retest at stock vs stock, the 9700X was 8% faster than the 7700X in gaming.
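For reference, the overclock percentages above are just ratios against each chip's officially supported speed; a quick sketch of the arithmetic:

```python
# The overclock percentages quoted above, as simple ratios against each
# chip's officially supported DDR5 speed (MT/s).
official = {"7700X": 5200, "9700X": 5600}

def oc_percent(cpu, ram_speed):
    return (ram_speed / official[cpu] - 1) * 100

for cpu in official:
    print(f"{cpu} at 6000 MT/s: {oc_percent(cpu, 6000):.0f}% RAM overclock")

# Speed giving the 9700X the same ~15% overclock the 7700X gets at 6000
# (~6462 MT/s, i.e. roughly the 6400 figure cited above, rounded to a
# standard speed):
print(f"9700X equivalent: {official['9700X'] * 6000 / official['7700X']:.0f} MT/s")
```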
 
Are there ANY games at all with barely playable fps rates (not GPU-bottlenecked, of course) when played on a contemporary CPU? I am not aware of any such game where swapping one modern CPU for another would make a difference in playability.
MS Flight Simulator. You're lucky if you can keep above 40 FPS in some areas.

Any game that is heavy on VR, or has huge landscapes with some dumb AI for NPCs, will be very CPU-heavy.

Time-stamped video to show the graphs.
View: https://youtu.be/_hYeacjkHTA?t=553


Intel is not even an option for VR, at least.

Regards.
 
I've seen streams where people with a 14900K and 4090, running DLSS at 4K, still showed CPU bottlenecking.
That isn't surprising, as streaming adds quite a bit of extra CPU load. Intel's CPU scheduler keeps games on the P-cores, and if streaming is also trying to use the P-cores, then only 8 P-cores could easily become a CPU bottleneck. If the streaming uses the E-cores, which are roughly Skylake-level in performance, their relatively low clock speed will make things more interesting, especially if your streaming applications can use instructions not present on the E-cores. This could be a situation where having 12-16 performance cores, like the Ryzen 9s, would actually make you less likely to be CPU-bound.
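As a rough illustration of steering a background app off the P-cores, here's a sketch using the third-party psutil library. The core numbering is a hypothetical 8P+16E layout, not any specific SKU's real topology, and the "obs" process name is just an example:

```python
import psutil  # third-party: pip install psutil

# Hypothetical topology for an 8P+16E part: logical CPUs 0-15 are the
# P-core threads (2 per core with Hyper-Threading), 16-31 are E-cores.
# The real numbering varies by SKU; check your own machine first.
P_CORES = list(range(0, 16))
E_CORES = list(range(16, 32))

def pin_by_name(name_fragment, cpus):
    """Restrict every process whose name contains name_fragment to cpus."""
    for proc in psutil.process_iter(["name"]):
        if name_fragment.lower() in (proc.info["name"] or "").lower():
            try:
                proc.cpu_affinity(cpus)  # usually needs admin rights
                print(f"pinned {proc.info['name']} (pid {proc.pid}) to {cpus}")
            except psutil.AccessDenied:
                print(f"no permission to pin pid {proc.pid}")

# Keep the encoder off the P-cores so the game has them to itself.
pin_by_name("obs", E_CORES)
```

Tools like Process Lasso, or `start /affinity` in a Windows command prompt, accomplish the same thing without code.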
 
Slightly different take here: with the exception of engine caps on frame rate (GW2 is locked to 250 fps), ALL games are bottlenecked. Quake 2 gives me a mere 1000 fps; it still has a bottleneck, otherwise it would be (being ridiculous with the numbers) 1,000,000 fps.

Whether the bottleneck is CPU or GPU can be gleaned from whatever was released last or was a significantly meaningful change; look at the reviews. Say, for the sake of argument, Intel 10900 to 12900, AMD 3950/5950 to 9950, or a GPU... pick your own generations; each brings some improvements, even if they aren't what you personally want to see.
If you see a change when you upgrade, there was a bottleneck. You could see a change in fps-locked titles as the minimums increase, and you could also see an increase in maximums where the fps is not limited. The change may be more under the hood: less fuel used and a minor speed bump.

Performance that is sustainable matters, i.e. the damn thing doesn't burn too much of your money, or burn out because it needs too much power. High IPC and slower clocks are preferable; reduced clocks roughly equate to less power and less heat. Chasing performance through clock speed (P4, Bulldozer, even the 13th/14th-gen Core chips) has its drawbacks, heat and inefficiency being two. Power doesn't scale linearly; that extra 100 MHz costs.
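The non-linearity follows from the usual first-order dynamic-power model, P ~ C * V^2 * f, where the voltage needed also climbs with frequency. A toy sketch with made-up numbers:

```python
# Toy dynamic-power model: P ~ C * V^2 * f, with voltage assumed to rise
# roughly linearly with frequency near the top of the V/f curve.
# All numbers are illustrative, not measured.
def rel_power(f_ghz, f_base=5.0, v_base=1.20, v_per_ghz=0.10):
    v = v_base + v_per_ghz * (f_ghz - f_base)
    return (v / v_base) ** 2 * (f_ghz / f_base)

for f in (5.0, 5.1, 5.2, 5.5):
    print(f"{f:.1f} GHz -> {rel_power(f):.2f}x the power of 5.0 GHz")
```

With these assumed values, a 2% clock bump costs roughly 4% more power, and a 10% bump costs nearly 20% more; that's the "extra 100 MHz" tax.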

The AMD64 instruction set, with additions, has been around for a long time now. There have been many processor variations, and the easy-to-fix, low-hanging-fruit improvements are long past. Improvements are pretty much common between Intel and AMD, at least as ideas; the implementation in silicon is where the differences lie. A flawed analogy would be F1: the instruction set defines the rules by which you can build the car/chip. There are 10 variations on the theme on the track, all within 2 seconds on a quali lap, fairly close. For CPUs there are 2 family variations of chips for each generation in the shops to buy, and most people fitting a current- or last-generation chip into a rig wouldn't tell the difference.

Tried to stay non-partisan. I think the above makes sense.
 
MS Flight Simulator. You're lucky if you can keep above 40 FPS in some areas. [...]

View: https://youtu.be/_hYeacjkHTA?t=553
Thanks a lot! The difference between the Ryzen 5000 and 7000 is indeed quite significant (30-40 fps vs 80-90 fps). Color me surprised!
 
So if one company is losing majorly in, let's say, 9 out of 10 segments and only competes in 1 segment, your conclusion is that they are competing, just because that company is called AMD?

Intel doesn't need to use that much more power; we've proven that multiple times. The 13700K absolutely rips the brand-new 9700X while using less power.
And here comes more made-up stuff: 9 out of 10 segments only at the i7 tier, and only on selected metrics, how wonderful. You are literally saying they always win because they are named Intel. Don't you think it's dumb to say you should use the vendor-designated segment for comparison, and then say Intel is dumb enough to use unnecessary power for something it had already won with less power? But yeah, it makes sense that Intel dominates, yet the ARL announcement instantly makes the 7800X3D sell out and its price hit a whole new level.

That isn't surprising, as streaming adds quite a bit of extra CPU load. [...]
Yeah, that makes sense too. IME, the likes of MSFS eat so much CPU because they need to render shaders and call for weather, wind, etc.; the dynamically changing environment takes a lot more calculation than what the GPU computes, like polygons and shadows.

And it is mostly main-thread limited, so the extra cache going from the ADL 12700KF to the 14900K is quite significant. No idea if X3D would make it a lot better, but a whole-system upgrade wasn't worth it back in the day, so the 14900K was a good trade-off (degradation saga aside). Wondering if the 2024 iteration will change things, as it relies much less on the system (IIRC they reduced the game size by about 50 GB), and it has become more streaming-heavy and now has RT. Hopefully it won't break the bank to play.
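Since the game is mostly main-thread limited, Amdahl's law shows why extra cores stop helping; a quick sketch, where the 70% serial fraction is purely an assumed figure for illustration:

```python
# Amdahl's law: speedup = 1 / (serial + parallel / n).
# Assume (hypothetically) 70% of a main-thread-limited game's frame
# time is serial; extra cores only touch the remaining 30%.
def speedup(n_cores, serial=0.70):
    return 1 / (serial + (1 - serial) / n_cores)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: {speedup(n):.2f}x")
```

Under that assumption the ceiling is 1/0.7, about 1.43x no matter how many cores you add, which is why faster single-thread performance and bigger caches move the needle more than core count.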
 
MS Flight Simulator. You're lucky if you can keep above 40 FPS in some areas. [...] Intel is not even an option for VR, at least.
This guy says the exact opposite: AMD isn't even an option for VR, since the fastest chip they have loses to a 12900K running 4800 MHz RAM, lol.

View: https://www.youtube.com/watch?v=4LH9WP-szaw&t=89s
 
And here comes more made-up stuff: 9 out of 10 segments only at the i7 tier, and only on selected metrics [...]
Okay, Intel isn't dominating; they are just, like, 50% faster in MT performance in most segments while AMD's latest and greatest loses to your old i7 from 2021.
 
And here comes more made-up stuff: 9 out of 10 segments only at the i7 tier, and only on selected metrics [...]

Yeah, that makes sense too. IME, the likes of MSFS eat so much CPU because they need to render shaders and call for weather, wind, etc. [...]
It isn't worth it to try to argue with them, as they constantly move the goalposts and change the narrative of what they are saying. For example, they say MT all the time, but to them MT ONLY means applications that can use all the extra cores Intel has vs the same-named AMD CPU (Ryzen 7 vs i7, for example), even though the i7 costs the same as the Ryzen 9 X900 CPU and the two have similar thread counts. They don't consider an application to be MT if it only uses, say, 8 cores. In reality, when Zen 4 CPUs and 14th Gen are benchmarked at the same thread counts (7600/X vs 14400), the AMD chip is usually faster overall. They also ONLY say MT, never overall application performance, because they know Intel loses that as well.

What is also funny is that they posted that VR benchmark where the person says "this build has specific Intel optimizations for the E-cores, and that could be affecting the Ryzen performance." That company had a second video in which they downloaded a previous build to see if there was a performance difference for the Ryzen, and there was a very noticeable performance uplift with the older version. That means the Intel optimizations in the newer build broke performance for Ryzen in some way. Not to mention those are their ONLY comparison videos across CPUs. Then it looks like they went straight to the 7800X3D after that anyway... lol
 
"Depressing" ?? Really? We're talking about leaks; how much emotion is worth investing in such?

Additionally, what is the expectation, then? 20-30% IPC? I say IPC because it sounds like we're talking about normalizing 9800X3D clocks down to the 7800X3D's, as it wouldn't work the other way around.

Only more questions than answers, as usual with a leak. Was Windows 11 24H2 used, i.e., how much does AMD's new branch-prediction patch affect 9000X3D performance? What do the actual sustained clock frequencies look like in these games? BTW, it won't take Gamers Nexus any time at all to determine that after launch. :) Also, what is the AGESA version on the MSI boards used for this testing? We know almost for certain this will change around launch time either way.
 
It isn't worth it to try to argue with them, as they constantly move the goalposts and change the narrative of what they are saying. For example, they say MT all the time, but to them MT ONLY means applications that can use all the extra cores Intel has vs the same-named AMD CPU
Nope, you are just lying. Even in an application that uses only 4 cores, the Intel chip will be faster, because you can run 4 of them simultaneously. E.g., Warp Stabilizer, something a lot of content creators actually run in parallel.

even though the i7 costs the same as the Ryzen 9 X900 CPU

Lying again. The 13700K is in fact cheaper than the 9700X. It's also 2 years older. It still smacks it into oblivion.

This is why most people aren't arguing with the crazy AMD fandom. They just lie in support of a company, God knows for what reason.

You beat me to it!

He beat you to what, lying? Even with the old build, the 7800X3D isn't faster than a 12900K. Here it is.


View: https://www.youtube.com/watch?v=wJb3zNtiv4E



Why the heck are you people making stuff up? LOL
 
It isn't worth it to try to argue with them, as they constantly move the goalposts and change the narrative of what they are saying. [...]

What is also funny is that they posted that VR benchmark where the person says "this build has specific Intel optimizations for the E-cores, and that could be affecting the Ryzen performance." [...]
lol, the misinformation is so interesting to see sometimes that I got pulled back in.

But anyway, back in the day I was amazed that tests like the one below showed how much the main thread improves MSFS performance. If I hadn't gone from Sandy Bridge directly to Alder Lake, the 14900K would likely never have been on my radar and X3D would have been the way to go.
View: https://www.youtube.com/watch?v=U5qTUKpEvik&t=294s


You beat me to it!
Sorry to derail, but your name suits the disinformation guy soooo much.

"Depressing" ?? Really? We're talking about leaks; how much emotion is worth investing in such?

Additionally, what is the expectation then? 20-30% IPC? I say IPC because it sounds like we're talking about normalizing 9800X3D clocks down to 7800X3D's as it wouldn't work the other way around.

Only more questions than answers, as usual coming from a leak. Was Windows 11 24H2 used, i.e. how much does AMD's new branch prediction patch effect 9000X3D perf? What do their actual sustained clock frequencies look like in these games? BTW, won't take Gamers Nexus any time at all to determine that after launch. :) Also, what is the AGESA level on these MSI boards used for this testing? We almost know for certain this will change around launch time either way.
In all seriousness, I believe the uplift won't be much. The 3D V-Cache itself will almost always hinder heat transfer unless they have some secret sauce this time around, and since the cache is where all the gaming-performance dominance comes from, if the IPC of the core itself doesn't improve drastically, there won't be any crazy leap.
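On the heat-transfer point, a back-of-the-envelope junction-temperature model shows why even a small extra thermal resistance from a stacked die hurts at high power; all values below are made up for illustration:

```python
# T_junction = T_ambient + P * R_total.  A stacked cache die adds
# thermal resistance between the cores and the IHS.
# All resistances are made-up, illustrative values (K/W).
def t_junction(power_w, r_extra=0.0, r_base=0.25, t_ambient=25.0):
    return t_ambient + power_w * (r_base + r_extra)

for p in (90, 120, 160):
    plain = t_junction(p)
    stacked = t_junction(p, r_extra=0.05)  # hypothetical stacked-die penalty
    print(f"{p:3d} W: {plain:.0f} C plain vs {stacked:.0f} C with extra layer")
```

The gap widens with power, which is one plausible reason X3D parts ship with conservative clocks and voltages.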
 
You might have a point if a new disruptive technology were on the horizon and all CPUs lacking that technology would simply fall behind. Do you remember the MMX or SSE extensions? They enabled much smoother media playback (and other things). Or when MPEG-2 became hardware-accelerated? All of a sudden, playing DVD movies on the computer became a thing. Those were changes that made a real difference.
The last ISA extension that was going to be "the next big thing" was AVX-512. But we all know what happened there: Intel flipped the chessboard and all the pieces went flying. Then they made up a new thing called AVX10, which is virtually identical to AVX-512 but slightly incompatible and can be limited to just 256 bits. In most consumer software, we probably shouldn't expect to see much embrace of AVX-512; instead, software developers will hold off until AVX10/256 gets implemented by (presumably) Panther/Nova Lake.

Another big ISA extension to watch out for is APX. This is much more generally applicable, but the best-case upside will be much smaller. Again, I expect to see it in the Panther/Nova Lake timeframe, for Intel.

As far as I'm aware, AMD has said nothing about their plans for either of these extensions. I wouldn't be surprised to see AVX10.1 supported in Zen 6, however. I doubt they'll have enough time to support either AVX10.2 or APX, though.
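This churn is also why consumer software tends to probe CPU features at runtime instead of hard-requiring them. Here's a Linux-only sketch that just reads the kernel's flag list; flag names for the newest extensions (AVX10, APX) are still settling, so only well-established flags are checked:

```python
# Crude runtime feature check on Linux: read the flag list the kernel
# exposes in /proc/cpuinfo. Flags for AVX10/APX are deliberately not
# probed here; their /proc/cpuinfo names depend on the kernel version.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":")[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "avx512_vnni"):
    print(f"{feature:12s}: {'yes' if feature in flags else 'no'}")
```

Real applications do the same dance (via CPUID) and dispatch to a slower code path when a feature is missing, which is exactly why fragmented extensions see slow adoption.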
 
Are there ANY games at all with barely playable fps rates (not GPU-bottlenecked, of course) when played on a contemporary CPU? I am not aware of any such game where swapping one modern CPU for another would make a difference in playability.

Star Citizen is a very physics-heavy game built on a huge open world, which is starting to use server meshing to make the world basically limitless. It is very heavily CPU-bottlenecked, even at 4K; any and all CPU changes can be felt, and the game engine scales well across 8+ cores. My 7800X3D has typical framerates of 40-70 fps in cities at 1440p, with a CPU load of typically 50-90%. The 14900K is actually a little choppier and slower unless you install a large custom cooling loop and do an all-core overclock to over 5.5 GHz with really fast 7000+ MT/s RAM. Any changes to the CPU are welcome. If the 9800X3D is overclockable, I'll buy it for that feature alone; if it adds 200-300 MHz, I'll be very happy with the added performance. Hopefully the new floating-point pipelines add some IPC gains as well.

The main reason Starfield gives better performance is that it has a ton of loading screens; it's not a truly open-world game like Star Citizen. See the tech trailer below. For Star Citizen, optimization would mainly mean reducing entity count and render distance, but that's just a band-aid because CPUs haven't caught up yet. Bring on better CPUs and more immersion, please!

View: https://www.youtube.com/watch?v=nWm_OhIKms8
 