AMD Ryzen 7 9800X3D Review: Devastating Gaming Performance


abufrejoval

Yes, IMO I would 110% return it!

Microsoft Flight Simulator 2020 (and likely 2024 as well) isn't particularly heavily multi-threaded (anything >= 6c/12t is fine), but it is absolutely as cache-, IPC-, and clock-speed-hungry as games basically get!

In other words, the Ryzen 7 9800X3D is going to be notably better than your current Ryzen 9 7950X3D in every way that actually matters! 🤷 (Framerates, frame times, performance consistency, future-proofing for future GPU upgrades, etc.)

If you absolutely need more than 8 cores for something other than running flight simulators, then you might not have a choice, since the 16-core/32-thread Ryzen 9 9950X3D isn't coming out until January. But if you don't, then most definitely return or exchange it!
I'd like to test that, but probably not enough to buy a 9800X3D to replace my 7950X3D just now.

But when it comes to Flight Simulator 2020/2024, I can only caution: yes, FS has never performed better and is almost smooth even in 4K VR (HP Reverb and Quest 3 on an RTX 4090), but that only lets you see the garbage in perfect clarity!

The main issue with FS is that M$ uses the completely inadequate data it has on Bing and then very badly fakes whatever it doesn't have on top of that (simulated buildings, roads, traffic, etc.).

I live next to a major European airport, so the first thing I do with every version of FS (I started with the very first one on my Apple ][) is to fly over my house.

But what I'm shown has nothing to do with what's actually there.

M$ hints at being able to explore a digital twin of Earth. But unless that twin has been hand-optimized by whoever they get their data from, it's generated using algorithms that deliver catastrophically bad results.

Just go and compare any place you know on Bing Maps and on Google with 3D terrain rendering. Google's variant isn't photo-pretty, but it is very closely related to ground truth. The Bing variant is in many cases a catastrophe, with material that is often decades old once you zoom in closer.

E.g. it's been fascinating to see one of my workplaces in Lyon revert to its old heavy-industry past as I flew closer, but while time travel would be a great separate game to play, it's not what I expected to buy.

And that cannot change with FS2024 unless they pay for better data than what they show on Bing Maps.

And we all know far better data with far more frequent updates is available commercially today. It's just so expensive that M$ decided to go cheap.

And while we're at it: the performance of the browser-based rendering of Google's 3D maps in Chromium browsers is incredible, even on the weakest systems, and it shows that the biggest issue isn't hardware but software.

I've "flown" more fluidly using Google Maps 3D in Chromium on a 4K 43" screen driven by a €220 Orange Pi 5+ than on an RTX 3090 driven by a 5950X in FS2020! The 7950X3D with an RTX 4090 is finally as smooth, but what you see is so bad that you have to climb to a thousand feet before it becomes visually tolerable (and cars no longer drive into rivers or emerge from a field where they plant sugar beets instead).

But what's the point, when it has nothing to do with what's actually there?

M$FS is really bad code operating on really bad data and I recommend against spending money on hardware only to make it tolerably fast.

Paying for the software is bad enough, with €100 wasted every few years only to see that M$ keeps overselling crap.

Of course, that's the armchair-travelling perspective; I have no idea if it's good at simulating flight. But I haven't crashed a plane in many years unless I wanted to, so I'm doubtful.
 

abufrejoval

My feeling is that if people don't buy, or fewer people buy, the non-X3D parts, then the impetus to produce X3D parts will be stronger.
At the same clocks, X3D makes the non-X3D parts semi-redundant (price dependent).
It's akin to Athlon vs Duron or Pentium vs Celeron.
You aren't wrong, just as they keep saying that the right price fixes bad products.

I just don't see AMD having the incentive any time soon, or the real-world benefits being big enough for most consumers to insist on X3D parts.

And even €20 in manufacturing cost only disappears towards the very end, at clearance sales or on eBay.

Spending the extra money on the GPU, or on a pizza or two, may be more attractive, just as Durons and Celerons sold.
 

YSCCC

The default MSFS aircraft are crap, but things like the FlyByWire A320neo, the A380, the Fenix A320, the PMDG Boeings, etc. are quite good simulations of the real thing; the default ones are more geared towards gaming on Xbox.

The payware "study level" ones are CPU- and RAM-intensive because of all the background systems simulation, which is lacking in the default game. So while my 14900K + 3070 Ti can get some 100+ FPS in the default planes, in the more demanding and realistic ones you only get 30-50 FPS.
 
Even though I already built my 7950X3D system a year ago... if the 9950X3D has 3D cache on both CCDs, I'd be tempted to upgrade lol. Else I'd probably just upgrade from my 4080 to a 5090.
So, after talking to Paul some more... it's not going to happen. And the reason is that the CCD to CCD communication adds a lot of latency, so unless you have a game that scales beyond eight cores / 16 threads, you will lose more performance from the CCD to CCD latency than you'd gain from the extra cache.

Now, there's an interesting aside and it's that AMD could potentially (if it designed for it?) put cache above and below the CPU die. Yes, that would limit clocks due to thermal insulation from the cache on top, but there are workloads where having 128MB of extra L3 cache might offset the loss in clocks.

Alternatively: AMD could move the cache die from the current N6 node (I think) to N3 and probably stuff 128MB into the same area.
 
Paul, very frustrating that you didn’t include the 245K in this review for comparison. From a power perspective this is a good chip for SFF ITX builders. Would have been nice to see how it compared to the new gaming champ in that area specifically.
He was benchmarking until the very end, and with firmware and OS updates, ran out of time for testing more chips. Plus, let's be real: Intel needs to put out some additional firmware updates for Arrow Lake that will hopefully improve the situation. AMD has released several Zen 5 updates (AGESA) that have real-world measurable impact on performance. 9000-series, even non-X3D, looks much better today than it did at launch just two and a half months back.
 
So, after talking to Paul some more... it's not going to happen. And the reason is that the CCD to CCD communication adds a lot of latency, so unless you have a game that scales beyond eight cores / 16 threads, you will lose more performance from the CCD to CCD latency than you'd gain from the extra cache.

Now, there's an interesting aside and it's that AMD could potentially (if it designed for it?) put cache above and below the CPU die. Yes, that would limit clocks due to thermal insulation from the cache on top, but there are workloads where having 128MB of extra L3 cache might offset the loss in clocks.

Alternatively: AMD could move the cache die from the current N6 node (I think) to N3 and probably stuff 128MB into the same area.
Talking to a friend and remembering the quoted reasons why AMD went with V-Cache on just a single CCD last gen (thermals and clocks), I think it would make sense for them to come out with a dual-V-Cache monster now.

Now that the clock limitation is gone and there would be only a small power penalty to reach clock parity with the 9950X in productivity apps, there's no good reason why they can't make the 9950X3D a dual-V-Cache part.

It'll come down to what they think they'd be cannibalising instead, and I don't know what that would be, considering the 9950X is at a very low price now compared even to the 7950X. Threadripper is in another postcode at this point as well, so there's zero overlap with any other vertical or even horizontal segment. It would slightly overlap with the 9800X3D at most, but peeps like myself would buy it in a heartbeat even at 1K (which, please, don't, AMD), purely for the extra productivity you can get out of it with none of the trade-offs of the 7950X3D.

Regards.
 
So, after talking to Paul some more... it's not going to happen. And the reason is that the CCD to CCD communication adds a lot of latency, so unless you have a game that scales beyond eight cores / 16 threads, you will lose more performance from the CCD to CCD latency than you'd gain from the extra cache.

Now, there's an interesting aside and it's that AMD could potentially (if it designed for it?) put cache above and below the CPU die. Yes, that would limit clocks due to thermal insulation from the cache on top, but there are workloads where having 128MB of extra L3 cache might offset the loss in clocks.

Alternatively: AMD could move the cache die from the current N6 node (I think) to N3 and probably stuff 128MB into the same area.
I've had to tell this to numerous people who are foaming at the mouth for cache on both CCDs lol.

Even if that happens (and it's still just a rumor), it will not address the cross-CCD latency.
 
But the Hero, which has it, has the highest memory support while having 4 DIMM slots. At least according to HUB's testing.
What does that have to do with anything?

That speaks to manufacturing variance and/or the limits of Steve's testing. The X870-I has the highest rated memory support of any Asus X870/E board, and it matched the X870E-E, which has NitroPath.

The technology just changes the retention system to minimize the losses caused by empty slots. It should have an effect very similar to that of using dummy DIMMs, which is why they only use it on 2DPC (two DIMMs per channel) boards.

Roman went more in depth if you're interested:
 

TheHerald

Well, I got myself an Aorus Elite and a backup el cheapo B650 HDV; if the 4-DIMM board turns out to be a stinker, I'll use the el cheapo 2-DIMM one.
 
Yeah, as I said above, for 4K I doubt you'll see much of a delta between the various CPUs.
That is something that irks me about a lot of builds. The builder buys a 7800X3D, only to play games at 1440p with a 4070 Ti. At that point, they could buy even something like a 9700X or a 14700K and see equivalent performance for less money.
 
Well, got myself an aorus elite and a backup el cheapo b650 HDV in case the 4dimmer is a stinker, ill use the el cheapo 2 dim.
The tradeoff with high-speed memory on AMD is so weird. There doesn't seem to be any consistency as to the speed or how low the latency can go once you move past 6000. Sometimes it seems like it can be the motherboard, the IMC, and/or the specific memory kit. Then there's the increased power consumption, which you have to account for, or you can actually end up with lower performance due to power throttling.

Buildzoid has done some testing with the X870 Hero, X870 Tomahawk, and just did the X870-I. He also put out a general 7000/9000 video which I haven't had time to check out.
 
Check that out and you'll understand.

I posted that video in another thread for @helper800.

Regards.
 

TheHerald

Latency will always go down with faster memory, regardless of the AIDA readings. AIDA measures the time to the first tick, but faster memory finishes the transfer sooner and moves on to the next one thanks to the extra bandwidth. The IMC clock is of course an issue, since it has to run a good 500 MHz slower in Gear 2, but that might not be such an issue on a 3D chip.
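
To put rough numbers on that point, here's a minimal back-of-the-envelope sketch; the two kits and timings below are assumed examples for illustration, not measurements from this thread:

```python
# First-word latency vs. peak bandwidth for two assumed example kits.
# DDR transfers twice per memory clock, so memory clock (MHz) = transfer rate (MT/s) / 2.

def first_word_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    """Nanoseconds from read command to the first beat of data (CAS component only)."""
    memory_clock_mhz = transfer_rate_mts / 2
    return cas_cycles / memory_clock_mhz * 1000.0   # cycles / (cycles per µs) -> µs -> ns

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_bytes: int = 8) -> float:
    """Peak GB/s for one 64-bit (8-byte) DIMM channel."""
    return transfer_rate_mts * bus_bytes / 1000.0

for name, rate, cl in [("DDR5-6000 CL30", 6000, 30), ("DDR5-8000 CL38", 8000, 38)]:
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns to first word, "
          f"{peak_bandwidth_gbs(rate):.0f} GB/s peak")
# DDR5-6000 CL30: 10.0 ns to first word, 48 GB/s peak
# DDR5-8000 CL38:  9.5 ns to first word, 64 GB/s peak
```

So even with looser timings, the faster kit reaches the first word slightly sooner and streams the rest of the burst faster; whether that survives the IMC dropping into Gear 2 is the part that varies per chip and board.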
 
Mar 10, 2020
414
376
5,070
That is something that irks me about a lot of builds. The builder buys a 7800X3D, only to play games at 1440p with a 4070 Ti. At that point, they could buy even something like a 9700X or a 14700K and see equivalent performance for less money.
OK, you hit upon a frame-rate wall, a GPU bottleneck: it doesn't matter what CPU you are using, you can't get more out of the GPU averages. What has been seen across many reviews is an increase in the minimum frame rates, i.e. smoother gameplay.
The faster processors have more headroom waiting to be tapped. When the next GPUs, or maybe the generation after, come through, the faster processors will be better placed to feed data to them, and faster frame rates will be seen, just as they were with the 3090-to-4090 transition.

People pay their money and make their choices.
 
Latency will always go down with faster memory, regardless of the AIDA readings. AIDA measures the time to the first tick, but faster memory finishes the transfer sooner and moves on to the next one thanks to the extra bandwidth. The IMC clock is of course an issue, since it has to run a good 500 MHz slower in Gear 2, but that might not be such an issue on a 3D chip.
Look up Buildzoid's video "General Ryzen 7000/9000 AM5 CPU DDR5 and infinity fabric OC" on YouTube.

It explains the relationships between the memory clocks and the Infinity Fabric really well, and also where the mismatches can be minimised so they cause the least increase in latency when running above 6400. Basically, you need to be running close to 8000 to benefit from RAM over 6400 1:1.
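
For anyone who doesn't want to watch the whole video, here's a simplified sketch of the clock relationships as I understand them on AM5 (these ratios are my assumption, so double-check against Buildzoid): the memory clock is half the DDR transfer rate, UCLK (the memory-controller clock) either matches it at 1:1 or runs at half of it at 2:1, and FCLK is set separately, commonly around 2000-2100 MHz.

```python
# Simplified sketch of the assumed AM5 memory/controller clock relationship.

def uclk_mhz(transfer_rate_mts: float, one_to_one: bool) -> float:
    """Memory-controller clock for a given DDR5 transfer rate and UCLK:MCLK ratio."""
    mclk = transfer_rate_mts / 2              # e.g. DDR5-6400 -> MCLK = 3200 MHz
    return mclk if one_to_one else mclk / 2   # 2:1 ("Gear 2") halves the controller clock

for rate, one_to_one in [(6400, True), (7200, False), (8000, False)]:
    mode = "1:1" if one_to_one else "2:1"
    print(f"DDR5-{rate} {mode}: UCLK ≈ {uclk_mhz(rate, one_to_one):.0f} MHz")
# DDR5-6400 1:1: UCLK ≈ 3200 MHz
# DDR5-7200 2:1: UCLK ≈ 1800 MHz   (controller clock tanks, latency gets worse)
# DDR5-8000 2:1: UCLK ≈ 2000 MHz   (extra bandwidth finally starts to compensate)
```

Which is roughly why a small jump past 6400 tends to hurt before anything close to 8000 starts to pay off.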
 

abufrejoval

So, after talking to Paul some more... it's not going to happen. And the reason is that the CCD to CCD communication adds a lot of latency, so unless you have a game that scales beyond eight cores / 16 threads, you will lose more performance from the CCD to CCD latency than you'd gain from the extra cache.

Now, there's an interesting aside and it's that AMD could potentially (if it designed for it?) put cache above and below the CPU die. Yes, that would limit clocks due to thermal insulation from the cache on top, but there are workloads where having 128MB of extra L3 cache might offset the loss in clocks.

Alternatively: AMD could move the cache die from the current N6 node (I think) to N3 and probably stuff 128MB into the same area.
Contrary to what Intel did for a while, AMD isn't about stuffing every possible niche with product, just to ensure that no competitor can even draw a breath.

Instead they design components for the more profitable server market, which also (quite intentionally) happen to work well enough on the desktop to help with scale.

That in turn means that things which don't work at scale, either for servers or for something like consoles + laptops + desktops, won't get done.

AMD does make chips that contain only fully enabled V-Cache CCDs, as distinct EPYC variants for servers. They are among the most costly specimens AMD makes, and it evidently saves some of their customers so much money that they have no issue paying a premium far above what it costs AMD in manufacturing. It's a gold mine for both sides.

That V-cache CCDs found their way to the desktop is basically a skunkworks accident, from what I heard. And it came at a moment when AMD could score a significant hit against Intel by putting out the 5800X3D for gamers, the premium desktop market segment.

But as you say, a 2nd V-cache CCD won't help you in gaming, unless someone were to write games specifically tailored for that architecture. Writing games for such a niche sounds like a death wish, not a USP.

Of course a dual V-cache CCD desktop part could still find a niche even on the desktop, basically for people who'd want to run smaller variants of those V-cache EPYC workloads on the cheap.

But AMD has very little motivation to fill that appetite with cheap desktop chips... unless there were any significant scale and competition.

Currently I'm actually guessing that AMD doesn't even want to put out a 9950X3D (or 9900X3D), because gamers wouldn't buy that and it would put the 9800X3D under pricing pressure they'd really rather do without.

It's only people like me, who'd like to mix gaming into a mainly productivity-oriented environment without buying another system, who'd go for that.

When Intel was competition, making that CPU helped AMD to keep their competitor at bay. Today they might just decide that the risk of bad press and disappointed users and diluting a high-price market isn't worth it.

To niche users the dual V-cache 9950X3D is more attractive than ever, because it won't suffer nearly as much from clock limitations, making it essentially V-cache without "clock remorse". It won't be significantly more expensive to make either, as V-cache was reported to add no more than €20 per CCD, even in its first incarnations.

But without competition it's far more profitable to sell it in EPYC or perhaps Threadrippers.

As to 128MB V-cache: CCD designs are EPYC first, desktop second. If it made any sense on servers, they might have considered it.

But you can't just triple caches and then double again without some very deep design changes. For Zen 5 that ship would have sailed long ago. And unless they decide to make vastly bigger caches a standard feature, 128MB V-cache requires a separate CCD, which AMD won't do without vast deployment scale.

I consider that much less probable than the dual V-cache CCD on the desktop.

And then I just don't see it upping the bar on gaming, as it would very likely incur latency overheads, too.

Even today the practical advantage of faster CPUs for gaming is near zero, because nobody needs 400 FPS, and the 4K results prove where the bottleneck would lie if ~150 FPS isn't enough for you.

It may turn out that the 9950X isn't bad enough to make a V-cache variant all that attractive to significant masses of gamers... apart from bragging rights.

And I see the gaming market moving more towards even smaller and cheaper platforms with the performance of today's mid-range gaming rigs, rather than accepting higher prices for higher-resolution eye candy that simply isn't sweet enough to bite.

If it wasn't for AI, high-end dGPUs and CPUs would be in deep, deep trouble.
 

mux

Can we expect 1440p and 4k gaming benchmarks soon? I'm really curious to see the numbers. We would most likely see a smaller gap there because the workload is a lot more GPU bound but the question is how much?

Thanks for the article by the way.
 
They enabled PBO, which is an overclock. If you are talking about manual tuning, they said it's not worth it over PBO because PBO is better.

The boost behaviour of the 9800X3D is still different from the non-X3D chips. IIRC you cannot set a positive voltage offset, only a fixed manual voltage. So PBO will not give you the same results as manual tuning?

If you can get 5.6 GHz with PBO, then I am wrong.
 

YSCCC

And I see the gaming market moving more towards even smaller and cheaper platforms with the performance of today's mid-range gaming rigs, rather than accepting higher prices for higher-resolution eye candy that simply isn't sweet enough to bite.

If it wasn't for AI, high-end dGPUs and CPUs would be in deep, deep trouble.
Agree on this, and it's actually a consequence of skyrocketing greed, especially on the GPU side. When a card that is top-of-the-line for maybe 2 years, and relevant for all the eye candy for at most 5, costs 50%+ of a normal working-class monthly salary, or about a month of rent, people become more conscious about the spending. And considering that those who can afford long hours of gaming are mostly the children of that working class, or college students, the price becomes even more unattainable.

Plus, all that eye candy takes away too much of a game's development time, so in recent years a ton of photorealistic games have popped out with next to no enjoyable content and minimal thought given to the entertainment side. People then have even less incentive to pay big for hardware and upgrade every now and then, so it becomes a vicious cycle.
 

ilukey77

Can we expect 1440p and 4k gaming benchmarks soon? I'm really curious to see the numbers. We would most likely see a smaller gap there because the workload is a lot more GPU bound but the question is how much?

Thanks for the article by the way.
Just search the reviews on YouTube. I think one I spotted showed 1440p and 4K, but in most cases the gap was only a few fps at 1440p, if that, and at 4K it was non-existent as you are basically GPU-bound!

Edit: the Level1Techs channel has the 1440p and 4K fps as well.
View: https://youtu.be/KswGlkrNhP0
 
