News AMD Ryzen 7 5800X3D Review: 3D V-Cache Powers a New Gaming Champion

Why would this SKU be great vs. getting a regular 5800X or even a 5900X?

AM4 is done; might as well get 12 or 16 cores for the long run if you want to skip 13th gen or AM5/DDR5.

If it's for the short run, then it's a complete waste of money vs. a cheaper SKU like the 5700X.
If someone is still on a 2700X it may be because they can't afford a whole new platform.
Although this chip is expensive, it may be attractive for getting another 3-4 years out of whatever they currently have.
 
  • Like
Reactions: King_V
If someone is still on a 2700X it may be because they can't afford a whole new platform.
Although this chip is expensive, it may be attractive for getting another 3-4 years out of whatever they currently have.
But they would also have to get a very expensive GPU; otherwise, what's the point? They can stay with the 2700 for a few years if they only have a mid-tier card.
 

InvalidError

Titan
Moderator
Given WoW's historic dependency on a low thread count and memory speed, I think the 5800X3D will crush Intel like it did in Far Cry 6. I don't think they've changed the engine much, so that's my feeling/prediction.
WoW has quite a bit more multi-threaded activity now than it used to. Before BfA's multi-threaded update, most of WoW's CPU utilization was split between four threads at something like 100%, 50%, 25%, and 10% core utilization, with intermittent activity on a dozen more threads. Now I'm seeing something like 60-70% on one thread plus a steady 3-10% across 10 more. It looks like about a whole core's worth of stuff got ripped out of the main three threads and delegated to workers, for approximately the same two cores' worth of total CPU usage.

WoW is now decently multi-threaded; it just doesn't really need it as long as your CPU has fast enough cores to handle the main thread.
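If anyone wants to check numbers like those on their own machine, here's a rough sketch of how per-thread utilization can be sampled with Python's psutil; the process name and the sampling window are just placeholders, not anything WoW-specific.

```python
# Rough sketch: sample a process's per-thread CPU time twice and turn the
# deltas into utilization percentages (100% = one full core). Assumes the
# psutil package; "Wow.exe" and the 5-second window are placeholders.
import time
import psutil

def per_thread_usage(proc_name="Wow.exe", interval=5.0):
    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == proc_name)
    before = {t.id: t.user_time + t.system_time for t in proc.threads()}
    time.sleep(interval)
    after = {t.id: t.user_time + t.system_time for t in proc.threads()}
    usage = {tid: 100.0 * (cpu - before.get(tid, 0.0)) / interval
             for tid, cpu in after.items()}
    return sorted(usage.items(), key=lambda kv: kv[1], reverse=True)

for tid, pct in per_thread_usage()[:12]:
    print(f"thread {tid}: {pct:5.1f}%")
```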

What we see here with the 3D-stacked chip is the emergence of CPUs as specialists for certain types of tasks.
CPUs, GPUs, SoCs, etc. already have "specialist" circuitry to power-efficiently delegate common tasks to. You don't need the still-not-insignificant added complexity of physically separate dedicated dies for every secondary function, especially when they are complementary and relatively simple, like video encode/decode on a GPU.

Where I expect most of the benefits to be is in semi-custom SoCs: pick your CPU(s), your GPU(s), your (V)RAM, etc., add whatever else isn't already covered by the major function dies, design appropriate interposers, slap the thing together, and you've got whatever SoC you need for your application.

And yet the funny thing is AMD seems to be heavily pushing the 5800X3D as a gaming CPU. They show only gaming workloads in their marketing. If AMD was confident 3D V-Cache was a general-purpose game changer, then it would show some of those benchmarks as well, but those are missing. So no, Intel has nothing to worry about here.
Larger cache is only a game-changer when you actually need it. The rest of the time, the increased latency that usually comes with it (2-4 cycles here) is a liability.
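As a back-of-the-envelope illustration of that trade-off, here's a simplified average-memory-access-time sketch; every figure in it is a made-up round number except the ~4 extra cycles, which echoes the penalty mentioned above.

```python
# Simplified AMAT model: time = hit_rate * L3_latency + miss_rate * DRAM_latency.
# All figures are hypothetical round numbers purely for illustration.
def amat(hit_rate, l3_cycles, dram_cycles=300):
    return hit_rate * l3_cycles + (1 - hit_rate) * dram_cycles

base     = amat(hit_rate=0.70, l3_cycles=46)  # smaller L3, baseline latency
fits_now = amat(hit_rate=0.90, l3_cycles=50)  # bigger L3, working set now fits
no_reuse = amat(hit_rate=0.70, l3_cycles=50)  # bigger L3, but no extra hits

print(base, fits_now, no_reuse)  # ~122, ~75, ~125 cycles
# When the hit rate actually improves, the win is big; when it doesn't,
# every hit simply costs ~4 extra cycles and the chip is slightly slower.
```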
 
  • Like
Reactions: KyaraM
I think the better argument here is that the 99th percentile should be included, as it is a useful metric overall, but so too is the average.

I agree on the textures; those are GPU-level objects. However, when loaded from disk they can be re-referenced, which can have some CPU cache implications depending on how the texture loading engine is written. It's not likely to make a huge difference, but I wanted to point that part out.
99th percentiles ARE included. I'm not sure what you're looking at.

[Attached charts: 99th percentile fps results for the tested CPUs across several games]


If you give equal weighting to all of those, the 5800X3D still delivers higher overall 99th percentile results.

The problem with only focusing on 99th percentiles is that they're far, FAR more variable than average fps results. I benchmark GPUs all the time, and I run each game/setting combination at least three times. Where I often see at most a ~0.5% difference in average fps, the 99th percentile fps can swing by as much as 10% in some cases, depending on the game. That's because all it takes is one or two bad frame stutters, which won't occur every time or even consistently. Look at the results in the above charts. The 5900X PBO should in theory never be slower than the stock 5900X, and the same goes for the stock vs. OC Intel CPUs. That's usually the case, but there are exceptions. F1 2021, Far Cry 6, and Watch Dogs Legion all show the sort of 99th percentile fluctuations that are relatively common.

What dgbk says isn't completely wrong, because minimums are important, but basing all testing solely off minimum or 99th percentile fps would be horribly prone to abuse. What should we do, benchmark each game five times and average all the results to get a consensus? But one really bad run could still skew things, so maybe let's do each game ten times! Now we're spending substantially more time testing, which absolutely isn't viable; we already often end up pulling all-nighters just to hit embargo, even without running lots of extra tests.

The reality is that talking in theoreticals and showing data sets like "105,105,105,105,105" versus "100,100,200,100,100" makes the problems mentioned sound plausible, but we're actually looking at THOUSANDS of frametimes collected per benchmark run, so things really do average out. The relatively few outliers don't generally skew the average, but because the 99th percentile by nature only looks at 1% of the data, a data set of 26,000 frames, as an example, gets reduced to 260 frames.

This is why we don't use the absolute minimum fps over a run. The RTX 3090 Ti for instance averaged 184.25 fps at 1080p ultra in Borderlands 3, the 99th percentile fps was 141.02, the 99.9th percentile was 82.23, and the minimum instantaneous fps was 22.67. But if I did a dozen runs of that game at those settings on a single GPU, the standard deviation for the minimum fps would likely fall in the range of 10–15 fps. Maybe we should report the standard deviation of the frametimes (or framerates) as a secondary metric? But then everyone would need to get a better foundation in statistics to even grok what our charts are showing.
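For anyone curious how those figures fall out of a run, this is roughly the math. It's a generic sketch rather than our actual capture tooling, and it assumes a plain text log with one frametime in milliseconds per line.

```python
# Turn a frametime log (one value in ms per line) into the metrics above.
import numpy as np

ft_ms = np.loadtxt("frametimes.txt")                   # e.g. ~26,000 samples

avg_fps  = 1000.0 / ft_ms.mean()                       # average fps
p99_fps  = 1000.0 / np.percentile(ft_ms, 99)           # 99th percentile fps
p999_fps = 1000.0 / np.percentile(ft_ms, 99.9)         # 99.9th percentile fps
min_fps  = 1000.0 / ft_ms.max()                        # single worst frame
std_ms   = ft_ms.std()                                 # frametime consistency

# With 26,000 frames, the 99th percentile is decided by only ~260 of them,
# which is why a couple of stutters swing it far more than the average.
print(avg_fps, p99_fps, p999_fps, min_fps, std_ms)
```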

Basically, KISS applies here. Showing seven more charts analyzing all of the intricacies of performance for each game on each CPU/GPU would only be of interest to 0.01% of the readers. And even then, because it's now a matter of statistics, opinions on which metrics are the most important would still crop up and we're back to square one, having now wasted lots of time. Lies, Damned Lies, and Statistics. Average fps is the standard because it really does mean something and it's less prone to wild fluctuations. 99th percentile fps and minimum fps mean something as well, so they're not useless, but if I were weighting things I'd say 99th percentile results only count for about 10% as much as average results. And if they're really bad for a particular set of hardware compared to its direct competition, it implies a problem with the benchmark/software/drivers/etc. more than anything.
 
But they would also have to get a very expensive GPU; otherwise, what's the point? They can stay with the 2700 for a few years if they only have a mid-tier card.
And that GPU will perform at its full potential, even on PCIe 3.0 x16. It would be an interesting test, though.

Still, just by swapping out a Ryzen 1000, 2000, or even 3000 chip, I'm sure the FPS jump due to the CPU change alone will be quite substantial. Just go look at where the 2700X sits compared to the 5800X; this CPU is almost like a new generational jump for games, so it is totally justified as a single upgrade for gaming alone. Besides that, it's still on par with the 5800X in almost everything, just a tad below, so the trade-off is not that terrible considering the gaming benefit.

Wendell demonstrated in his review that he could drop the 5800X3D into a B450 board that had been running a 2600, pair it with a 3090, and get more FPS than a 12900KF with a 3090 Ti. If that's not impressive (for games), I don't know what is.

EDIT: The video I mentioned
https://www.youtube.com/watch?v=fIRWzMnfMPY


Regards.
 
If AMD was confident 3D V-Cache was a general-purpose game changer
It isn't a general-purpose game changer.
If an application can't make use of the extra cache, it will do nothing.

However, the moment it can use that extra cache, the benefits are good.

Especially given Ryzen LOVES fast RAM and DDR5 is able to go brrr... As long as they can get the fabric fast enough to match 1:1 at higher speeds, on top of the assumed IPC/MT gains and clock speed bumps, Zen 4 will be a beast (more so in that AMD has shown Zen to be very good on performance per watt, and thus thermally under control).
 

InvalidError

Titan
Moderator
The 5900X PBO should in theory never be slower than the stock 5900X, and the same goes for the stock vs. OC Intel CPUs. That's usually the case, but there are exceptions.
When you screw around with the relative clocking and scheduling of things on a chip, weird things tend to happen. PBO and Intel OC sometimes having worse frame time variance could simply be due to random threads getting super-boosted in code that doesn't handle stuff completing out of sequence exceptionally well.
 

saunupe1911

Distinguished
Apr 17, 2016
212
76
18,660
So Tom's and TechSpot are the only reviewers to compare it with a 12900KS so far.

Imagine a 5900X3D... but I guess that's what Zen 4 is for.

Lastly... you gotta be crazy to buy one of these within the first few months. You'd better wait until those BIOSes are sorted out. You've been warned.
 
Last edited:

waltc3

Honorable
Aug 4, 2019
453
252
11,060
AMD has never promoted this chip as being anything other than a gaming CPU, and AMD has much better CPUs for productivity. At 1080P gaming the consensus is the 5800X3D is ~22% faster than the fastest Intel CPU now shipping, which costs a lot more. Very impressive to say the least. A new Gaming Crown indeed. But I suspect the best to come from AMD will appear later this year. Trouncing Intel's best running DDR5 on a PCIe5 bus seems like a nice achievement, I'd say...;) AMD knows what it's about. I'm impressed myself.
 
Intel is rubbing their hands at this prospect; if AMD does this, their CPUs are going to be so expensive that Intel won't even have to try with 13th gen, they'll suddenly experience process troubles again.
Always the ray of sunshine (for Intel).
That said, AMD will have additional cache + DDR5, but no chip stacking. That's reserved for specialty server chips. This was a showboating chip.
 

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
It's a good chip for the price, but this is like if someone body slams you (Alder Lake) and then you kick them in the nuts while you're on the ground. This is just a last-ditch effort for AMD to maintain the gaming crown before moving to Zen 4. AM4 is a dead platform, so no one will be suggesting this in a new build; I think this is for upgraders. This whole 3D cache needs some rethinking, because the thermal limits mean it can only go so far. I suggest watching the video from LTT. It was impressive, but only in games that can take advantage of the cache. The cheaper 12700K is a better option all around.
Don't point me to entertainers like LTT; I watch people who do real reviews and rip companies a new one when they make mistakes instead of lauding them, like GN and HUB.
I suggest you watch those if you want professional testing.
And yet the funny thing is AMD seems to be heavily pushing the 5800X3D as a gaming CPU. They show only gaming workloads in their marketing. If AMD was confident 3D V-Cache was a general-purpose game changer, then it would show some of those benchmarks as well, but those are missing. So no, Intel has nothing to worry about here.

The only reason gaming workloads are a good fit for this is that games have a lot of data reuse (running a loop over and over again on the same things). A majority of other processor-heavy applications don't reuse data, by the nature of the workload itself (crunch one set of data, move on to the next set when done).
Oh, they are soiling themselves, make no mistake.

Zen 4 comes with its own +25% IPC increase, a major core clock increase, and 2nd-gen V-Cache, and that's all we know so far; who knows what other optimizations.

Why do you think all of a sudden we hear about Raptor Lake having a bigger L3 cache and a possible 5.7-5.8 GHz max core boost?

Intel is scared and pushing all they can, like crazy. They already know how it tastes to lose the crown this decade.
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
Great stuff. Now I have the incredibly hard decision between buying this, a 5950X, or nothing, and instead upgrading the whole PC with Zen 4, or even later. On the gaming side it delivers even more than the performance AMD advertised, and I didn't expect it to be great for normal apps anyway. Great stuff.
 
Larger cache is only a game-changer when you actually need it. The rest of the time, the increased latency that usually comes with it (2-4 cycles here) is a liability.
Applications can't decide whether they "need it"; it depends purely on the nature of the work and how the application was implemented. If the application deals with constantly changing datasets, then extra cache isn't going to help. Especially since AMD's caching system doesn't prefetch data.
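A toy contrast of the two workload shapes I'm describing; the sizes are arbitrary and Python hides the real cache behavior, so treat it purely as an illustration of the access patterns.

```python
# Two workload shapes: one keeps walking the same modest working set, the
# other touches each element once. Only the first benefits from a big L3.
def game_like(working_set):
    total = 0
    for _frame in range(1000):          # every "frame" revisits the same data
        for value in working_set:
            total += value
    return total

def batch_like(stream):
    return sum(x * x for x in stream)   # one pass, data never revisited

state = list(range(100_000))            # small enough to stay cache-resident
data = (x * 3 for x in range(10_000_000))  # far larger than any cache, seen once

game_like(state)
batch_like(data)
```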

It isn't a general-purpose game changer.
I never said it was.

Especially given Ryzen LOVES fast RAM and DDR5 is able to go brrr... As long as they can get the fabric fast enough to match 1:1 at higher speeds, on top of the assumed IPC/MT gains and clock speed bumps, Zen 4 will be a beast (more so in that AMD has shown Zen to be very good on performance per watt, and thus thermally under control).
That would make extra cache even less useful because the point of cache is to hide the deficiencies of RAM performance.
 

ezst036

Honorable
Oct 5, 2018
754
631
12,420
Just imagine a dedicated "audio CPU" with stacked audio-processing circuits, or a "video CPU" with extremely energy-efficient video encoders/decoders. Apple Silicon did this with their M1-based chips, which are able to effortlessly and efficiently do video editing because of the hardware-based video encoding/decoding engines.

I do think that, as far as the hardware goes, this is already common practice. Fundamentally, Apple's decoding/encoding engines are probably very similar to what AMD has had in its APUs going back to Llano (released in 2011). Yeah, sure, Apple probably has more advanced features like AV1 support in the M1. But what really is the difference between M1 encoding/decoding and AMD's UVD?

Maybe VCE/VCN is the better/more fully functioning comparison than the earlier UVDs? (VCE began in 2012 with AMD Trinity.)

Note: Encoding capabilities also began emerging in 2012 on video cards, going back to the Radeon HD 7xxx series, and NVENC made its debut with the GeForce 6xx series.
 
Last edited:

spongiemaster

Admirable
Dec 12, 2019
2,353
1,327
7,560
Besides that, it's still on par with the 5800X in almost everything, just a tad below, so the trade-off is not that terrible considering the gaming benefit.
It's not on par in price with the standard 5800X, and that is a critical distinction, because the 5900X is currently selling for $71 less than the MSRP of the 5800X3D. You really need to be sure gaming at lower resolutions is all you care about if you spring for the X3D.
 
Last edited:

KyaraM

Admirable
At 1080P gaming the consensus is the 5800X3D is ~22% faster than the fastest Intel CPU now shipping, which costs a lot more.
On average at 1080p, the 5800X3D is ~9% faster than the 12900K, which costs 30% more, and ~7% faster than the Core i9-12900KS, which costs a whopping 64% more.
Can we at least not lie about the difference in performance, please? Also, the Intel chips aren't overclocked in that comparison, which further slims down the margins, as does screen resolution. I'd rather have 12 FPS less on average playing at 1440p, best case, than a chip that's crippled (compared to everything else in that price range) in everything but gaming, lmao. If FPS are already high enough and you are running into the GPU limit, that's worth infinitely more than 12 FPS you will never even see because of your GPU.

Edit: it's funny, though. The 5950X is ridiculously overpriced too, yet so many AMD fans love that CPU even for gaming, where it underperforms, so pricing didn't matter. Yet when that happens with Intel (kinda, not really; the 12900KS IS the best overall Intel chip, after all, in all categories), it's reason to poke fun at them. Hmmmmmmm...
 
Last edited:
  • Like
Reactions: rluker5

Johnpombrio

Distinguished
Nov 20, 2006
252
73
18,870
Interesting CPU. We finally get an answer to what happens to heat dissipation in a stacked chiplet configuration. I expected lower clocks and higher heat, but AMD has tweaked this to a T. The lack of overclocking is reasonable with this design. I am not exactly sure what to make of a "pure gaming" CPU, as we run these CPUs in desktops, not gaming consoles. I went with Alder Lake, but for the first time in decades of building PCs I did not go for the fastest CPU from Intel, simply due to not liking water cooling vs. a nice compact air cooler.
I expect that the latest batches of CPUs from both Intel and AMD are getting less and less likely to be worth overclocking anymore. Of course, I said that 10 years ago :)
 

Johnpombrio

Distinguished
Nov 20, 2006
252
73
18,870
As people are commenting, when I went to 4K HDR at 120 Hz, I poured my money into a 3080 Ti and spent less on a lower-tier Intel Alder Lake CPU that I could air cool. I am not exactly sure what purpose there is in having an expensive CPU vs. an expensive GPU, except for bragging rights and some slight increases in certain apps. And this CPU is even more of a head-shaker, as the assumption of the CPU as a general-purpose desktop part shifts toward a PC desktop gaming console, a trend I hope will not continue.
 
  • Like
Reactions: KyaraM and Why_Me
Certainly, the 5800X3D is a great value champ when compared to the 12900K/KS, and in particular compared to a 12900K or 12900KS with DDR5-6400, which, oddly enough, seemed to mysteriously escape discussion altogether when proclaiming this CPU the new 'gaming champion'... (Had this review proclaimed it the 'DDR4 gaming value champion', I'd be much more inclined to agree...)

But Hardware Unboxed's review shows a DDR5-6400-based rig, admittedly expensive, as faster on average across the 8 games they used. Perhaps we can reach a better conclusion upon testing 30 games... (this review seemed more inclined to make AMD happy rather than to show that it did well but certainly did not 'win').
 
  • Like
Reactions: rluker5
Certainly, the 5800X3D is a great value champ when compared to the 12900K/KS, and in particular compared to a 12900K or 12900KS with DDR5-6400, which, oddly enough, seemed to mysteriously escape discussion altogether when proclaiming this CPU the new 'gaming champion'... (Had this review proclaimed it the 'DDR4 gaming value champion', I'd be much more inclined to agree...)

But Hardware Unboxed's review shows a DDR5-6400-based rig, admittedly expensive, as faster on average across the 8 games they used. Perhaps we can reach a better conclusion upon testing 30 games... (this review seemed more inclined to make AMD happy rather than to show that it did well but certainly did not 'win').
Purchasing an i9 doesn't make much sense, imo, when the i7 exists.

https://www.tomshardware.com/reviews/intel-core-i7-12700k-review
 
  • Like
Reactions: rluker5

bjnmail

Honorable
Jul 20, 2018
16
15
10,515
Additionally, the chip doesn't support the auto-overclocking Precision Boost Overdrive (PBO) feature, and you can't undervolt or underclock.

What about cTDP? I would assume this is not disabled, as you should be able to lower the TDP to as low as 45W for a 105W SKU via cTDP. It's still enabled on the EPYC Milan chips, so I can't see why they'd disable it on desktop if you really need to reduce power consumption.
 

spongiemaster

Admirable
Dec 12, 2019
2,353
1,327
7,560
Certainly, the 5800X3D is a great value champ when compared to the 12900K/KS, and in particular compared to a 12900K or 12900KS with DDR5-6400,
I'd have to question your knowledge of the word "value" if you're using an i9 as your reference point. The 5800X3D on its own merits is a terrible value. We in the enthusiast community like to talk about a well-rounded/balanced system; the 2160p chart posted above by Alvar shows how false that thought process can be. If you have an $800+ GPU, you can get within 1.1% of the performance of the "New Gaming Champion" with a $109 i3-12100F. You're less than 2% behind the absolute fastest KS you can get. Pretty much everyone would consider a 3080 paired with an i3 to be a terribly mismatched system; however, if you're a top-of-the-line 2160p gamer, ironically, the i3-12100F destroys everything in value. You have to be very specific about your scenario when determining what really is the best value option.
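To put rough numbers on it: the 1.1% gap and the $109 price come from the chart discussion above, the $449 figure is the 5800X3D's launch MSRP, and the math is purely illustrative.

```python
# Back-of-the-envelope 2160p "gaming value" with a high-end GPU.
cpus = {
    "Ryzen 7 5800X3D": {"relative_perf": 1.000, "price": 449},  # launch MSRP
    "Core i3-12100F":  {"relative_perf": 0.989, "price": 109},  # within 1.1%
}

for name, c in cpus.items():
    print(f"{name}: {c['relative_perf'] / c['price'] * 100:.3f} perf per $100")
# ~0.223 vs ~0.907: at 2160p the i3 delivers roughly four times the
# performance per dollar, which is exactly the mismatch being pointed out.
```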
 
  • Like
Reactions: KyaraM