News AMD Ryzen 7 5800X3D Review: 3D V-Cache Powers a New Gaming Champion

salgado18

That was kind of underwhelming: some decent gains in gaming, almost no difference in anything else. At least the 2ns (20%) worse average cache latency isn't hurting anything particularly badly.
Why underwhelming? AMD marketed it as a gaming-focused chip, and it delivers on that front. It is not intended to be an all-around winner, but a one-hit champion. After all, not even the extra cache can beat the extra cores and clocks in other applications.

I actually expect this tech to appear on consoles in the future.
 
Now if only I could see some WoW benches of this chip.
Given WoW's historic dependency on low thread counts and memory speed, I think the 5800X3D will crush Intel there like it does in Far Cry 6. I don't think they've changed the engine much, so that's my feeling/prediction.

And it's a gaming-focused CPU for sure. As I've read in other places, this CPU fits two niches quite well: HTPC and high-refresh gaming.

I have a 5600X sitting under my TV for VR games, and the 5800X3D may be the way to go... when it comes down in price by a lot XD

As for my main PC, I just swapped the 3800XT (itself an upgrade from a 2700X) for a 5900X and called it a day. Quite the upgrade, I should say. It should extend the life of the good ol' Vega 64 in it, lel.

Regards.
 
Benchmark suggestions:

First of all, I have really respected Tom's articles for a long time, but I still have a few thoughts on how to improve the methodology for testing a CPU for gaming.

1. DON'T use average FPS as the headline benchmark number; use the 99th percentile, which better reflects real players' requirements.

e.g., let's take 5 seconds of FPS data:

case 1: fps 100 100 200 100 100
case 2: fps 105 105 105 105 105

Every e-sports or pro gaming player knows case 2 is more favorable.
Case 1's average FPS is 120 and case 2's average is 105,
but case 1's 99th-percentile figure is far lower than case 2's.

Put that way, Tom's benchmark actually misleads gamers into thinking the AMD 5800X3D is a good CPU by average FPS, while by the more trustworthy 99th-percentile figure, the Intel 12900K is still better for e-sports players and pro gamers.

2. DON'T test a CPU at frame rates far above the monitor's refresh rate, with improper settings.

The mainstream player's monitor is 144 Hz or 165 Hz at 2K for e-sports, or 4K for pro gamers playing RPGs.
In most of Tom's tests the benchmark is uncapped to show CPU performance, but that isn't accurate at all. Why? For example, 1080p testing only uses small textures for rendering, which fit well into the cache, but under a 2K or 4K scenario that's not the case at all, so the 5800X3D will present a misleading result.
Another aspect is that most of Tom's test cases use HIGH settings instead of extra or extreme settings. Comparing the HIGH preset with the extreme preset, the texture detail is not at the same level at all (extreme settings use more textures and require more cache).

My suggestion is to ALWAYS calibrate the benchmark to 144 or 165 FPS and tune the settings around that target, choosing either extreme or high, instead of testing unrealistic FPS like 200-300 under high settings only.

3. DON'T take a plain arithmetic average across a multi-game benchmark; you should normalize every game to equal weight before drawing a conclusion.
e.g., let's take data from 5 games:

cpu1: fps 100 100 200 100 100
cpu2: fps 102 102 120 102 102

cpu1's 5-game average is 120 FPS;
cpu2's 5-game average is 105.6 FPS, yet cpu2 wins in 4 of the 5 games.

Looking at cpu2's FPS, it wins 4 games and loses 1, so cpu2 is obviously more favorable for gamers, but Tom's methodology would pick cpu1 over cpu2.
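Here is a minimal sketch of the equal-weight idea in plain Python, using the hypothetical numbers above (per-game ratios are just one way to normalize):

```python
cpu1 = [100, 100, 200, 100, 100]  # FPS across 5 games
cpu2 = [102, 102, 120, 102, 102]

raw_avg_1 = sum(cpu1) / len(cpu1)             # 120.0 FPS
raw_avg_2 = sum(cpu2) / len(cpu2)             # 105.6 FPS

# Per-game ratio of cpu2 over cpu1: > 1.0 means cpu2 wins that game.
ratios = [b / a for a, b in zip(cpu1, cpu2)]  # [1.02, 1.02, 0.6, 1.02, 1.02]
wins = sum(r > 1.0 for r in ratios)           # cpu2 wins 4 of 5 games

print(f"raw averages: {raw_avg_1:.1f} vs {raw_avg_2:.1f}")
print(f"cpu2 wins {wins} of {len(ratios)} games")
print(f"mean ratio (equal weight): {sum(ratios) / len(ratios):.3f}")  # 0.936
```

Note that even with equal weighting, the mean ratio still favors cpu1 here (one 40% loss outweighs four 2% wins); only a per-game win count favors cpu2, which is exactly why the choice of aggregation matters.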


Based on my suggestions, I can't reach the conclusion that the AMD 5800X3D is the best gaming CPU at all. I do love AMD CPUs, but I love what I actually feel in games even more.
 
  • Like
Reactions: jacob249358

drajitsh

@tomshardware
  1. Slide 5 in the X3D die-placement section shows that both the 5800X and cache dies are face down. In most cases the bulk of a die is made up of the substrate. Which way are the substrates oriented? Also, there are processes that can reduce substrate thickness; are any such technologies being used?
  2. You used an Alphacool 2000 chiller. Is it possible to use that with the 5800X3D?
  3. The chips and shims are bonded by oxide, and the commonest oxide is silicon dioxide. The thermal conductivity of SiO2 is 6-12 W/(m·K) depending on orientation, while silicon is 149 W/(m·K) (>10x) and copper is 401 W/(m·K) (approx. 30x). (A rough estimate follows at the end of this post.)
  4. The Tj limit is usually 95°C, but could it be that the temperature validated for the stacking is 90°C?
Again, waiting for a sub-ambient test.
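To put point 3 in perspective, here is a rough one-dimensional estimate from Fourier's law (the conductivities come from the post; the 1 µm bond-layer thickness and 100 W/cm² hotspot flux are assumed values for illustration only):

```latex
% Temperature drop across a thin layer: \Delta T = q t / k
% q = heat flux, t = layer thickness, k = thermal conductivity
\Delta T = \frac{q\,t}{k}, \qquad
\Delta T_{\mathrm{SiO_2}} \approx \frac{(10^{6}\,\mathrm{W/m^2})(10^{-6}\,\mathrm{m})}{6\,\mathrm{W/(m\,K)}} \approx 0.17\,\mathrm{K}, \qquad
\Delta T_{\mathrm{Si}} \approx \frac{(10^{6})(10^{-6})}{149} \approx 0.007\,\mathrm{K}
```

So at these assumed numbers, a micron of oxide costs roughly 25x the temperature drop of the same thickness of silicon; whether that matters in absolute terms depends on the real layer thickness and local flux.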
 
1. DON'T use average FPS as the headline benchmark number; use the 99th percentile, which better reflects real players' requirements.

e.g., let's take 5 seconds of FPS data:

case 1: fps 100 100 200 100 100
case 2: fps 105 105 105 105 105

Every e-sports or pro gaming player knows case 2 is more favorable.
Case 1's average FPS is 120 and case 2's average is 105,
but case 1's 99th-percentile figure is far lower than case 2's.

Put that way, Tom's benchmark actually misleads gamers into thinking the AMD 5800X3D is a good CPU by average FPS, while by the more trustworthy 99th-percentile figure, the Intel 12900K is still better for e-sports players and pro gamers.
My counterarguments to this are:
  • This example has too small a sample size to be useful. I'm nitpicking here, sure, but if the upward spike is intermittent, then it doesn't matter over the long run.
    • Consider this: the average benchmark run tends to be 60 seconds. If performance averages 100 FPS, that's a sample size of 6,000 frames. Even if one second ran at 200 FPS, the overall FPS would only increase by about 1.67.
  • Unless there's a blip of looking at an empty skybox, most games won't exhibit behavior of suddenly shooting up in FPS. Also, I can't imagine a scenario where one CPU would suddenly have a blip and another wouldn't.
  • Practically all benchmarks report an average, which is the number most people will use because it's right there. If you have a problem with that, then go tell benchmark developers to stop doing this.
However, I will say the data set would be better if they added a frame-time graph.
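To make those first two bullets concrete, here is a small sketch (hypothetical numbers; assumes NumPy is available): a one-second 200 FPS blip barely moves the average, and a percentile metric computed on frame times ignores it entirely.

```python
import numpy as np

# 60-second run at ~100 FPS with one 200 FPS blip: per-second FPS samples.
per_second_fps = np.array([100.0] * 59 + [200.0])

# Average FPS is total frames divided by total time.
total_frames = per_second_fps.sum()     # 6100 frames
avg_fps = total_frames / 60.0           # ~101.67 FPS: the blip adds ~1.67

# Percentile metrics are usually computed on frame times, not FPS:
# each second contributes `fps` frames of 1000/fps milliseconds.
frame_times_ms = np.concatenate(
    [np.full(int(fps), 1000.0 / fps) for fps in per_second_fps]
)
p99_frame_time = np.percentile(frame_times_ms, 99)   # boundary of slowest 1%
one_percent_low_fps = 1000.0 / p99_frame_time        # 100.0 FPS, blip ignored

print(f"average: {avg_fps:.2f} FPS, 1% low: {one_percent_low_fps:.2f} FPS")
```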

2. DON'T test a CPU at frame rates far above the monitor's refresh rate, with improper settings.

The mainstream player's monitor is 144 Hz or 165 Hz at 2K for e-sports, or 4K for pro gamers playing RPGs.
In most of Tom's tests the benchmark is uncapped to show CPU performance, but that isn't accurate at all. Why? For example, 1080p testing only uses small textures for rendering, which fit well into the cache, but under a 2K or 4K scenario that's not the case at all, so the 5800X3D will present a misleading result.
Another aspect is that most of Tom's test cases use HIGH settings instead of extra or extreme settings. Comparing the HIGH preset with the extreme preset, the texture detail is not at the same level at all (extreme settings use more textures and require more cache).

My suggestion is to ALWAYS calibrate the benchmark to 144 or 165 FPS and tune the settings around that target, choosing either extreme or high, instead of testing unrealistic FPS like 200-300 under high settings only.
Textures don't reside in CPU cache. Also, calibrating to some arbitrary FPS and seeing what quality settings you can reach is not really a useful metric when benchmarking a processor. The goal is to see how much performance you can get out of the processor, period, not a combination of performance and image quality.

As an example: suppose I'm getting 100 FPS, I've identified that my CPU is the limit, and I want to know which CPU gets me, say, 240 FPS in a game (because I happen to own a 240 Hz monitor). If everything is "calibrated" to 144, how do I know which CPU to get?

3. DON'T take a plain arithmetic average across a multi-game benchmark; you should normalize every game to equal weight before drawing a conclusion.
e.g., let's take data from 5 games:

cpu1: fps 100 100 200 100 100
cpu2: fps 102 102 120 102 102

cpu1's 5-game average is 120 FPS;
cpu2's 5-game average is 105.6 FPS, yet cpu2 wins in 4 of the 5 games.

Looking at cpu2's FPS, it wins 4 games and loses 1, so cpu2 is obviously more favorable for gamers, but Tom's methodology would pick cpu1 over cpu2.
They're using a geometric mean for the specific purpose of lessening the effect of those outliers. From https://sciencing.com/differences-arithmetic-geometric-mean-6009565.html:
The Effect of Outliers
When you look at the results of arithmetic mean and geometric mean calculations, you notice that the effect of outliers is greatly dampened in the geometric mean. What does this mean? In the data set of 11, 13, 17 and 1,000, the number 1,000 is called an "outlier" because its value is much higher than all the other ones. When the arithmetic mean is calculated, the result is 260.25. Notice that no number in the data set is even close to 260.25, so the arithmetic mean is not representative in this case. The outlier's effect has been exaggerated. The geometric mean, at 39.5, does a better job of showing that most numbers from the data set are within the 0-to-50 range.
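A quick check of the quoted example in plain Python (no assumptions beyond the data set itself) shows how much the outlier moves each mean:

```python
import math

data = [11, 13, 17, 1000]  # the data set from the quoted article

arithmetic = sum(data) / len(data)              # 260.25, dragged up by 1000
geometric = math.prod(data) ** (1 / len(data))  # ~39.5, near the typical values

print(f"arithmetic mean: {arithmetic:.2f}")
print(f"geometric mean:  {geometric:.2f}")
```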
 

wifiburger

This is great for those on older AM4 CPUs. I'm sure that's what they're targeting.
Someone with a 2700X who games all the time - yeah, here's your chip!
Why would this SKU be great vs. getting a regular 5800X or even a 5900X?

AM4 is done; might as well get 12 or 16 cores for the long run if you want to skip 13th gen or AM5/DDR5.

If it's for the short run, then it's a complete waste of money vs a cheaper SKU like the 5700X.
 
  • Like
Reactions: KyaraM

ottonis

What we see here with the 3D-stacked chip is the emergence of CPUs becoming specialists for certain types of tasks.
If stacking L3 cache significantly improves gaming performance, then AMD will have a great recipe for gaming-oriented chips in the future.
The question is whether or not 3D stacking could be used for more advanced compute elements like ASICs or coprocessors.

Just imagine a dedicated "audio CPU" with stacked audio-processing circuits, or a "video CPU" with extremely energy-efficient video encoders/decoders. Apple Silicon did this with the M1-based chips, which can effortlessly and efficiently do video editing thanks to their hardware-based video encoding/decoding engines.
So, there is a good possibility that in the near future we will have CPUs that are not only great all-rounders but also have some dedicated special-purpose hardware.

I am already loving it!
 
  • Like
Reactions: drajitsh
The question is whether or not 3D stacking could be used for more advanced compute elements like ASICs or coprocessors.

Just imagine a dedicated "audio CPU" with stacked audio-processing circuits, or a "video CPU" with extremely energy-efficient video encoders/decoders. Apple Silicon did this with the M1-based chips, which can effortlessly and efficiently do video editing thanks to their hardware-based video encoding/decoding engines.
Caching is only really useful if the data being used is accessed frequently enough. Considering the datasets change constantly when processing either audio or video, caching won't really help here.

So, there is a good possibility that in the near future we will have CPUs that are not only great all-rounders but also have some dedicated special-purpose hardware.
We kind of already have that. Smartphone processors are a great example of this. One could argue AMD APUs are also in the same boat.
 
If all they do is game, the 5800X3D is the better chip.
Well, if they only play games that benefit from the cache, then it's the better chip; if they play a mix of games, it's going to be win some, lose some; and if they only play games that don't benefit from the cache, then it's the worse chip.
As always, it all depends on what you do.
What we see here with the 3D-stacked chip is the emergence of CPUs becoming specialists for certain types of tasks.
If stacking L3 cache significantly improves gaming performance, then AMD will have a great recipe for gaming-oriented chips in the future.
The question is whether or not 3D stacking could be used for more advanced compute elements like ASICs or coprocessors.

Just imagine a dedicated "audio CPU" with stacked audio-processing circuits, or a "video CPU" with extremely energy-efficient video encoders/decoders. Apple Silicon did this with the M1-based chips, which can effortlessly and efficiently do video editing thanks to their hardware-based video encoding/decoding engines.
So, there is a good possibility that in the near future we will have CPUs that are not only great all-rounders but also have some dedicated special-purpose hardware.

I am already loving it!
Intel has had Quick Sync in its CPUs since 2011, first for H.264 and now for H.265 as well; a couple of generations ago they also added AI acceleration for photo processing, or training in general.
They also had a CPU with much more cache than their normal CPUs, in the form of Broadwell, which nobody remembers anymore.
 
  • Like
Reactions: KyaraM

VforV

I remember some individuals here, on other forums, and on YT channels who were misquoting (probably intentionally) AMD's claim of 15% better performance on average than the 5900X (as shown on its slides), saying "up to 15%, which means 7-8% on average". Now that this CPU actually delivers on that claim, and then some, where are those individuals?

The fact that this CPU matches the 12900K on average, and also has so many crushing wins at a lower price, lower temps, and lower power consumption, is an absolute WIN for AMD. Coming a year and a half later on "old" Zen 3 tech and beating a brand-new Alder Lake, it's actually hilarious what they achieved. Bravo AMD!

Intel must be soiling their pants when they think about Zen4 + V-Cache + DDR5!

P.S. What impresses me most are actually the 1% lows on this CPU (see GN's and HUB's reviews too), better even than the averages, against all other CPUs, Intel or AMD. When V-Cache works well, it demolishes everything.
 

JamesJones44

Why underwhelming? AMD marketed it as a gaming-focused chip, and it delivers on that front. It is not intended to be an all-around winner, but a one-hit champion. After all, not even the extra cache can beat the extra cores and clocks in other applications.

I actually expect this tech to appear on consoles in the future.

Consoles already do this to a degree with their unified memory structures. I'm not sure it would have the same overall effect, but it would be interesting to see.
 
  • Like
Reactions: salgado18

jacob249358

The fact that this CPU matches the 12900K on average, and also has so many crushing wins at a lower price, lower temps, and lower power consumption, is an absolute WIN for AMD. Coming a year and a half later on "old" Zen 3 tech and beating a brand-new Alder Lake, it's actually hilarious what they achieved. Bravo AMD!

Intel must be soiling their pants when they think about Zen4 + V-Cache + DDR5!

P.S. What impresses me most are actually the 1% lows on this CPU (see GN's and HUB's reviews too), better even than the averages, against all other CPUs, Intel or AMD. When V-Cache works well, it demolishes everything.
It's a good chip for the price, but this is like if somebody body-slams you (Alder Lake) and then you kick them in the nuts while you're on the ground. This is just a last-ditch effort for AMD to maintain the gaming crown before moving to Zen 4. AM4 is a dead platform, so no one will be suggesting this in a new build; I think this is for upgraders. This whole 3D cache approach needs some rethinking, because the thermal limits mean it can only go so far. I suggest watching the video from LTT. It was impressive, but only in games that can take advantage of the cache. The cheaper 12700K is a better option all around.
 
  • Like
Reactions: Why_Me and KyaraM

JamesJones44

My counterarguments to this are:
  • This example has too small a sample size to be useful. I'm nitpicking here, sure, but if the upward spike is intermittent, then it doesn't matter over the long run.
    • Consider this: the average benchmark run tends to be 60 seconds. If performance averages 100 FPS, that's a sample size of 6,000 frames. Even if one second ran at 200 FPS, the overall FPS would only increase by about 1.67.
  • Unless there's a blip of looking at an empty skybox, most games won't exhibit behavior of suddenly shooting up in FPS. Also, I can't imagine a scenario where one CPU would suddenly have a blip and another wouldn't.
  • Practically all benchmarks report an average, which is the number most people will use because it's right there. If you have a problem with that, then go tell benchmark developers to stop doing this.
However, I will say the data set would be better if they added a frame-time graph.


Textures don't reside in CPU cache. Also, calibrating to some arbitrary FPS and seeing what quality settings you can reach is not really a useful metric when benchmarking a processor. The goal is to see how much performance you can get out of the processor, period, not a combination of performance and image quality.

As an example: suppose I'm getting 100 FPS, I've identified that my CPU is the limit, and I want to know which CPU gets me, say, 240 FPS in a game (because I happen to own a 240 Hz monitor). If everything is "calibrated" to 144, how do I know which CPU to get?


They're using a geometric mean for the specific purpose of lessening the effect of those outliers. From https://sciencing.com/differences-arithmetic-geometric-mean-6009565.html:

I think the better argument here is that the 99th percentile should be included, as it is a useful metric overall, but so too is the average.

I agree on the textures; those are GPU-level objects. However, when they're loaded from disk they can be re-referenced, which can have some CPU-cache implications depending on how the texture-loading engine is written. It's not likely to make a huge difference, but I wanted to point that out.
 
I think the better argument here is that the 99th percentile should be included, as it is a useful metric overall, but so too is the average.
Funny thing though: I noticed in the benchmark charts they label the lower value as "99th", which to me is kind of odd if it's a 99th-percentile metric. If we take Far Cry 6, the "99th" number is 133.5 and the average is 180.4. If we flatlined the FPS so every sample sat at 133.5 for 99% of the time, the remaining 1% would have to run at something like 4,800 FPS to bring the (arithmetic) average up to 180.

I could be fudging the numbers here though, and I never really liked statistics. :tearsofjoy:
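For what it's worth, a back-of-the-envelope check of that flatline scenario (assuming a plain arithmetic mean) lands in the same ballpark:

```python
p99_fps = 133.5   # the "99th" value from the Far Cry 6 chart
avg_fps = 180.4   # the reported average

# If 99% of samples sat at p99_fps, solve 0.99 * p99_fps + 0.01 * x = avg_fps
# for what the remaining 1% would have to average.
x = (avg_fps - 0.99 * p99_fps) / 0.01
print(f"the remaining 1% would need to average ~{x:.0f} FPS")  # ~4824
```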
 
This is great for those on older AM4 CPUs. I'm sure that's what they're targeting.
Someone with a 2700X who games all the time - yeah, here's your chip!

I would imagine they are targeting holdouts and enthusiasts. But to get the most out of this chip you really need a 500-series motherboard. Older power-delivery systems and lower PCIe bus speeds might hamper performance on older boards. For example, I wouldn't dare stick this chip in ANY ASRock board except maybe an X570 Taichi; ASRock cuts corners on most of their power delivery. I suspect my 3900X is limited by ASRock's Steel Legend board because of cheap power delivery. And that's supposed to be one of their sturdier designs.

Given the price premium it wouldn't make sense to buy. Just get a 12700K and overclock the snot out of it.
 
  • Like
Reactions: KyaraM

salgado18

Why would this SKU be great vs. getting a regular 5800X or even a 5900X?

AM4 is done; might as well get 12 or 16 cores for the long run if you want to skip 13th gen or AM5/DDR5.

If it's for the short run, then it's a complete waste of money vs a cheaper SKU like the 5700X.
Because, if the only heavy processing you do is playing games, this is the fastest chip overall. The 5900X is slower than it, let alone the 5800X/5700X (although they are cheaper). That makes it very good for those coming from Ryzen 3000 and 2000.

This is NOT a chip for heavy multi-threading! It has "only" 8 cores, and no cache in the world can compensate for that. This is a gaming chip. Period.

Also, the 5700X is cheaper and slower, so it comes down to price/performance, like with all other chips.
 
Intel must be soiling their pants when they think about Zen4 + V-Cache + DDR5!
And yet the funny thing is that AMD seems to be heavily pushing the 5800X3D as a gaming CPU; they show only gaming workloads in their marketing. If AMD were confident 3D V-Cache was a general-purpose game changer, they would show some of those benchmarks as well, but those are missing. So no, Intel has nothing to worry about here.

The only reason gaming workloads suit this application is that games have a lot of data reuse (running a loop over and over again on the same things). A majority of other processor-heavy applications don't reuse data, by the nature of the workload itself (crunch one set of data, then move on to the next set).
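As a toy illustration of that reuse argument (an LRU set standing in for a cache; the access patterns are assumptions, not a model of any real workload), a loop that keeps revisiting the same working set gets high hit rates, while a streaming workload gets none:

```python
from collections import OrderedDict

def hit_rate(accesses, cache_size):
    """Simulate an LRU cache over a sequence of addresses; return hit rate."""
    cache, hits = OrderedDict(), 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

game_like = list(range(1000)) * 100   # same 1000 addresses, touched every "frame"
stream_like = list(range(100_000))    # each address touched exactly once

print(hit_rate(game_like, 4096))      # 0.99: the working set fits and is reused
print(hit_rate(stream_like, 4096))    # 0.0: nothing is ever reused
```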
 
  • Like
Reactions: Why_Me