News AMD Ryzen 7 5800X3D Review: 3D V-Cache Powers a New Gaming Champion

GPUs come to mind, as those are heavily limited by memory bandwidth, and having a large buffer could alleviate bottlenecks.
VRAM on GPUs is already directly wired to the GPU chip; shortening the wire by a little bit is not going to make any measurable difference.
It might make some difference for iGPUs, but the benefit won't be enough to justify the cost and heat.
 

abufrejoval

Reputable
Jun 19, 2020
So why was the Ryzen 9 5950X kept off the list? Surely not too many pixels would have to be sacrificed for an extra line or two with PBO?

It could have created an extra splash of balancing red on the multi-core synthetics, while it's much cheaper than Intel's red-hot KS variant.

And when it comes to true money-making productivity workloads, e.g. on Linux, Alder Lake still can't harness its cores to pull their weight.
 

InvalidError

Titan
Moderator
VRAM on GPUs is already directly wired to the GPU chip; shortening the wire by a little bit is not going to make any measurable difference.
GPU dies may be directly wired to VRAM chips, but the buses are a few centimeters long, and centimeters-long transmission lines have very significant effects on signal integrity at 10+ GHz, which requires relatively complex circuitry and significant power to overcome. With mm-scale traces using HBM, you can eliminate most of that complexity and reduce power while still increasing bandwidth. By stacking RAM on the GPU, the interconnect stubs could be made almost non-existent and the die-to-die buses simplified that much further.

Distance does matter if you want to scale bandwidth while keeping power and complexity in check.
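To put rough numbers on that power argument, here is a back-of-envelope sketch; the picojoule-per-bit figures are assumed ballpark values for a GDDR6-style PCB link versus an HBM-style interposer link, not measurements from this thread:

```python
# Back-of-envelope DRAM interface power: power = bandwidth * energy-per-bit.
# The pJ/bit values below are assumed ballpark figures (long PCB traces vs.
# mm-scale interposer traces); real numbers vary by vendor and speed grade.

def interface_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    bits_per_second = bandwidth_gb_s * 8e9        # GB/s -> bits/s
    return bits_per_second * pj_per_bit * 1e-12   # pJ/bit -> joules/bit

for name, pj_per_bit in [("GDDR6-style PCB link", 7.5), ("HBM-style interposer link", 3.5)]:
    watts = interface_power_watts(1000, pj_per_bit)  # at 1 TB/s
    print(f"{name}: ~{watts:.0f} W spent just moving bits at 1 TB/s")
```

Same bandwidth, roughly half the interface power, which is the "distance matters" point in practice.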
 

King_V

Illustrious
Ambassador
$449 for the 5800X3D or $310 for the 12700F (+ B660 board).

This tired old nonsense again. And yes, PLUS the motherboard.

The 5800X3D was not meant as a competitor to the 12700F.

The 5800X3D was meant to retake the crown for fastest gaming CPU. And, it seems to have done so. Or, for the games that it doesn't beat Intel in, it's very close.

For FAR LESS than the i9 prices.

And you're still looking at 5800X3D MSRP. The six lower-end chips released less than two weeks ago are already available under MSRP; the 5800X3D will soon follow.
 
Does this CPU cost more to produce with the extra cache? If so, it might not go down in price as soon as some think.
 

rluker5

Distinguished
Jun 23, 2014
I'd be more impressed by the "victory" if the other participant weren't handicapped by the reviewers. Like a weighed-down racehorse or a racecar with high-mileage tires.
DDR4, DDR5-4400, a strange selection of mostly older games that most reviewers apparently got the word to use, and no overclocking allowed on a chip advertised and used for it.
And we've seen this before with every Ryzen release where it is hyped to the moon until the new one comes out and "oh, I guess it wasn't that fast but the new one is insane!"
I'll steal a quote from mdd1963 since he summed it up well: 5800X3D the new stock clock DDR4 gaming champion!
 
I don't understand how some can't see this is AMD giving AM4 one last hurrah while giving a glimpse into the next gen.

This isn't a "ditch your 12900k and expensive motherboard and RAM and come over to this side". If you're on AM4 with a compatible motherboard and do a lot of gaming you could get this and be good for another year or so.

I think it's amazing what AMD has done with the same socket and for the most part backwards compatibility over the last few years.
 

PCWarrior

Distinguished
May 20, 2013
I notice the lack of DDR5 testing, which plays a major role in gaming. In any case, in 1080p gaming, even with DDR4, they are trading blows and it is more or less a tie. For 1440p and especially 4K, the CPU choice is pretty much irrelevant, as there is a GPU bottleneck even with a 3090 Ti. We are back to wondering who is buying a super-high-end GPU and then gaming at 1080p.

And then in productivity the 12900KF/K/KS destroy the 5800X3D. So it is very unfair to call the i9s overpriced, as they are more than well priced based on productivity, versus the 5800X3D which is a one-trick pony. That’s like calling the Threadrippers overpriced because of the 5600X.

Anyway, below is a full comparison between the 12900KF/K/KS and the 5800X3D, including both gaming (with the best possible RAM configurations) and productivity. The 12900KS still retains the gaming crown with a 5% lead in average fps and an 11% lead in 1% lows, and wins by a whopping 62% in productivity. It also offers 80% higher direct-CPU PCIe bandwidth and 100% higher CPU-to-chipset bandwidth. And many times it wins in performance per watt too.

Intel 12900KF/K/KS Vs AMD 5800X3D
Price Comparison: 1.25x/1.31x/1.64x

Relative performance (stock Intel Vs stock AMD):
A. Productivity testing

Premiere Pro: 1.26x/1.26x/1.26x (Intel wins)
Geekbench (ST): 1.23x/1.23x/1.29x (Intel wins)
Excel: 1.31x/1.31x/1.31x (Intel wins big)
Photoshop: 1.3x/1.3x/1.32x (Intel wins big)
CPU-Z (ST): 1.28x/1.28x/1.35x (Intel wins big)
Geekbench (MT): 1.42x/1.42x/1.48x (Intel wins big)
Text recognition (Tesseract OCR): 1.43x/1.43x/1.49x (Intel wins big)
Cinebench (ST): 1.44x/1.44x/1.53x (Intel destroys AMD)
Handbrake (x264): 1.53x/1.53x/1.61x (Intel destroys AMD)
Handbrake (x265): 1.63x/1.63x/1.66x (Intel destroys AMD)
SHA3 Hashing: 1.37x/1.37x/1.68x (Intel destroys AMD)
Corona 1.3: 1.61x/1.61x/1.71x (Intel destroys AMD)
Blender (HUB): 1.67x/1.67x/1.75x (Intel destroys AMD)
Java SE 8: 1.67x/1.67x/1.75x (Intel destroys AMD)
Chromium Code compilation: 1.66x/1.66x/1.81x (Intel destroys AMD)
Blender (GN logo): 1.78x/1.78x/1.86x (Intel destroys AMD)
Chemistry Simulation (NAMD): 1.81x/1.81x/1.87x (Intel destroys AMD)
CPU-Z (MT): 1.81x/1.81x/1.92x (Intel destroys AMD)
Cinebench R23 (MT): 1.94x/1.94x/2.05x (Intel destroys AMD)

B. Gaming Testing:
Best possible RAM configuration testing: Intel DDR5 6400C32 Vs AMD DDR4-3800C16 (Hardware Unboxed/Techspot testing):

Overall 1080p Gaming performance:
Avg fps: 1.03x/1.03x/1.05x (Intel wins)
1% lows: 1.09x/1.09x/1.11x (Intel wins)

Individual Games, 1080p, Avg fps:
Far Cry 6: 0.94x/0.94x/0.96x (AMD wins)
Horizon Zero Dawn: 0.97x/0.97x/0.97x (AMD wins)
Shadow of the Tomb Raider: 0.99x/0.99x/Tie (Tie)
Tom Clancy’s Rainbow Six Extraction: 0.93x/0.93x/1.01x (Intel wins narrowly with 12900KS)
Watch Dogs: Legion: 1.02x/1.02x/1.03x (Intel wins)
Cyberpunk 2077: 1.11x/1.11x/1.12x (Intel wins)
The Riftbreaker: 1.12x/1.12x/1.15x (Intel wins)
Hitman 3: 1.16x/1.16x/1.19x (Intel wins)

C. Connectivity Comparison:
Direct CPU PCIe bandwidth: 1.8x/1.8x/1.8x (Intel destroys AMD)
CPU to Chipset bandwidth: 2x/2x/2x (Intel destroys AMD)

D. Efficiency Comparison
MT Cinebench Power consumption: 1.79x/1.79x/1.8x
MT Cinebench Score: 1.94x/1.94x/2.05x
MT Cinebench Performance per Watt: 1.08x/1.08x/1.14x (Intel Wins)

Blender Power consumption: 1.83x/1.83x/2.21x
Blender Performance: 1.67x/1.67x/1.75x
Blender Performance per Watt: 0.91x/0.91x/0.79x (AMD wins)

Hitman 3 power consumption: 1.09x/1.09x/1.09x
Hitman 3 Avg Fps: 1.16x/1.16x/1.19x
Hitman 3 Performance per Watt: 1.06x/1.06x/1.09x (Intel wins)

Cyberpunk 2077 power consumption: 1.12x/1.12x/1.12x
Cyberpunk 2077 Avg Fps: 1.11x/1.11x/1.12x
Cyberpunk 2077 Performance per Watt: Tie
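For what it's worth, the efficiency ratios in section D are just the performance ratio divided by the power ratio; a minimal sketch reproducing that arithmetic from the figures quoted above:

```python
# Perf-per-watt ratio = performance ratio / power ratio, using the
# 12900K-vs-5800X3D figures quoted in this post (K and KS columns).
cases = {
    #                (perf K, power K, perf KS, power KS)
    "Cinebench MT": (1.94, 1.79, 2.05, 1.80),
    "Blender":      (1.67, 1.83, 1.75, 2.21),
    "Hitman 3":     (1.16, 1.09, 1.19, 1.09),
}
for name, (perf_k, pow_k, perf_ks, pow_ks) in cases.items():
    print(f"{name}: K {perf_k / pow_k:.2f}x, KS {perf_ks / pow_ks:.2f}x perf/W")
# -> roughly 1.08x/1.14x, 0.91x/0.79x and 1.06x/1.09x, matching the ratios above.
```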
 

hannibal

Distinguished
Why didn't they use this technology on the 5900X? Kinda makes me mad I just built my computer in the end of 2021. A lot of people steered me towards Ryzen while I was contemplating Intel. I bought a 5900X which was the gaming king and then comes the Alder Lake chips. Now a 5800X is better than the 5900X. I wouldn't mind a new build being obsolete in a few years but not in a few months.

Also, the 5900X is for content creation! And 3D cache is... well, not good for content creation. So why make a content-creation chip that is worse than the normal one?
3D cache at this moment makes sense in games, so a 6- or 8-core chip that is already at a disadvantage in content creation makes much more sense as the gaming chip!
 

hannibal

Distinguished
Does this CPU cost more to produce with the extra cache? If so, it might not go down in price as soon as some think.

Of course it costs more! That is why there is no 6-core version; an extra $110 for a 3600X... makes no sense when you consider that 6-core parts are already at the bottom of the stack!
This is a high-end gaming chip. For real work the 5900X and 5950X make more sense. And for budget gaming the 3600 is the king on the AMD platform at this moment!
 
If you refer to the reviews of the broken AMD chips from both the Xbone and PS5, you see the unified memory isn't quite as good as could be expected due to latency.
The manual says it uses GDDR6, which is geared towards bandwidth, not latency.

Why didn't they use this technology on the 5900X? Kinda makes me mad I just built my computer in the end of 2021. A lot of people steered me towards Ryzen while I was contemplating Intel. I bought a 5900X which was the gaming king and then comes the Alder Lake chips. Now a 5800X is better than the 5900X. I wouldn't mind a new build being obsolete in a few years but not in a few months.
Welcome to the world of trying to be on the cutting edge of PC technology. Something is always just around the corner waiting to make your system "obsolete"

Underwhelming because it didn't show performance improvements in the things it wasn't trying to improve performance in? That's an odd takeaway.
Sure, while AMD heavily advertised the 5800X3D as a gaming processor, that doesn't preclude the extra cache from being helpful in other ways. We just needed proof that it wasn't helpful in other ways.
 

ConfusedCounsel

Prominent
Jun 10, 2021
I look at it this way: with Ryzen, six cores is the magic number for gaming, as going to 8 doesn't add more than 5-6% performance on average, and going beyond 8 doesn't see gains.

Production tasks benefit from more cores, higher frequencies, and more compute power. However, if you have a good server/workstation setup, then just send code compilation, rendering, and other tasks to it. Such deployments are becoming more common. Face it, as the IDC report shows (IDC - Personal Computing Devices Market Share), notebook and tablet sales each outpace desktop sales; it is cheaper to give each worker a light interface.

AMD is aware of the above and made a business decision. They must have seen the strongest market demand for this technology at the server level and the 8-core level. Enthusiasts like us may hate it, but AMD legally has to answer to shareholders, including 401k / pension fund managers, not enthusiasts. From that perspective, this chip makes total sense and offers a great uplift at the top end of the relevant market segment.

Now, given notebooks are a bigger segment than desktops, I could see a justification for adding 3D cache to notebook processors, depending on whether gaming notebooks represent a market segment at least roughly equal in size to gaming desktops.
 

cc2onouui

Prominent
Oct 3, 2021
My counter arguments to this are:
  • This example has too small a sample size to be useful. I'm nitpicking here, sure, but if the upward spike was intermittent, then it doesn't matter over the long run.
    • Consider this: the average benchmark run tends to be 60 seconds. If the performance average is 100 FPS, that's a sample size of 6,000 frames. Even if we had a case where one second ran at 200 FPS, the overall FPS would only increase by about 1.67 (worked through in the sketch after this list).
  • Unless there's a blip of looking at an empty skybox, most games won't exhibit a behavior of suddenly shooting up in FPS. Also I can't imagine a scenario where one CPU would suddenly have a blip and another wouldn't.
  • Practically all benchmarks report an average, which is the number most people will use because it's right there. If you have a problem with that, then go tell benchmark developers to stop doing this.
However, I will say that the data set would be better if they added a frame time graph.
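A minimal sketch of the arithmetic in that second bullet, assuming the same 60-second, 100 FPS example:

```python
# A 60-second run averaging 100 FPS is ~6000 frames; letting one second
# spike to 200 FPS adds only 100 frames, so the average barely moves.
run_seconds, baseline_fps, spike_fps = 60, 100, 200

frames = (run_seconds - 1) * baseline_fps + spike_fps    # 6100 frames
average = frames / run_seconds                           # ~101.67 FPS
print(f"average with spike: {average:.2f} FPS (+{average - baseline_fps:.2f})")
```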


Textures don't reside in CPU cache. Also, calibrating to some arbitrary FPS and seeing the quality settings you can get is not really a useful metric when benchmarking the processor. The goal is to see how much performance you can get out of the processor, period, not a combination of performance and image quality.

As an example, if I'm getting 100 FPS, I've identified it's my CPU limiting performance, and I want to know which CPU gets me say 240 FPS on a game (because I happen to own a 240 Hz monitor), if everything is "calibrated" to 144, then how do I know which CPU to get?


They're using a geometric mean for the specific purpose of lessening the effect of those outliers. From https://sciencing.com/differences-arithmetic-geometric-mean-6009565.html:
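To illustrate the point about geometric means with made-up numbers (hypothetical per-game ratios, not data from this review):

```python
# Hypothetical per-game performance ratios with one outlier, showing how the
# geometric mean is dragged around less than the arithmetic mean.
ratios = [1.02, 0.98, 1.05, 1.01, 1.60]   # made-up numbers for illustration

arithmetic_mean = sum(ratios) / len(ratios)
product = 1.0
for r in ratios:
    product *= r
geometric_mean = product ** (1 / len(ratios))

print(f"arithmetic mean: {arithmetic_mean:.3f}")  # ~1.132, pulled up by the 1.60 outlier
print(f"geometric mean:  {geometric_mean:.3f}")   # ~1.112, less sensitive to it
```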
With all due respect, I'd say your counter-arguments are just a way of saying things, but they don't make any sense. He was asking for a fair methodology, yet you argue as if he had said "I don't understand the methodology used here." He clearly understands it, and that is the only reason he knew it was the wrong way.


If you can pick whatever examples you like, you will win every time. The idea is:
a constant frame rate is the real number. The latest Celeron CPU will stutter through a game even with a 41 fps average. I thank that useless CPU that always helps me win this argument.
Constant frame rate is real and the average is fake:
good gameplay vs. deceivingly high numbers.
 
If you can pick whatever examples you like, you will win every time.
And that's exactly what they did. They set up a straw man. And I provided an argument why theirs didn't really hold up.

A constant frame rate is the real number. The latest Celeron CPU will stutter through a game even with a 41 fps average. I thank that useless CPU that always helps me win this argument.
Constant frame rate is real and the average is fake:
good gameplay vs. deceivingly high numbers.
I won't argue that more data is good, but every set of data has its flaws when placed in a vacuum. However, that does not invalidate any of the data as long as said data can be obtained more or less repeatably.

Also what argument? That averages don't tell the whole story? Honestly if you believed that from the beginning, that's your problem.
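To make the "constant frame rate vs. average" disagreement concrete, here is a small sketch with made-up frame-time traces; the 1%-low definition used (average FPS over the slowest 1% of frames) is one common convention, not necessarily what any particular reviewer uses:

```python
# Two made-up frame-time traces in milliseconds: one steady, one with stutter.
steady  = [10.0] * 1000                      # constant 100 FPS
stutter = [9.0] * 990 + [60.0] * 10          # mostly ~111 FPS, ten 60 ms hitches

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    slowest = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
    return 1000.0 * len(slowest) / sum(slowest)

for name, trace in [("steady", steady), ("stutter", stutter)]:
    print(f"{name}: avg {avg_fps(trace):.0f} FPS, 1% low {one_percent_low_fps(trace):.0f} FPS")
# steady:  avg 100 FPS, 1% low 100 FPS
# stutter: avg ~105 FPS, 1% low ~17 FPS -- higher average, far worse experience
```

Which is why reporting 1% lows or a frame-time graph alongside the average, as this thread keeps circling around, tells most of the story.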
 
So why was the Ryzen 9 5950X kept off the list? Surely not too many pixels would have to be sacrificed for an extra line or two with PBO?
Because this is a gaming-focused CPU, and we know the 5950X sits slightly behind the 5900X in this use case. There are enough 5950X reviews out there without wasting time running all the benchmarks for another CPU when we already know where it sits relative to the 5800X and 5900X.
 
The big slapping:

View: https://www.youtube.com/watch?v=9XB3yo74dKU


So, even with monstrous DDR5-6400CL32, which is a whopping 80% additional cost, the 12900K is overall 1% slower. The KS for sure will make it either equal or move that 1% to the Intel side, but... that is still impressive in favour of the 5800X3D. And the 5800X3D can still use faster memory to increase its performance slightly, and there's indication that overclocking may be achievable, but at high risk.
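For reference, the DDR5-6400C32 and DDR4-3800C16 configurations mentioned in this thread work out to similar first-word latencies; a quick sketch using the standard CAS-latency-to-nanoseconds conversion (CAS only, ignoring tRCD/tRP and controller overhead):

```python
# First-word CAS latency in nanoseconds: ns = 2000 * CL / data_rate (MT/s).
def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    return 2000.0 * cl / data_rate_mts

print(f"DDR5-6400 CL32: {cas_latency_ns(32, 6400):.1f} ns")   # 10.0 ns
print(f"DDR4-3800 CL16: {cas_latency_ns(16, 3800):.1f} ns")   # ~8.4 ns
```

Which is part of why tuned DDR4 can still trade blows with expensive DDR5 in latency-sensitive games.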

Interesting and disruptive. Looking forward to more in-depth comparisons like that one in the near future when they go on sale.

Regards.
 

KananX

Prominent
BANNED
Apr 11, 2022
And with way better efficiency, no Auto-OC needed at AMD.

It's funny people think this is more than a gaming CPU; it's not. If you're a content creator you can skip this; if whatever work you do needs more than 8 cores, buy something else. For the vast majority of people, however, 8 cores is more than enough. Gaming needs 4 cores and 8 threads, not more than that, and this has 8. You can even easily stream and game with this at the same time, yes, CPU streaming on x264 medium. This CPU is plenty strong. No worries about E-cores, no worries about another CCD.

If you want to overclock it, it’s possible. Maybe not advisable though.

As others already explained, this is mostly for AM4 users that game; they get another upgrade on the dated platform, which is great. The alternative would've been nothing but buying a new PC. So I don't get the complainers here; they make no sense. Users with a 5900X or 5950X can ignore this, unless they bought it for gaming only? In that case they can't blame AMD for not having a 12-16 core variant, they can only blame themselves for buying the wrong CPU. Nobody told you to buy a 5900X/5950X solely for gaming. And even then, you can sell it and upgrade to this; you only lose cores you didn't use anyway. It will be some time still until games utilize 8 fast cores, so this CPU will be relevant for years, and by then you'll need a new PC anyway.

I wanna add: why is there no 5900X3D or 5950X3D? Because the 3D V-Cache only benefits gaming, and gaming only needs 8 cores at most. It's a simple decision that AMD made, and they made it right. They're not in the business of releasing super-niche products that nobody needs; this is exactly the right product. And people who don't know tech well can't waste money on a 5950X3D they wouldn't benefit from.
 
Some people are going to whine that they can't have their cake and eat it too. i.e., the king of both gaming and multi-core performance.
 

guru7of9

Reputable
Jun 1, 2018
So, going by Tom's Hardware's own testing, the new Ryzen 7 5800X3D will be at the TOP of their CPU Gaming Hierarchy chart in the next few days! It's been 5 days already!
'Cos they are the fairest and most unbiased hardware website of all.
Keep up the good work! 😊👍
 

KananX

Prominent
BANNED
Apr 11, 2022
Some people are going to whine that they can't have their cake and eat it too. i.e., the king of both gaming and multi-core performance.
They can cry, but unless they are CS:GO fanatics or similar it's irrelevant, as the 5950X will produce the same fps as this with settings like 1440p ultra or 4K high. And if they really need the 16 cores and still want the highest fps, they can buy a 12900K; problem solved. Unbiased comment given.
 