AMD Ryzen 9 7950X3D Review: AMD Retakes Gaming Crown with 3D V-Cache

Status
Not open for further replies.
Developers who get lazy because their machines are powerful come back to reality pretty quickly when the client tries to run their code on a dual-core mobile Kaby Lake with 8GB of RAM.
I did something even more sinful: since I expected my code to be used on unknown processors, I should have assumed the smallest cache possible.

But in the end the only way I found to make fast code was to assume x86 integer encoding, which probably doesn't run on ARM (most phones).

So, I'm a xxxxxx coder anyways.
 
Last edited by a moderator:
DDR4 was first used by consumers in 2014 with the Haswell-E HEDT CPUs and Haswell-based server CPUs. At this point in time DDR4 isn't able to give us the bandwidth needed to keep CPU cores fed with data; we needed to move to DDR5 just to keep up with the bandwidth requirements. Using servers as an example, we went from DDR4-3200 to DDR5-4800. That accounted for a 50% increase in bandwidth per DIMM (38.4 GB/s vs. 25.6 GB/s). The absolute bandwidth increase of 12.8 GB/s is the same as going from DDR3-1600 to DDR4-3200. For reference, DDR3-1600 was released in 2007 and adopted in servers by 2011 at the latest, while DDR4-3200 was first used in servers in 2019. That means in one generational step (highest DDR4 to lowest DDR5) we get a bandwidth jump that previously took 8+ years, and that doesn't even count the other advances you get with DDR5.

Overall, it was time for CPUs to go to DDR5 and drop their DDR4 baggage. Intel didn't want to go straight to DDR5 with Alder Lake and kept that the same with Raptor Lake, but you can be assured that Meteor Lake will be strictly DDR5 when it comes out. In the end, technology pushes forward. The e-waste is an issue; however, if you dispose of computers correctly they will be recycled and used in newer things.
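The per-DIMM figures in the post above follow directly from the 64-bit data bus: peak bandwidth is simply the transfer rate times 8 bytes. A minimal sketch of that arithmetic (variable names are just labels):

```python
# Peak per-DIMM bandwidth: transfer rate (MT/s) x 8 bytes (64-bit data bus).
def peak_mb_s(mt_per_s: int) -> int:
    return mt_per_s * 8  # MB/s

ddr3_1600 = peak_mb_s(1600)  # 12800 MB/s = 12.8 GB/s
ddr4_3200 = peak_mb_s(3200)  # 25600 MB/s = 25.6 GB/s
ddr5_4800 = peak_mb_s(4800)  # 38400 MB/s = 38.4 GB/s

print(ddr5_4800 / ddr4_3200 - 1)       # 0.5 -> the 50% per-DIMM uplift
print((ddr5_4800 - ddr4_3200) / 1000)  # 12.8 GB/s absolute step
print((ddr4_3200 - ddr3_1600) / 1000)  # 12.8 GB/s, same step as DDR3->DDR4
```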

Yeah, that's why I went from a DDR3 rig straight to a DDR5 rig (AM5), which won't be a bottleneck everywhere when eventually going for a more expensive GPU. But the DDR3 rig isn't waste because of that, as it still works fine for someone else.
 
Thanks for the review as always.

Well, there are just three things I get from this release:

1.- AMD would have been better off equipping both CCDs with VCache, or at least offering a variant with it.
2.- The 7900X3D will be a hard sell unless it is heavily discounted, given the hassles. Unless the lower core count reduces thermal draw and helps it boost higher, making it the fastest VCache gaming part? Maybe?
3.- The 7800X3D is going to be a monster for games that do make use of the cache, but I'd rather spend the money (based on the price difference) on a dual-VCache'd 7950X to really get the best of both worlds.

I also wonder what highly tuned RAM would do here.

EDIT: Forgot something.

Regards.
 
Can someone help me understand the graph on Page 6, "Average FPS (Geomean), 1440p Windows 11"? It appears that the Ryzen 9 7950X3D offers a 40% increase over my Ryzen 7 5800X (215 fps vs. 153 fps, respectively). I was under the impression that at higher resolutions, the CPU did not make much difference. I have a widescreen monitor with a native resolution of 3440 x 1440 and a GeForce RTX 3090. Am I likely to see a 40% fps increase by upgrading to the Ryzen 9 7950X3D from my Ryzen 7 5800X?
 
Can someone help me understand the graph on Page 6, "Average FPS (Geomean), 1440p Windows 11"? It appears that the Ryzen 9 7950X3D offers a 40% increase over my Ryzen 7 5800X (215 fps vs. 153 fps, respectively). I was under the impression that at higher resolutions, the CPU did not make much difference. I have a widescreen monitor with a native resolution of 3440 x 1440 and a GeForce RTX 3090. Am I likely to see a 40% fps increase by upgrading to the Ryzen 9 7950X3D from my Ryzen 7 5800X?
If your 3090 is not being utilized to 100%, then it means there's still room for it to process more frames at your resolution, so, in that particular case, upgrading would still give you an FPS increase, yes.

Regards.
 
  • Like
Reactions: bit_user and RodroX
Can someone help me understand the graph on Page 6, "Average FPS (Geomean), 1440p Windows 11"? It appears that the Ryzen 9 7950X3D offers a 40% increase over my Ryzen 7 5800X (215 fps vs. 153 fps, respectively). I was under the impression that at higher resolutions, the CPU did not make much difference. I have a widescreen monitor with a native resolution of 3440 x 1440 and a GeForce RTX 3090. Am I likely to see a 40% fps increase by upgrading to the Ryzen 9 7950X3D from my Ryzen 7 5800X?

That, as the title says, is the average with the system Tom's Hardware used for testing. It includes an RTX 4090, which is faster than your RTX 3090.

So, as a quick answer: are you likely to see improvements? Yes. Will it be 40% across all the games you play? Not a chance; you probably won't hit a 40% improvement in any game.

But as always, it's better to wait for the "lower" end parts, i.e. the 7800X3D.
 
When I said PCs were good enough for most people to browse the internet, that was in response to an article about PC sales tanking, not about my specific PC. But it's nice to know I have a stalker.

I am usually curious when folks talk about PC hardware and its pricing. What is your current hardware? Are you currently in the market for new hardware? I don't complain about high-end product pricing when I have no intention of buying high-end products in the first place.
 
I am usually curious when folks talk about PC hardware and its pricing. What is your current hardware?

12700k

Are you currently in the market for new hardware?

For me to buy a new system, two things would need to happen.

1) The shader compile stuttering on PC needs to be solved. There is no point in a game running at 200 fps if it drops to 10 fps each time a shader needs to be compiled. The solution proposed by Digital Foundry, doing the shader compiles during loading and letting people wait 15 minutes, is, imo, ridiculous. My guess is that we will see a hardware solution to this problem, unless Nvidia and AMD agree on a unified driver architecture, which is unlikely.

2) Prices need to come down drastically on all hardware, especially on the memory and mobo. 32GB DDR5 is still twice as expensive as 32GB DDR4 and AM5 boards are still ridiculously overpriced.
 
  • Like
Reactions: Amdlova and 10tacle
No, it's typical AMD fan behavior. I made a negative comment about AMD, you are emotionally attached to this brand, and you felt the need to search through all my posts to try to discredit me. It's creepy fan behavior.

That's not going to change my opinion on this product. I have said it before: AM5 is too expensive. DDR5 is too expensive. The boards are too expensive and the CPUs are too expensive.

I bolded it just for you, you're not going to silence people just because their opinion differs from yours.

And my second point, the old post you quoted, is also something I stand by. The PC market crashed because old PCs are good enough to browse the internet, write emails, and do basic gaming. Hardware makers will be forced to lower prices, and AMD is doing a horrible job at it, because they keep raising prices, and I know no one who is interested in throwing away their DDR4 in the middle of record inflation.

You have done enough to discredit yourself; he doesn't need to go through your post history for the rest of us to see it.

Now, back to the review: excellent job. I'm glad you guys didn't use JEDEC memory like AnandTech did, and it's also nice to see the PBO numbers.

I'm not so sure. The 7800X3D has a 700 MHz lower boost clock than the 7950X3D, so it's quite likely the gains may only be in the single digits vs. a 5800X3D. And with it costing $450 vs. $400 for a 13700K, I think it's a bust, especially in mixed workloads.

That 700 MHz lower clock is compared to the non-cache CCD in the 7950X3D, though?
 
Who is "the rest of us" ? You're part of a Tomshardware gang or something?

Here is what I said.

I said AM5, DDR5, and this CPU are too expensive. And if you think that discredits someone and justifies attacking a person, I suggest you take your head out of the AMD Kool-Aid barrel, because I am far from the only one who feels this way. Take a look at the best-selling AMD CPUs right now: not one in the top 10 is on AM5.

AMD has made a major mistake tying AM5 to DDR5.

AMD CPU and DDR5 prices are fine right now; it's the motherboards that are expensive and holding people back. And you are free to feel however you want with your own budget. However, the rest of us don't need to see you complaining about pricing in every AMD article; we get it.
 
Prices need to come down drastically on all hardware, especially on the memory and mobo. 32GB DDR5 is still twice as expensive as 32GB DDR4 and AM5 boards are still ridiculously overpriced.
While AM5 motherboards are expensive, DDR5 IS NOT twice as much as DDR4. A 2x16GB DDR4-3600 CL18 kit starts at $70, whereas a 2x16GB DDR5-5600 CL36 kit starts at $110.
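A quick check of the premium those street prices imply (the dollar figures are the ones quoted above, a snapshot, not live pricing):

```python
# Kit prices quoted in the post above (USD); a snapshot, not live data.
ddr4_kit = 70    # 2x16GB DDR4-3600 CL18
ddr5_kit = 110   # 2x16GB DDR5-5600 CL36

ratio = ddr5_kit / ddr4_kit
print(round(ratio, 2))  # 1.57 -> a ~57% premium, well short of 2x
```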
 
Can we get some 4K (UHD) gaming benchmarks for MSFS, which showed massive FPS improvement over Intel with this CPU at HD (1080p) and QHD (1440p)? I know that normally at 4K the performance margin between CPUs diminishes to very small differences, but I know some in the MSFS sim community who are looking at upgrading their older hardware, and now that GPUs are readily available, it comes down to CPU options and how they'll perform at 4K versus other CPUs (Intel or AMD).
 
I would like to see some new tests included using AI engines. You could run something like Real-ESRGAN to upscale some anime videos. Granted, if you have a 4090 you'd be using the GPU, but there are CPU options as well.
 
AMD has made a major mistake tying AM5 to DDR5.
Except they didn't. It was smarter to design Zen 4 with a single IMC (it makes the chip cheaper to develop) and focus on the NEW technology. Remember, Zen 4 is used in the Genoa server CPUs as well, which can have 96c/192t. It makes no sense to design that for DDR4, and 12-channel RAM for what, a single generation of CPUs? Had they kept 8-channel RAM, you would be losing per-core bandwidth. Instead AMD did the smart thing and went with DDR5 + 12 channels, so they get 50% more bandwidth PER channel plus 50% more channels. This is why a single-socket Genoa has MORE theoretical RAM bandwidth than a dual-socket Rome/Milan.
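That last claim checks out on paper. Assuming DDR5-4800 for Genoa and DDR4-3200 for Rome/Milan (the standard launch speeds), peak theoretical bandwidth per channel is MT/s times 8 bytes:

```python
# Per-channel peak bandwidth: transfer rate (MT/s) x 8 bytes (64-bit channel).
def chan_mb_s(mt_per_s: int) -> int:
    return mt_per_s * 8  # MB/s

genoa_1s = 12 * chan_mb_s(4800)     # 1 socket  x 12ch DDR5-4800
milan_2s = 2 * 8 * chan_mb_s(3200)  # 2 sockets x  8ch DDR4-3200

print(genoa_1s / 1000, milan_2s / 1000)  # 460.8 vs 409.6 GB/s
```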
 
12700k



For me to buy a new system, two things would need to happen.

1) The shader compile stuttering on PC needs to be solved. There is no point in a game running at 200 fps if it drops to 10 fps each time a shader needs to be compiled. The solution proposed by Digital Foundry, doing the shader compiles during loading and letting people wait 15 minutes, is, imo, ridiculous. My guess is that we will see a hardware solution to this problem, unless Nvidia and AMD agree on a unified driver architecture, which is unlikely.

2) Prices need to come down drastically on all hardware, especially on the memory and mobo. 32GB DDR5 is still twice as expensive as 32GB DDR4 and AM5 boards are still ridiculously overpriced.

All of these things are fair. Maybe DirectStorage will help with the first.
The motherboards are pricey. My best guess is that the PCIe Gen 5 and DDR5 wiring is expensive due to the signaling requirements. Looking at the Intel equivalent to my board, it doesn't support as many Gen 5 M.2 drives as the AMD board but costs only 70 dollars less. Another possibility: they priced in future lost motherboard sales from folks only switching out CPUs and not the motherboard.
That is my plan, anyway. I updated my CPU specs in my sig.
 
  • Like
Reactions: Thunder64
I'm not so sure. The 7800X3D has a 700 MHz lower boost clock than the 7950X3D, so it's quite likely the gains may only be in the single digits vs. a 5800X3D. And with it costing $450 vs. $400 for a 13700K, I think it's a bust, especially in mixed workloads.
Nah, the 7800X3D will do just fine. The article says right there that the stacked CCD on the 7950X3D is 500 MHz slower than the other, so for cache-sensitive workloads the 7800X3D is just 200 MHz slower.
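The clock arithmetic behind that reply, using the figures from the thread (advertised boost clocks; treat them as approximate, since real boost behavior varies):

```python
# Figures from the thread (GHz): advertised boosts, approximate in practice.
boost_7950x3d = 5.7  # peak boost, reached on the non-stacked (frequency) CCD
stacked_delta = 0.5  # per the review: the V-Cache CCD runs ~500 MHz slower
boost_7800x3d = 5.0  # implied by the post's 700 MHz gap to the 7950X3D

stacked_ccd = boost_7950x3d - stacked_delta  # ~5.2 GHz on the cache CCD
gap = stacked_ccd - boost_7800x3d            # ~0.2 GHz effective deficit
print(round(gap, 1))  # 0.2
```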
 
I already have DDR4. AMD expects me to just throw this into the trash and buy expensive new DDR5 for a 1% difference in performance. It even bothers me from an e-waste perspective, let alone the financial cost.

I turned mine into a TrueNAS Scale server with 80 or so TB of ZFS storage.

Now render unto Caesar his salad:smilingimp:
 
  • Like
Reactions: bit_user
Every single media outlet beating the "DDR5 cost" dead horse over and over again.
When talking about ultimate gaming performance, which these 3D chips are designed for, saying the platform cost of DDR5 is a negative is stupid.
DDR4 is a dead end and should only be considered for budget builds and upgrades on older systems at this point.

What's super annoying about it is that every outlet extolling this DDR4 virtue was testing their 13900s with DDR5.
 
No, it's typical AMD fan behavior. I made a negative comment about AMD, you are emotionally attached to this brand, and you felt the need to search through all my posts to try to discredit me. It's creepy fan behavior.

That's not going to change my opinion on this product. I have said it before: AM5 is too expensive. DDR5 is too expensive. The boards are too expensive and the CPUs are too expensive.

I bolded it just for you, you're not going to silence people just because their opinion differs from yours.

And my second point, the old post you quoted, is also something I stand by. The PC market crashed because old PCs are good enough to browse the internet, write emails, and do basic gaming. Hardware makers will be forced to lower prices, and AMD is doing a horrible job at it, because they keep raising prices, and I know no one who is interested in throwing away their DDR4 in the middle of record inflation.

Homie, no one is buying a 13900 and running DDR4. Get a grip.
 
I have said it before: AM5 is too expensive. DDR5 is too expensive. The boards are too expensive and the CPUs are too expensive.
The SAME can be said about going from an Intel DDR4-based computer to an Intel DDR5-based computer, as far as RAM is concerned.

I just did a quick price check with a 7950X, an Asus Strix X670E-E Gaming, a 13900K, an Asus Strix Z790-E Gaming, and the same Corsair 5600 MT/s CL36 RAM (could use faster RAM, but still).

RAM: same price. Face it, very few would buy a new system and use DDR4 on the Intel system. Boards: for this pairing, the AM5 board was the SAME price as the Intel version, so that argument is FALSE. The CPUs: the 7950X is $150 more, so that part is true. So, based on regular prices, you are HALF right.
BUT currently everything I have listed is on sale, and taking that into account the AMD system is less expensive by 10 bucks :) This isn't factoring in anything else in the system, like cooling for example.

Homie, no one is buying a 13900 and running DDR4. Get a grip.
And that's half of his argument, it seems. Very few would buy a 13900K and pair it with the DDR4 Intel platform; that is just asking for another board upgrade in 1-2 years, and at that time you may as well upgrade the CPU as well.

They don't; upgrades are almost never worth it for CPUs within 2 generations of each other.
Heh, I went from a 3900X to a 5900X about a year after getting the 3900X and saw quite the performance bump in transcoding with HandBrake: a Blu-ray went from about 60 minutes, I think it was, to about 30 minutes.
 