AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed


bit_user

Polypheme
Ambassador
@PaulAlcorn ,

Whoa, that PCIe 3.0 vs 4.0 comparison chart, on the first page, is seriously flawed.

First, it lists the aggregate bandwidth of PCIe 3.0 x16 (or else it's talking about x32, which is irrelevant to this article). That's misleading, since it's pretty rare to have an evenly-balanced dataflow in both directions. People normally quote just the unidirectional bandwidth, which is ~16 GB/sec for x16.

Second, and the real head-scratcher, is PCIe 4.0's claim of 128 GB/sec and 32 GHz. This is badly flawed. It should be only twice as fast as PCIe 3.0.

I realize it's a big article and involved a lot of testing, etc. But, I still can't fathom how that slipped through.
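
As a rough sanity check, here's a back-of-envelope calculation (assuming the standard 128b/130b line coding used by PCIe 3.0/4.0); for x16 links the chart should come out around ~16 GB/s and ~32 GB/s per direction, not 128 GB/s:

```python
# Back-of-envelope PCIe bandwidth check. Assumes the standard 128b/130b
# line coding used by PCIe 3.0 and 4.0; real-world throughput is a bit
# lower still due to protocol overhead.
def pcie_bandwidth_gbs(transfer_rate_gt_s, lanes, bidirectional=False):
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    encoding_efficiency = 128 / 130                           # 128b/130b coding
    per_lane = transfer_rate_gt_s * encoding_efficiency / 8   # GT/s -> GB/s
    total = per_lane * lanes
    return total * 2 if bidirectional else total

print(pcie_bandwidth_gbs(8, 16))                       # PCIe 3.0 x16: ~15.8 GB/s one way
print(pcie_bandwidth_gbs(16, 16))                      # PCIe 4.0 x16: ~31.5 GB/s one way
print(pcie_bandwidth_gbs(16, 16, bidirectional=True))  # PCIe 4.0 x16: ~63 GB/s aggregate
```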
 
Last edited:

Ncogneto

Distinguished
Dec 31, 2007
You don't have to. Both are single threaded applications with random addons that may support multithreading, so you know Intel is going to win, just like most stuff from Adobe. People like Ncogneto don't seem to grasp how much commonly used software doesn't benefit at all from increased core counts beyond a few.
If you think Intel's next generation of CPUs is going to magically increase operating frequency to 6+ GHz, then you are sadly mistaken. Single-threaded applications are a thing of the past, and sooner or later software developers will realize that the only way they are going to get more performance out of their applications is to learn how to write proper code for multi-threaded CPUs.
 
  • Like
Reactions: Soaptrail

Ncogneto

Distinguished
Dec 31, 2007
You don't have to. Both are single threaded applications with random addons that may support multithreading, so you know Intel is going to win, just like most stuff from Adobe. People like Ncogneto don't seem to grasp how much commonly used software doesn't benefit at all from increased core counts beyond a few.

You also fail to realize you're looking at a mature Intel platform running on a $200-$300 water-cooling setup while the AMD CPU is running on a stock cooling solution. Pretty sure with a few BIOS revisions and a Windows update, AMD will make Intel look even more foolish.

Now run along and go play your game.
 
Unless you're trying to give an indication of "performance-per-MHz" of varying architectures, yes, comparing clock speeds between differing architectures is a fundamentally invalid comparison (it's also not exactly an accurate predictor of per-core performance).

I never said between uArchs, but it still is an advantage nonetheless. Both Intel and AMD have hit a fundamental performance wall, and the only things that help right now are clock speed or more instructions per clock, with the latter causing much more power draw than the former.

Clock speeds between AMD and Intel are not apples to apples. Bulldozer hit 5 GHz and it was a terrible CPU. Just because it could hit 5 GHz did not make it a good chip.

I never said they were apples to apples, but considering how close the single-threaded performance is, the per-core clock speed is still an advantage to be had. If AMD were hitting the same clock speeds, I have no doubt we would have a true fight on our hands.

You may want to watch this video from world-renowned overclocker der8auer:

View: https://www.youtube.com/watch?v=WXbCdGENp5I


He has 10 CPUs (a mix of 6/8/12 cores). Forget overclocking - he couldn't get any of his 8- or 12-core CPUs to hit AMD's advertised max clocks (4.5 GHz for the 3800X, 4.6 GHz for the 3900X) even with a custom water-cooling loop. There were leaks before launch that these chips would hit 5 GHz. Der8auer says in this video to forget that; he has chips that wouldn't hit 5 GHz using liquid nitrogen.

I wonder if this is due to the process being similar to the 14nm they used, which was a low-power process tech.

Sort of takes the point out of every chip being unlocked if the boost rates are better than manual overclocking. Kind of killed that selling point.

So impressive. AMD hasn't been on my radar since the Athlon days. Competition is good! I wonder how long it'll take Intel to get to 7nm. Also, where's Intel's response hardware-wise? I mean, isn't the 9900K almost a year old at this point? And it's still beating AMD in a lot of areas.

Per Intel, 2021 is when they plan for the first 7nm products to come out, most likely from Fab 42 in Chandler. I see it quite often and can tell you they have been working on it night and day.

https://www.tomshardware.com/news/intel-7nm-10nm-investor-process,39298.html

As of right now, 10nm will hit the shelves for the holiday season of 2019, and then 1H'2020 for server.

Of course, this is all to be seen.

Another note is that Intel's 10nm should be, unless they vastly changed it, more dense than TSMC's 7nm. We shall see though.
 

nobspls

Reputable
Mar 14, 2018
... sooner or later software developers will realize that the only way they are going to get more performance out of their applications is to learn how to write proper code for multi-threaded CPUs.

In what universe do you expect "sooner" to be any sort of reality? It will only be later, and much later. How long has it been since the first "moar cores" cry? Like 2006? Where are we now, 2019....

"Moar cores" is just moar marketing tripe to mislead the gullible.
 
The 3800X is practically the only option for me; my 2700X, properly set up, is too close to the 3700X to bother with or to spend/lose money on. The 3900X is a possibility, but I have yet to see if it's fully compatible with my motherboard - there isn't even a proper BIOS yet.
Maybe the production process is still not mature enough for higher frequencies?
 
So impressive. AMD hasn't been on my radar since the Athlon days. Competition is good! I wonder how long it'll take Intel to get to 7nm. Also, where's Intel's response hardware-wise? I mean, isn't the 9900K almost a year old at this point? And it's still beating AMD in a lot of areas.
define : "a lot" - because apart from very high FPS gaming (like 720p CS:go) and pure frequency, Intel lose at:
- Instruction Per Cycle
- thread-aware productivity software
- thermals
- power consumption (105W boost compared with 95W base clock)
- PCI-E support
- backward compatibility
- bundled cooling
- AES
- core count
- security
- process
- price
 
  • Like
Reactions: Soaptrail
But it would be nice to see some tangible improvement regarding fps with CPUs. The GPU still remains king when it comes to a quality gaming build.
You seem to be looking at this a bit wrong. The main reason a faster CPU generally doesn't improve gaming performance much is because even a mid-range processor is already fast enough to not limit performance significantly under typical usage scenarios. The graphics card is typically what's holding performance back more than anything. As cards become faster, that extra performance is generally used to drive higher resolutions and more demanding visuals, so the CPU usually isn't much of a limiting factor in most games, at least for anyone not specifically targeting maximum frame rates on a high refresh rate display. And past a certain point, the benefits of higher frame rates tend to diminish.

Even the differences in gaming performance shown in this review are greatly exaggerated over what almost anyone will encounter in an actual system. Running games at 1080p resolution on a 2080 Ti makes for what is practically a synthetic benchmark. It's useful for showing potential performance differences in hardware when pushed to an extreme, but not really representative of the kind of performance people should expect to see in the real world. At resolutions considered "sensible" for a given graphics card, these Ryzen 3000 processors should perform within a couple percent or so of the fastest Intel offerings, and the prior Ryzens will be close behind. That's why putting that money toward graphics hardware typically makes a lot more sense.
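
To put that reasoning in concrete terms, here's a toy frame-time sketch (all numbers are invented purely for illustration, not measurements from this review): per frame, whichever of the CPU or GPU takes longer sets the frame rate, so once the GPU is the slower component, CPU differences mostly disappear.

```python
# Toy frame-time model: per frame, the slower of CPU and GPU work sets the
# frame rate. All numbers are invented for illustration only, not measurements.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

fast_cpu_ms, slow_cpu_ms = 5.0, 7.0    # hypothetical CPU time per frame
gpu_1080p_ms, gpu_4k_ms = 6.0, 14.0    # hypothetical GPU time per frame

# GPU lightly loaded (1080p): the CPU difference shows up.
print(fps(fast_cpu_ms, gpu_1080p_ms), fps(slow_cpu_ms, gpu_1080p_ms))  # ~167 vs ~143 FPS
# GPU-bound (4K): both CPUs produce the same frame rate.
print(fps(fast_cpu_ms, gpu_4k_ms), fps(slow_cpu_ms, gpu_4k_ms))        # ~71 vs ~71 FPS
```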

While it's still early, the clock speed and overclocking advantage Intel has might make their CPUs last longer in gaming than Zen 2.
That seems questionable, especially more toward the mid-range, where there's a more relevant difference in thread counts. It seems likely that before long, a six-core Ryzen 3600 with SMT will provide more stable performance in many games than a six-core i5-9600K without it, despite being priced lower. Games are one type of software with lots of systems that can be divided into separate threads, and developers are focusing on that more as time goes on. And of course, games in the real world often run alongside various background applications and processes that are not represented in benchmark runs.

Will that core/thread advantage carry over to the eight-core models as well? Perhaps eventually, but I suspect that much like the Ryzen 5s, any of the Coffee Lake i7's should offer a suitable number of cores and threads to handle the vast majority of new releases well for some time to come, at least if one isn't streaming or something. Of course, those processors cost substantially more than a Ryzen 3600, and it comes back to that prior point that for a gaming system, most might be better off putting that money toward graphics hardware instead, where it might provide around 30% more performance right away.
 
Last edited:
  • Like
Reactions: Soaptrail

kinggremlin

Distinguished
Jul 14, 2009
If you think Intel's next generation of CPUs is going to magically increase operating frequency to 6+ GHz, then you are sadly mistaken. Single-threaded applications are a thing of the past, and sooner or later software developers will realize that the only way they are going to get more performance out of their applications is to learn how to write proper code for multi-threaded CPUs.

Of course not, on the 6GHz part. Stop acting stupid. Clock speeds aren't the only way to improve single-threaded performance. Do you really think Intel's architecture engineers have been sitting around for 4 years since Skylake was released, doing nothing while waiting for the process engineers to get 10nm straightened out? Intel is claiming an 18% IPC improvement with Sunny Cove. Not up to 18%, but 18% on average. Will that pan out? Eh, who knows. Even taking marketing slant into account, I wouldn't expect the improvement to be any less than half their claim, which would still put it in double digits. When was the last time we saw a 10% or more improvement in IPC, Sandy Bridge? If we really see 15% or so with comparable clock speeds, that would be the first worthy generational upgrade in a decade.

The second part of your statement just reveals your lack of knowledge of how programming works. You can't just magically make any program use as many cores as it has available to it. It's like nobspls said above: you can't use 9 women to make a baby in 1 month. It's not a question of learning how to multi-thread applications when the code fundamentally can't be split up into parallel workloads.
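
That limit on splitting work up is essentially Amdahl's law; a minimal sketch (the parallel fractions below are hypothetical) shows how the serial portion caps the gain from extra cores:

```python
# Amdahl's law: if a fraction p of the work can run in parallel and (1 - p)
# is inherently serial, the speedup on n cores is capped by the serial part.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for p in (0.50, 0.90, 0.99):   # hypothetical parallel fractions
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"64 cores -> {amdahl_speedup(p, 64):.2f}x")
# p=0.50: 8 cores -> 1.78x, 64 cores -> 1.97x  (the serial half dominates)
# p=0.90: 8 cores -> 4.71x, 64 cores -> 8.77x
# p=0.99: 8 cores -> 7.48x, 64 cores -> 39.26x
```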
 
Last edited:
  • Like
Reactions: TJ Hooker

Soaptrail

Distinguished
Jan 12, 2015
This article at least shows us AMD can handle dual-rank DIMMs without the performance loss, as long as you only use two slots, but there is no mention of this in the review. Are the reviewers that swamped? This review seems really rushed, given the lack of commentary.
 
Per Intel, 2021 is when they plan for the first 7nm products to come out, most likely from Fab 42 in Chandler. I see it quite often and can tell you they have been working on it night and day.

https://www.tomshardware.com/news/intel-7nm-10nm-investor-process,39298.html

As of right now, 10nm will hit the shelves for the holiday season of 2019, and then 1H'2020 for server.

Of course, this is all to be seen.

Another note is that Intel's 10nm should be, unless they vastly changed it, more dense than TSMC's 7nm. We shall see though.

Definitely this is all to be seen. Intel has been pushing back their processes since Haswell. Their road maps mean less than Facebook promising user privacy.
 

salgado18

Distinguished
Feb 12, 2007
I'm curious - I keep finding the 3600X listed as 95W while the 3700X is 65W.

Is that a mistake, or does the extra 200MHz of base clock speed really drive the TDP up that high?
It's not just the clocks: a 65W processor will hold itself down to comply with its TDP, while a 95W processor has more room to work with. Both can even go beyond it in theory; the TDP is an artificial ceiling (sometimes a practical one).
 
  • Like
Reactions: TJ Hooker
I'm curious - I keep finding the 3600X listed as 95W while the 3700X is 65W.

Is that a mistake, or does the extra 200MHz of base clock speed really drive the TDP up that high?

No, that's correct information, even per AMD's own site.

It's possible that the extra 200MHz does affect it that much.

It's not just the clocks: a 65W processor will hold itself down to comply with its TDP, while a 95W processor has more room to work with. Both can even go beyond it in theory; the TDP is an artificial ceiling (sometimes a practical one).

Except it's odd that the next level of CPU, normally higher end, would have a lower TDP than the one below it.
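
For what it's worth, on AM4 the rated TDP is more of a cooler/segmentation class than a hard power cap; the actual boost budget is set by the socket power limit (PPT), which is commonly reported as roughly 1.35x the rated TDP for these parts. A rough sketch, with that ratio taken as an assumption rather than an official spec:

```python
# Rough sketch of how AMD's rated TDP relates to the actual socket power
# budget (PPT) on AM4. The 1.35x ratio is the figure commonly reported for
# these chips; it is used here as an assumption, not something taken from
# the review or an official spec sheet.
RATED_TDP_W = {"Ryzen 5 3600X": 95, "Ryzen 7 3700X": 65, "Ryzen 9 3900X": 105}

for model, tdp in RATED_TDP_W.items():
    ppt = tdp * 1.35                      # approximate boost power budget
    print(f"{model}: rated TDP {tdp} W, boost budget ~{ppt:.0f} W")
```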
 

Ncogneto

Distinguished
Dec 31, 2007
Of course not, on the 6GHz part. Stop acting stupid. Clock speeds aren't the only way to improve single-threaded performance. Do you really think Intel's architecture engineers have been sitting around for 4 years since Skylake was released, doing nothing while waiting for the process engineers to get 10nm straightened out? Intel is claiming an 18% IPC improvement with Sunny Cove. Not up to 18%, but 18% on average. Will that pan out? Eh, who knows. Even taking marketing slant into account, I wouldn't expect the improvement to be any less than half their claim, which would still put it in double digits. When was the last time we saw a 10% or more improvement in IPC, Sandy Bridge? If we really see 15% or so with comparable clock speeds, that would be the first worthy generational upgrade in a decade.

The second part of your statement just reveals your lack of knowledge of how programming works. You can't just magically make any program use as many cores as it has available to it. It's like nobspls said above: you can't use 9 women to make a baby in 1 month. It's not a question of learning how to multi-thread applications when the code fundamentally can't be split up into parallel workloads.


NO, I think the engineers have been working for 4+ years trying to get 10nm straightened out. And you could very well see a 15% increase in IPC, but also see a reduced clock speed. AMD already has the edge in IPC, btw. As for my lack of understanding, which is laughable coming from you, the point is we are hitting a clock-speed wall, and as such, an architecture that relies on clock speed alone to ramp performance is at a dead end. There is a reason why Intel has a negligible lead in some game benchmarks: they utilize a single-core 5.1 GHz boost clock. Single core. Let me repeat that again... single core. What does that tell you when a game performs better on a processor that has one core boosted significantly above the rest?

And in God's name, why would AMD choose to approach an upgrade to their architecture in this manner? Who is going to invest $1200 in a GPU only to play at 1080p? Why would anyone in their right mind spend the extra $300 for the water-cooling setup that the Intel system needs to reach those levels, on settings that they will never play at, when they could spend that money on better, faster storage or a better GPU?
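
On the IPC-versus-clocks back and forth: single-threaded throughput is roughly the product of the two, so a modest IPC edge can offset a clock deficit. A quick sketch with purely hypothetical numbers:

```python
# Single-threaded throughput is roughly IPC x clock. The numbers below are
# purely hypothetical, just to show how an IPC edge can offset a clock deficit.
def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

chip_a = relative_perf(ipc=1.00, clock_ghz=5.0)   # higher clock, baseline IPC
chip_b = relative_perf(ipc=1.10, clock_ghz=4.6)   # ~10% IPC edge, lower clock

print(f"chip_b / chip_a = {chip_b / chip_a:.3f}")  # ~1.012, i.e. roughly a wash
```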
 

joeblowsmynose

Distinguished
It did tie Intel in gaming. It tied basically the same Intel CPUs that have been on the market since 2015 with Skylake. We are in the back half of 2019 and we see the same gaming performance that we had in 2015 mainstream CPUs.

We know AMD is cheaper and comparing clockspeeds against different architectures between Intel and AMD is silly. But it would be nice to see some tangible improvement regarding fps with CPUs. The GPU still remains king when it comes to a quality gaming build.

The CPU never was, nor ever will be, a significant contributor to FPS in gaming. That is why you have to induce an artificial CPU bottleneck to even see a difference in these reviews. The reason they invented the GPU was to eliminate the CPU bottleneck - that is its sole purpose - so reviews on CPUs remove the GPU from the equation ... but that isn't real world ... again, the GPU was invented to ELIMINATE the CPU bottleneck.

Why people expect the CPU to make all the difference in the world is beyond me, especially considering that if you have bottlenecked your CPU, you are wasting precious GPU resources that you paid for ... on a 2080 Ti @ 4K with settings to get 70+ FPS in SoM, all the CPUs get 76 FPS. All of them. R5 2600 non-X, Threadripper, 9900K @ 5.2 GHz -- all exactly the same. Isn't that weird? Well, that's what happens when you let a GPU do what it is intended for - eliminate the CPU bottleneck ... why is this hard to understand for some people?

What was it that happened that made people forget that the GPU is what dictates game performance, not the CPU? How many people here bought a 2080 Ti and run at 720p or 1080p on "medium"? Yeah ... spend a small fortune and turn all the settings to "pure crap" mode? People usually pair their GPU with resolution and settings for 60 to 120 Hz TVs / monitors. While there might be a slight difference between 60 and 120 for those crazy basement twitch gamers, between 120 and 240 there is pretty much none. There are limits to human reaction times, and if one has bottlenecked their CPU to get 400+ FPS then one needs some education ...

So if you are like most people and have a refresh between 60 and 120 on your TV / monitor ... your CPU makes no difference to your gaming framerates.


Tests for this review seemed a bit cherry-picked to not favour the 3900X? Especially in productivity? Not enough time? Best to also explore others' reviews of the 3900X as well to get the broader picture.
 

Ncogneto

Distinguished
Dec 31, 2007
In what universe do you expect "sooner" to be any sort of reality? It will only be later and much later. How long has been since the first "moar cores" cry? Like 2006? Where are we now, 2019....

"Moar cores" is just moar marketing tripe to mislead the gullible.
I remember a similar argument back when we switched over from 32-bit to 64-bit OSes. Call me crazy, but hasn't Intel been upping their core counts as well?
 

Ncogneto

Distinguished
Dec 31, 2007
The CPU never was, nor ever will be, a significant contributor to FPS in gaming. That is why you have to induce an artificial CPU bottleneck to even see a difference in these reviews. The reason they invented the GPU was to eliminate the CPU bottleneck - that is its sole purpose - so reviews on CPUs remove the GPU from the equation ... but that isn't real world ... again, the GPU was invented to ELIMINATE the CPU bottleneck.

Why people expect the CPU to make all the difference in the world is beyond me, especially considering that if you have bottlenecked your CPU, you are wasting precious GPU resources that you paid for ... on a 2080 Ti @ 4K with settings to get 70+ FPS in SoM, all the CPUs get 76 FPS. All of them. R5 2600 non-X, Threadripper, 9900K @ 5.2 GHz -- all exactly the same. Isn't that weird? Well, that's what happens when you let a GPU do what it is intended for - eliminate the CPU bottleneck ... why is this hard to understand for some people?

What was it that happened that made people forget that the GPU is what dictates game performance, not the CPU? How many people here bought a 2080 Ti and run at 720p or 1080p on "medium"? Yeah ... spend a small fortune and turn all the settings to "pure crap" mode? People usually pair their GPU with resolution and settings for 60 to 120 Hz TVs / monitors. While there might be a slight difference between 60 and 120 for those crazy basement twitch gamers, between 120 and 240 there is pretty much none. There are limits to human reaction times, and if one has bottlenecked their CPU to get 400+ FPS then one needs some education ...

So if you are like most people and have a refresh between 60 and 120 on your TV / monitor ... your CPU makes no difference to your gaming framerates.


Tests for this review seemed a bit cherry-picked to not favour the 3900X? Especially in productivity? Not enough time? Best to also explore others' reviews of the 3900X as well to get the broader picture.


Exactly. But hey, I spend $1200 on a GPU to play at 1080P, doesn't everyone? SMDH
 
  • Like
Reactions: joeblowsmynose
NO, I think the engineers have been working for 4+ years trying to get 10nm straightened out. And you could very well see a 15% increase in IPC, but also see a reduced clock speed. AMD already has the edge in IPC, btw. As for my lack of understanding, which is laughable coming from you, the point is we are hitting a clock-speed wall, and as such, an architecture that relies on clock speed alone to ramp performance is at a dead end. There is a reason why Intel has a negligible lead in some game benchmarks: they utilize a single-core 5.1 GHz boost clock. Single core. Let me repeat that again... single core. What does that tell you when a game performs better on a processor that has one core boosted significantly above the rest?

And in God's name, why would AMD choose to approach an upgrade to their architecture in this manner? Who is going to invest $1200 in a GPU only to play at 1080p? Why would anyone in their right mind spend the extra $300 for the water-cooling setup that the Intel system needs to reach those levels, on settings that they will never play at, when they could spend that money on better, faster storage or a better GPU?

The same people that will pay $800 for an X570 board for a mainstream product. People who will buy no matter the results. People who always go one way because "reasons".

The CPU never was, nor ever will be, a significant contributor to FPS in gaming. That is why you have to induce an artificial CPU bottleneck to even see a difference in these reviews. The reason they invented the GPU was to eliminate the CPU bottleneck - that is its sole purpose - so reviews on CPUs remove the GPU from the equation ... but that isn't real world ... again, the GPU was invented to ELIMINATE the CPU bottleneck.

Why people expect the CPU to make all the difference in the world is beyond me, especially considering that if you have bottlenecked your CPU, you are wasting precious GPU resources that you paid for ... on a 2080 Ti @ 4K with settings to get 70+ FPS in SoM, all the CPUs get 76 FPS. All of them. R5 2600 non-X, Threadripper, 9900K @ 5.2 GHz -- all exactly the same. Isn't that weird? Well, that's what happens when you let a GPU do what it is intended for - eliminate the CPU bottleneck ... why is this hard to understand for some people?

What was it that happened that made people forget that the GPU is what dictates game performance, not the CPU? How many people here bought a 2080 Ti and run at 720p or 1080p on "medium"? Yeah ... spend a small fortune and turn all the settings to "pure crap" mode? People usually pair their GPU with resolution and settings for 60 to 120 Hz TVs / monitors. While there might be a slight difference between 60 and 120 for those crazy basement twitch gamers, between 120 and 240 there is pretty much none. There are limits to human reaction times, and if one has bottlenecked their CPU to get 400+ FPS then one needs some education ...

So if you are like most people and have a refresh between 60 and 120 on your TV / monitor ... your CPU makes no difference to your gaming framerates.


Tests for this review seemed a bit cherry-picked to not favour the 3900X? Especially in productivity? Not enough time? Best to also explore others' reviews of the 3900X as well to get the broader picture.

And you miss the point of creating said CPU bottleneck. Yes, at higher resolutions the bottleneck is not there and the difference is little to none. However, look at CPUs from 5 years ago. How many FX-series CPUs are still viable for high-end gaming? Now how many Skylake CPUs are? That's what it shows and always has: a CPU that gets, say, 20% better performance today will still be viable longer than one that does not. It's the same reason they typically max out everything in GPU tests - to make the GPU the bottleneck and see how well the GPU is doing. Running games maxed out tells you nothing about the CPU's actual performance.

And they are not "cherry picked". TH uses a pretty standard suite. I swear every single time a review comes out if its Intel or nVidia its just as it is but if its AMD its always the same crap of them trying to make them look bad.

I mean it could just be that Intel still holds the gaming crown and has to fight anywhere else, maybe? Maybe all the rumors that were floating around before hand were just rumors?
 
One point I heard is that by the time Intel's 10nm is out, AMD will be at or close to a 7nm refresh.

Unless things have changed vastly, Intel's 10nm is quite a bit more dense than TSMC's 7nm and will still be slightly more dense than TSMC's 7nm+. That's all in 2020. Then by 2021 Intel is slated to push out its 7nm, which would be more dense than TSMC's 5nm, or at least that's what the last specifications show.