AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed

In what universe do you expect "sooner" to be any sort of reality? It will only be later, and much later. How long has it been since the first "moar cores" cry? 2006 or so? Where are we now, 2019 ...

"Moar cores" is just moar marketing tripe to mislead the gullible.

You are wrong ... a little education and research can go a long way. More cores is the direction required for improved performance. Let's ask why. Right now a 9900K@5.0GHz can pull up to 250W through the socket. Yup, up to 250W - that's insane - the same as a 16-core Threadripper.

"But doesn't Intel hold the efficiency crown?" -- well it used to, for sure. But what happened was that in order to stave off Ryzen three years ago, Intel was running low on options on how they could tweak their architecture for improved IPC so they just made faster cores. All they way up to the limit of x86 silicon, which is ~5ghz. AMD also found this 5ghz limit with Bulldozer.

So does anyone think a 6GHz "faster" core, even from Intel, is possible? It would be pulling more power through the socket than almost any cooling solution could deal with. It's not possible without becoming the laughing stock of power efficiency - especially when AMD now has 16 cores at 105W TDP. What would an 8-core Intel 14nm "6GHz" part pull? 400-500W under full load? It would be a joke.

The solution to getting power under control is a node shrink, so that less overall power is used. The problem with node shrinks is that you typically have to decrease frequency on every shrink (although AMD didn't have that issue this time - 1) they got lucky, 2) Zen 1 didn't have high clocks to start with) - that's just the way it is. So now that we've introduced the node shrink, how do we combat the lower frequencies it brings? Find ways to increase IPC ... easier said than done.
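As a rough sanity check on the power argument, the standard first-order model for CPU switching power (a textbook approximation, not a figure from the review) is:

$$P_{\text{dyn}} \approx \alpha\, C\, V^2 f$$

Frequency enters linearly, but the voltage needed to hold a higher frequency enters squared, so near the silicon limit total power climbs closer to f³. A node shrink attacks C and V instead of f, which is exactly the escape hatch described above.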

You see, it's all getting into the territory of greatly diminishing returns. 6GHz will never happen, and Intel can't do anything more on 14nm because they have already refined that node to its full potential - there's nothing left to give. I hear some people crying on here that "the AMD is barely faster than 7th gen" - well, so is Intel's 9th gen, and in some cases (gaming cases) slower. Why? Why hasn't Intel been able to make faster cores since the 7th gen? If making faster cores instead of more cores is what will make Intel win, why haven't they been able to make anything faster than 5GHz? It should be easy, right? Or maybe they just simply ... can't.

What a conundrum if you are a processor manufacturer ... what's the solution? You can always keep increasing performance with more resources (cores).

I personally love the ability to multitask. I can encode video or render 3D animations/VFX in the background WHILE playing games, I don't have to close all my applications just to get great game performance (lol), and I can do 10 things at once. You couldn't do any of that even if you had a quad core at 6GHz, lol. Not even close.
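As a minimal sketch of that kind of background multitasking (assuming a Linux box with ffmpeg on the PATH; the file names are placeholders, not anything from this thread), you can drop an encode to the lowest CPU priority so the game keeps the cores it needs:

```python
# Minimal sketch: start a low-priority background encode, then go game.
# Assumes Linux with ffmpeg installed; input/output names are placeholders.
import subprocess

encode = subprocess.Popen([
    "nice", "-n", "19",              # lowest CPU priority: yield to the game
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264", "-preset", "slow",
    "output.mp4",
])
# ... launch the game; the encode soaks up whatever cores sit idle ...
encode.wait()  # later: block until the background encode finishes
```

The more cores you have, the less the two workloads fight over the same silicon, which is the whole argument.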

Now, when you add in that I use my GPU settings to ensure my CPU is NOT the bottleneck for gaming (the reason the GPU was invented), gaming performance for my use would be practically identical to a 9900K@5.2 OC ... I get the exact same gaming experience in real-life scenarios, because, please people, it's the GPU that does the gaming - why did everyone forget this? On top of that, I can do multithreaded tasks (assuming I have a 3900X) at least 50% faster than a 9900K - all for the same price.

So please stop the "we don't want cores!" Intel propaganda. If Intel could do multi-core like AMD, they'd have everyone trained to be singing a different tune ... I promise you that. (Cinebench used to be one of Intel's prized benchmarks - now they are trying to get reviewers and journalists to abandon it ... funny, eh?)

Even phones are getting more cores ... time to get with the times ...
 
We will be building some of these up at work, replacing older junk ... can't wait! Aces at running my VM labs and whatnot.
 
You are wrong ... a little education and research can go a long way. More cores is the direction required for improved performance ...

Node shrinks do not always require speed drops. Intel's 45nm launched with faster stock speeds, and way faster overclocks, than their 65nm node did.

Clock speed has never been the be-all and end-all. We know more cores will eventually be the way to go, but software has to catch up to the hardware. 16 cores is pointless for the vast majority of consumers. We are but a small percentage of said consumers.
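Amdahl's law puts a number on that "software has to catch up" point: if only a fraction p of a program parallelizes, n cores can never deliver more than a 1/(1-p) speedup. A worked example with illustrative numbers (not benchmark data):

$$S(n) = \frac{1}{(1-p) + p/n}, \qquad p = 0.8,\; n = 16 \;\Rightarrow\; S = \frac{1}{0.2 + 0.05} = 4$$

So 16 cores give an 80%-parallel program only a 4x speedup, and no core count ever pushes it past 5x.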

And true, phones are getting more cores, but not in the same way. They basically have two CPUs in one: a smaller, low-power design, similar to Atom CPUs, plus a high-performance CPU. It's not quite the same as 8 full cores.

Intel has shown a design similar to that; AMD has yet to. But I am not sure it would work as well in the x86 world, especially when the software is not ready for it yet.
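To make the "software is not ready" point concrete: on a big.LITTLE-style x86 part, someone - the OS scheduler or the application itself - has to route the hot threads to the fast cores. A minimal sketch of doing it by hand (Linux-only; the idea that cores 0-3 are the fast ones is purely an assumption for illustration):

```python
# Minimal sketch: pin this process to the (hypothetically) fast cores 0-3
# of a heterogeneous CPU. Linux-only; the core numbering is an assumption.
import os

FAST_CORES = {0, 1, 2, 3}             # assumed IDs of high-performance cores

os.sched_setaffinity(0, FAST_CORES)   # 0 = the current process
print("now limited to cores:", sorted(os.sched_getaffinity(0)))
# Child processes inherit this mask. In practice you want the scheduler to
# make this decision for every thread automatically - which is exactly the
# software-readiness problem being discussed.
```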
 
...
And you miss the point of creating said CPU bottleneck.
No, actually I clarified it for you: "to induce an artificial situation you wouldn't see in real-world gaming, to highlight a difference you wouldn't otherwise see". The only reason it is done is that without doing it there is no difference ... it's pretty easy to understand.



And they are not "cherry picked". TH uses a pretty standard suite. I swear, every single time a review comes out, if it's Intel or nVidia it's taken as it is, but if it's AMD it's always the same crap about them trying to make AMD look bad.
I mean, it could just be that Intel still holds the gaming crown and has to fight everywhere else, maybe? Maybe all the rumors floating around beforehand were just rumors?

Huh? What are you even talking about? Just a far less comprehensive view of productivity tasks compared to almost every other major reviewer - I was comparing to their AMD reviews, not Tom's Intel reviews or "rumours" ... sheesh, is this the only review site you ever visit?
 
Node shrinks typically let you increase frequency if anything.
No. Not anymore at all -- refining a node over 6 years is what allowed Intel to increase frequency, not any die shrink - hence Intel hitting 5.0GHz on 14nm after 6 or however many years ...

Bulldozer had 5GHz at 32nm ... if your logic were right, at 7nm we should be at 10GHz ... newsflash: we aren't. And another newsflash: have you seen the clocks on Sunny Cove? Super low ...

I think anyone expecting Intel 10nm or 7nm to hit 5GHz is seriously fooling themselves ... it took Intel years and years to get to 5GHz on 14nm.
 
No, actually I clarified it for you ... The only reason it is done is that without doing it there is no difference ...

Huh? What are you even talking about? Just a far less comprehensive view of productivity tasks ...

You clarified nothing. In order to properly see an item's performance, you want the bottleneck to be on that product, not on another part. Again, how many FX CPUs are viable for high-end gaming today? When running an FX CPU with the GPU as the bottleneck, it typically performed the same. Yet very few are viable high-end gaming CPUs today.

And the issue is that people swarm around rumors. Every time TH does a review, the same crap floats to the top claiming it's biased. They use the same programs, typically with a few changes as time goes on.

And no, it's not, but it is one of my more trusted sites.

No. Not anymore at all ... have you seen the clocks on Sunny Cove? Super low ...

Sunny Cove is also a low-power part, not a high-end desktop chip.
 
No. Not anymore at all -- refining a node over 6 years is what allowed Intel to increase frequency, not any die shrink ... Bulldozer had 5GHz at 32nm ...
Process node is one factor that affects clock speeds. Architecture is another. Bulldozer had a deep pipeline that favored high clock speeds, but AMD ultimately realized that chasing clock speed was a dead end (much like Intel did with NetBurst years earlier). So they did a major overhaul with Ryzen, which resulted in lower clock speeds (more than made up for by increased IPC), and those clock speeds have been creeping back up with each new node.

Intel has had many well-publicized issues and delays with their 10nm node, hence being forced to continually refresh their 14nm node. And their initial 10nm node is still not perfect, hence it being limited to lower clock speeds and debuting on low-power mobile parts.
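A one-line back-of-envelope model shows the pipeline/clock trade-off mentioned above (a generic approximation, not from this thread): the clock period is set by the slowest pipeline stage,

$$f_{\max} \approx \frac{1}{t_{\text{stage}} + t_{\text{latch}}}$$

so splitting the same work across more, shorter stages raises f_max - which is how Bulldozer and NetBurst bought their clocks - but every branch mispredict then flushes a longer pipeline, which is where the IPC went.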
 
AMD Ryzen 7 1800X
14nm
4.0GHz turbo
$499 MSRP

AMD Ryzen 7 2700X
12nm
4.3GHz turbo
$329 MSRP

AMD Ryzen 9 3900X
7nm
4.6GHz turbo
$499 MSRP

Node shrinks did bring faster clock speeds from Zen to Zen+ and then to Zen 2.
However, Lisa Su did say her engineers expected clocks to go down with Zen 2, yet they went up 300MHz (400MHz once the 3950X launches) over the fastest 2000-series CPUs' 4.3GHz turbo.

I understand clocks aren't everything with a node shrink either, but Ryzen 3000 has increased IPC by 10-15%, and now its IPC is greater than the 9900K's. AMD's chips are also much more efficient.
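The implied arithmetic (illustrative numbers: the IPC lead is the poster's 10-15% estimate, the clocks are the listed turbos): single-thread performance scales roughly as IPC times frequency, so

$$\text{perf} \propto \text{IPC} \times f, \qquad \frac{1.13 \times 4.6\,\text{GHz}}{1.00 \times 5.0\,\text{GHz}} \approx 1.04$$

i.e. a ~13% IPC advantage cancels a 400MHz clock deficit with a few percent to spare.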
 
...
And true, phones are getting more cores, but not in the same way ... Intel has shown a design similar to that; AMD has yet to ...

This design actually interests me ... it's a question I asked many years ago when the idea of "lots of cores" was originally being floated, along with the issue of lots = small.

I do agree with you that this might be troublesome in an x86 design from a software perspective -- the same issue as now with a lot of cores: software not entirely ready / able to utilize it properly ...
 
You clarified nothing. In order to properly see an item's performance, you want the bottleneck to be on that product ...

OK, I think I'm getting you now -- so back in 2012, when they tested Bulldozer in gaming with the CPU as the bottleneck, it was so that people who planned on keeping it for gaming until 2019 would know that by 2019 it wouldn't be a high-end gaming CPU compared to the modern competition. Otherwise ... how could we have known?



Sunny Cove is also a low-power part, not a high-end desktop chip.
That's true ... but don't expect their desktop parts (whenever they arrive) to be hitting 5GHz ... it won't happen.
 
...
Intel has had many well publicized issues and delays with their 10 nm node, hence them being forced to continually refresh their 14 nm node.
Correct, and it is likely they would never have hit 5GHz had their 10nm worked sooner ...


And their initial 10 nm node is still not perfect, hence it being limited to lower clock speeds and debuting on low power mobile parts.

I think 10nm may skip desktop altogether, and we may end up going from 14nm Comet Lake in 2020 straight to a 7nm something-lake ... that would make some sense.
 
When Intel shrunk their node to 10nm for mobile chips, the clocks were lower than the previous generation of mobile chips.
When AMD shrunk their desktop CPUs' node to 7nm (roughly comparable to Intel's 10nm), the clocks were higher than the previous generation of desktop chips.

I'm surprised AMD shrunk the node of their desktop chips first, as 7nm would help battery life and thermals on mobile chips. Then again, desktop is their main market.
 
This design actually interests me ... software not entirely ready / able to utilize it properly ...

Intel's Terascale was interesting to me: 80 low-power P56 cores, but tons of performance.

OK, I think I'm getting you now -- so back in 2012, when they tested Bulldozer in gaming with the CPU as the bottleneck ...

That's true ... but don't expect their desktop parts (whenever they arrive) to be hitting 5GHz ...

Some people want to, but to me it just shows viability down the line. Typically, if a CPU performed better today when it's the bottleneck, it still performs well 5 years down the road. My 4670K performs just fine for me in gaming, although my reason for not upgrading is that I want something along the lines of the NVDIMMs that Optane for servers offers, as storage to me is the biggest bottleneck in a system.

I don't expect anything. All news I see I tend to distrust until an independent third party gets their hands on the product. I don't expect 5GHz on Intel's next desktop part, although I expect better clock speeds than AMD has, as all of Intel's process nodes are designed for high-power parts. Hell, I don't even expect 10nm for desktop at this point. I think they might be planning on skipping it in favor of their 7nm.
 
I'm surprised AMD shrunk the node of their desktop chips first, as 7nm would help battery life and thermals on mobile chips ...

Had AMD done that, it would still have been a bit of a struggle vs Intel - especially with Sunny Cove not too far off - and desktop would still have been about a wash between single- and multi-thread, with the 9900K still a slight performance outlier. The way they did it, they can take some glory in actually being ahead of Intel in almost everything except bottlenecked gaming, where everyone knows (or should know) that real-world gaming bottlenecks on the GPU anyway. Intel only has another 14nm refresh on the horizon, and a couple more cores is all they'll be adding. This gives AMD at least a little time to be seen as on top of Intel in most areas - good for brand image.

I think it made strategic sense for AMD to keep with its rollout priorities as it has been doing.
 
... I don't expect anything. All news I see I tend to distrust ...
A little distrust can go a long way ... (cough, 5.2GHz 16-core Ryzen, cough) :)

...
Hell, I don't even expect 10nm for desktop at this point. I think they might be planning on skipping it in favor of their 7nm.
I also think so ... same as I just mentioned to that hooker fella just above ...
 
Considering how most of the games/apps used in these tests historically favor Intel, and the AMD-favored games/apps are absent, this represents a worst-case scenario for AMD. And yet it still comes across as a WIN.

Basically, Zen 2 averages 3-8% behind in gaming ... but DECIMATES Intel in productivity ... for a MUCH lower price ...
  • there's a cooler cost required for Intel
  • you get the same performance on an X370 board, so you don't need to buy a new motherboard if you are upgrading
  • the power savings ... while Intel heats your room (yeah, Intel is the new Bulldozer for power/heat)
  • AMD does it at ~20% lower clock speed

Source: Google it + look at more than 1 review site
 
Is Tom's going to look into this? From Anandtech: "The new firmware (Version 7C35v12) for the motherboard contains AMD's new ComboPI1.0.0.3.a (AGESA) firmware." The firmware was only available to download on launch day, and they have a graph showing the clock being consistently higher with the new firmware. https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/5

Tom's lists both DDR4-3200 and DDR4-3600 on the test methodology page; where can I see the difference between these two RAM speeds in their graphs? Am I blind?
 
If you think Intel's next generation of CPUs is going to magically increase operating frequency to 6+ GHz, then you are sadly mistaken. Single-threaded applications are a thing of the past, and sooner or later software developers will realize that the only way they are going to get more performance out of their applications is to learn how to write proper code for multi-threaded CPUs.
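For anyone wondering what "proper code for multi-threaded CPUs" looks like in the simplest case, here is a minimal, illustrative sketch (standard-library Python; the workload is a stand-in, not anything from the review): split an embarrassingly parallel job across every core instead of hammering one hot thread.

```python
# Minimal sketch: scale an embarrassingly parallel job across all cores.
# Standard-library Python; the workload and chunk sizes are illustrative.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n: int) -> int:
    """Stand-in for a CPU-bound task (an encode chunk, a render tile...)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    chunks = [2_000_000] * cores       # one chunk of work per core
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(busy_sum, chunks))
    print(f"{cores} cores, total = {sum(results)}")
```

(Processes rather than threads, to sidestep Python's GIL; the same structure applies to real threads in C++/Rust/etc.)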

We included that information in the table on the first page.

Yeah, the commentary was a bit rushed. Unlike many other sites, we retested the entire test pool with the latest version of Windows, as that is the only way to derive accurate comparisons. Most other sites just tested Ryzen on the new OS but tested the other processors on older versions of Windows. Also, we received a BIOS update late in the game, which required retesting all AMD platforms. Again, several other sites did not do that. So I didn't have as much time to sprinkle in commentary, but I'll take test accuracy over blathering about inaccurate test results any day of the week.
 
Tom's lists both DDR4-3200 and DDR4-3600 on the test methodology page; where can I see the difference between these two RAM speeds in their graphs? Am I blind?
Look under "Article Testing Methodology Update (July 8th)"; in there you will see the graph I am talking about. It deals with the new BIOS revision and how max clocks are higher on the new BIOS.
 
Is Tom's going to look into this? From Anandtech: "The new firmware (Version 7C35v12) for the motherboard contains AMD's new ComboPI1.0.0.3.a (AGESA) firmware." The firmware was only available to download on launch day, and they have a graph showing the clock being consistently higher with the new firmware. https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/5
Anandtech tested with an older BIOS than what was available. We tested with the correct BIOS the first time. No need to retest. View: https://twitter.com/gavbon86/status/1148249785306140673
 
Tom's lists both DDR4-3200 and DDR4-3600 on the test methodology page; where can I see the difference between these two RAM speeds in their graphs? Am I blind?
The stock memory frequency (DDR4-3200) is used for stock testing. The overclocked memory frequency (DDR4-3600) is used for the OC configurations.
 