Review AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed


joeblowsmynose

Distinguished
If AMD makes a 16-core TR 2, they would be just as stupid as Intel was when they made a 4-core HEDT CPU that was devastated by mainstream quad cores.
...

Nope, that wouldn't be the same at all, because you could get double the core count on mainstream at the time Intel was offering quad-core HEDT --- you can't get a 32-core mainstream consumer chip at this time. (Not that 8-core HEDT made a lot of sense either -- it did for a few, I guess.)

Also, there are platform advantages to wanting a 16-core TR over a 16-core mainstream chip that would make it worthwhile for some even at the same core count. But AMD has the choice with the TR platform to do anything from 16 to 64 cores - tons of room for any decisions in there regarding core counts. With Intel only getting to 10 cores on mainstream any time soon and 18 on HEDT (maybe they could pull a 28-core Xeon down to HEDT), AMD is going to have free rein in this area for a while, provided they don't wait a year to get Zen 2 TR out.
 
Nope, that wouldn't be the same at all, because you could get double the core count on mainstream at the time Intel was offering quad-core HEDT --- you can't get a 32-core mainstream consumer chip at this time. (Not that 8-core HEDT made a lot of sense either -- it did for a few, I guess.)

Also, there are platform advantages to wanting a 16-core TR over a 16-core mainstream chip that would make it worthwhile for some even at the same core count. But AMD has the choice with the TR platform to do anything from 16 to 64 cores - tons of room for any decisions in there regarding core counts. With Intel only getting to 10 cores on mainstream any time soon and 18 on HEDT (maybe they could pull a 28-core Xeon down to HEDT), AMD is going to have free rein in this area for a while, provided they don't wait a year to get Zen 2 TR out.

Intel's 7700K ate into 7740X sales because the 7740X was a quad-core with only dual-channel memory support on an HEDT platform. It was a stupid processor.

Yes, TR would have more PCIe lanes and more memory channels, but Ryzen would still eat into TR sales, and TR can be priced higher than Ryzen due to the platform.

My best guess is that they will offer either a 32-core part or at least more cores than the 3950X, priced higher than the 3950X. Otherwise they will start to lose sales of a more profitable platform.
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
We included that information in the table on the first page.

Yeah, commentary was a bit rushed. Unlike many other sites, we retested the entire test pool with the latest version of Windows, as that is the only way to derive accurate comparisons. Most other sites just tested Ryzen with the new OS but tested the other procs on older versions of Windows. Also, we received a BIOS update late in the game, which required retesting all of the AMD platforms. Again, several other sites did not do that. So I didn't have as much time to sprinkle in commentary, but I'll take test accuracy over blathering about inaccurate test results any day of the week.
I might trust your test results more if you showed the ability to quote the right person. :)
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
That's actually incorrect. More and more people now are Twitch streaming. Almost everyone I know who games has a Twitch account and attempts to stream. Guess what their limiting factor is going to be? =p. CPU power for gaming is more relevant now than ever because of this. Ever try gaming and streaming on a 9600K with a 2080 Ti (or any GPU, for that matter)? Yeah, it doesn't work out too well. Enter a beefy, speedy CPU with multi-threading. So while your point about 4K 60Hz gaming not bottlenecking a CPU by itself is technically true, the rise of streamers is demanding more CPU power day by day regardless, making your "almost impossible" statement misguided; the scenario is actually very common. Even more so for folks who stream non-AAA / less GPU-demanding titles.

Edit - And don't get me wrong, I understand what you're saying about not pushing those frames in a real-life scenario, but it is still an indicator of the general gaming performance of one CPU vs. another.
Streaming while gaming is multi-tasking, an area where AMD destroys Intel.

"Straight up gaming relies more on lower count core performance, so in these tests, the more cores and threads won’t be much of value for Ryzen. Even though the Ryzen 3900X doesn’t beat the Intel i9-9900K when it comes to gaming, AMD has still closed the gap on gaming performance. In some of these tests, you will notice that the frames per second are almost the same, with less than a five frame per second difference. The gap between frames per second doesn’t start to show until the GTX 2080. Some games, even if there is a numerical difference with a GTX 2080, the game is already cranking out such high frame rates (148/354/186), it would be hard for me to imagine any experiential difference. If you’re a gamer who simultaneously streams or captures and edits video, the Ryzen 3900X is the better choice as the core count advantage kicks in. "


"
For high res gaming, PC World’s Gordon Ung didn’t even bother placing benchmarks for anything higher than 2560x1440 because he explains that the gaming performance will be so similar. PCMag reinforces this by explaining that in 4K, they saw all of the CPUs perform nearly identically (within a few frames per second) with the GeForce RTX 2080 Ti.

All this makes sense as high res gaming focuses more on the performance of the graphics cards rather than the performance of the CPU. We refer to this as being “GPU-limited” versus “CPU-limited.”"



In other words, Intel got stomped. End of story.
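On the streaming point above, here is a minimal sketch (my own illustration, not from the thread) of why streaming loads the CPU so heavily: a software x264 encode runs alongside the game. It assumes ffmpeg on the PATH, Windows gdigrab screen capture, and an illustrative bitrate and placeholder stream key.

import subprocess

def start_cpu_stream(stream_key: str) -> subprocess.Popen:
    # Launch a CPU-bound libx264 screen-capture stream in the background.
    # Assumes ffmpeg is installed and on PATH; gdigrab is the Windows screen grabber.
    cmd = [
        "ffmpeg",
        "-f", "gdigrab", "-framerate", "60", "-i", "desktop",   # capture the desktop
        "-c:v", "libx264", "-preset", "veryfast",               # software (CPU) encode
        "-b:v", "6000k", "-maxrate", "6000k", "-bufsize", "12000k",
        "-pix_fmt", "yuv420p", "-g", "120",
        "-f", "flv", f"rtmp://live.twitch.tv/app/{stream_key}",
    ]
    return subprocess.Popen(cmd)   # the game then competes with the encoder for cores

# proc = start_cpu_stream("YOUR_STREAM_KEY")   # placeholder key, illustrative only

A software x264 encode at "veryfast" or slower will happily occupy many threads, and that is exactly the extra load the additional Ryzen cores absorb while the game runs.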
 
So impressive. AMD hasn't been on my radar since the Athlon days. Competition is good! I wonder how long it'll take Intel to get to 7nm. Also, where's Intel's response hardware-wise? I mean, isn't the 9900K almost a year old at this point? And it's still beating AMD in a lot of areas.
A couple of years at least. They are still pushing out "new" CPUs on 14nm well into next year!!
 
A couple of years at least. They are still pushing out "new" CPUs on 14nm well into next year!!

Only for the mainstream desktop. For mobile and server, more profitable markets, 10nm is going to be pushed, and servers will probably get 7nm faster than the others, considering it is a very high-margin market in 2021. However, the nm figure itself has become pointless. Intel's 10nm should still be denser than TSMC's 7nm. It's all just a marketing trick; Zen+ said it used 12nm when it was really just a 14nm+. They just changed the name to market it.

Everyone needs to remember that while having a majority of desktop market share is nice, HPC and server are the money makers, and while Intel may not have the core count, they have other features AMD does not yet. I say yet, as eventually most features that are worthwhile get used by both.
 
Some pretty big assumptions there, but Intel did bet heavily on 10nm and is now scrambling to fix the damage done by lost time. They could, for instance, just lower prices and stay competitive with otherwise fine CPUs. I'm sure 14nm paid for itself by now, but no, Intel must make a buck the hard way and rely on its name instead of innovation. Even more so if they do get most of their revenue from other products. The production process is not that much different for all processors anyway.
 
For mobile and server, more profitable markets, 10nm is going to be pushed
As we have seen, Intel's 10nm is struggling with clocks as of now. Their 10nm mobile chips are lower-clocked than their 14nm counterparts.

So I suspect 10nm and then 7nm will be relegated to mobile and server chips for a long time, since these markets don't require high-clocked CPUs.

Intel's desktop gaming CPUs won't come until later, once Intel can get decent clocks. While clocks aren't everything, they look good on paper. AMD said they expected clocks to go down with 7nm, but unlike Intel with its node-shrink issues, AMD's clocks went up. Let's see how AMD capitalizes on that.
 
Some pretty big assumptions there, but Intel did bet heavily on 10nm and is now scrambling to fix the damage done by lost time. They could, for instance, just lower prices and stay competitive with otherwise fine CPUs. I'm sure 14nm paid for itself by now, but no, Intel must make a buck the hard way and rely on its name instead of innovation. Even more so if they do get most of their revenue from other products. The production process is not that much different for all processors anyway.

Not really assumptions, just going off of Intel's roadmaps.

And Intel does innovate, more so in the HPC market, but they do. Optane, for example, is available as actual memory DIMMs, which are vastly faster than anything over PCIe. It will probably eventually trickle down to consumers, or so I hope.

As we have seen, Intel's 10nm is struggling with clocks as of now. Their 10nm mobile chips are lower-clocked than their 14nm counterparts.

So I suspect 10nm and then 7nm will be relegated to mobile and server chips for a long time, since these markets don't require high-clocked CPUs.

Intel's desktop gaming CPUs won't come until later, once Intel can get decent clocks. While clocks aren't everything, they look good on paper. AMD said they expected clocks to go down with 7nm, but unlike Intel with its node-shrink issues, AMD's clocks went up. Let's see how AMD capitalizes on that.

I don't think we will get a 10nm desktop part. With the ramp-up to 7nm coming so soon after, it would make sense to just jump to that and skip 10nm for consumer desktops.

As for clocks, it may not matter. If Intel can increase IPC as much as they are claiming, they might not need to match today's clock speeds to offer better performance.
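A quick back-of-the-envelope sketch of that reasoning (my numbers, purely hypothetical, not Intel's claims): single-thread performance scales roughly with IPC times clock, so an IPC gain can offset a clock deficit.

def relative_performance(ipc_gain, old_clock_ghz, new_clock_ghz):
    # New single-thread performance relative to old; 1.0 means unchanged.
    return (1.0 + ipc_gain) * (new_clock_ghz / old_clock_ghz)

# Hypothetical: +18% IPC but clocks fall from 5.0 GHz to 4.5 GHz.
print(relative_performance(0.18, 5.0, 4.5))  # ~1.06, i.e. still ~6% faster overall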
 
Goodness, it might be 2030 before they figure out 7nm.

AMD does do stupid or misleading marketing, but so does everyone else, so I don't blame them for marketing 14nm chips as 12nm. Intel has admitted to doing the same to show progression.

At least the 14nm and 12nm chips were slightly different.
 
Goodness, it might be 2030 before they figure out 7nm.

AMD does do stupid or misleading marketing, but so does everyone else, so I don't blame them for marketing 14nm chips as 12nm. Intel has admitted to doing the same to show progression.

At least the 14nm and 12nm chips were slightly different.

I doubt Intel is going to have issues with 7nm, considering they poured $7 billion into updating Fab 42 just for 7nm. That's a lot of money.

I didn't blame AMD. I should have been clear: it was the fab that named it 12nm when it was really just 14nm+. If Intel did the same, we would be on 10nm now, as every 14nm variation is different.

I only blame the fabs for the marketing. It sounds better that they have 7nm while Intel will just be on 10nm, yet their 7nm is not better per the specs. Same with TSMC's 5nm, which spec-wise is not better than Intel's 7nm.
 

paul prochnow

Distinguished
Jun 4, 2015
90
5
18,535
AMD's Ryzen 3000 series promises more performance and value via the benefits of the 7nm process and Zen 2 microarchitecture.

AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed : Read more

MSI X470 Gaming Plus, BIOS A.A
https://browser.geekbench.com/v4/cpu/13863634
First Benchmark that everyone knows.

Two S.A.N.D.R.A. sets of numbers.

SiSoftware Sandra RYZEN 9 3900X Benchmark Results CPU TEST
Aggregate Native Performance : 413.33GOPS
Dhrystone Integer Native AVX2 : 553.88GIPS
Dhrystone Long Native AVX2 : 558.47GIPS
Whetstone Single-float Native AVX/FMA : 336.55GFLOPS
Whetstone Double-float Native AVX/FMA : 282.7GFLOPS

Aggregate Native Performance
Dhrystone Aggregated-int Native : 556.17GIPS
Whetstone Aggregated-float Native : 308.45GFLOPS
Results Interpretation : Higher Scores mean Better Performance.

You see a clock speed here, and I guess SANDRA settled on 4.38GHz.
I was still happy with the stock-clock RAM and CPU.

I use the older SANDRA to do a better comparison with my older rigs.

SiSoftware Sandra MEMORY BANDWIDTH Benchmark Results
Aggregate Memory Performance : 24.73GB/s
Integer Memory Bandwidth B/F AVX2/256 : 24.7GB/s
Float Memory Bandwidth B/F FMA/256 : 24.76GB/s
Results Interpretation : Higher Scores mean Better Performance.
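For context (my own sketch, not part of the posted results): theoretical peak bandwidth for dual-channel DDR4 is the transfer rate times 8 bytes per transfer times 2 channels. The DDR4-2133 figure below is an assumed stock setting for illustration, not something stated in the post.

def peak_bandwidth_gb_s(transfer_rate_mt_s, channels=2, bytes_per_transfer=8):
    # Theoretical peak = transfers per second * bytes per transfer * channel count.
    return transfer_rate_mt_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gb_s(2133))  # ~34.1 GB/s theoretical vs ~24.7 GB/s measured above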
 
Yup. If the i7-8700K cost about as much as the slightly faster 3600X, it would be an easy sell, as the i7 wins when overclocked.

Unfortunately, the i7 at stock performs similarly in games and loses everywhere else while still costing $100 more. After factoring in the cooler cost, the Ryzen is like $150 cheaper.
 

salgado18

Distinguished
Feb 12, 2007
931
375
19,370
Yup. If the i7-8700K cost about as much as the slightly faster 3600X, it would be an easy sell, as the i7 wins when overclocked.

Unfortunately, the i7 at stock performs similarly in games and loses everywhere else while still costing $100 more. After factoring in the cooler and the extra mobo cost for Z370 boards, the Ryzen is like $200 cheaper.
And then be locked to a dead platform that tops out at the i9-9900K with 8/16 cores/threads. The 3600X is an entry-level chip with similar performance to the 8700K, and its platform goes up to 16/32 cores/threads, with possibly a new generation next year.

There is zero reason to go Intel even if they get slightly cheaper, other than pure gaming on high-refresh monitors.
 

paul prochnow

Distinguished
Jun 4, 2015
90
5
18,535
https://browser.geekbench.com/v4/cpu/13863634

It's a rather new rig: an X470 board (MSI Gaming Plus) updated to the A.A BIOS.
OK, link #2 is here, and I pushed the DDR4 up to 3333MHz. I also turned up the fan to stay sub-70C. Wild OCs will take water at least "in the home," versus an LN2 lab.

https://browser.geekbench.com/v4/cpu/13865361

BTW, where is the bragging thread? My mobo is the MSI X470 Gaming Plus; the A.A BIOS makes the Ryzen 9 go, BTW.
I have yet to up the multiplier, in case you want to know. I wonder what good OCers will get with the right stuff.

Single-Core Performance
Single-Core Score: 5589
Crypto Score: 6888
Integer Score: 5190
Floating Point Score: 5409
Memory Score: 6431
You understand that RAM reported at 1672 is half the commonly quoted speed; 3344MHz is the usual nomenclature.
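A one-liner to make that doubling explicit (my sketch; the 1672MHz figure is the memory clock from the system info below):

reported_clock_mhz = 1672                      # the memory clock Geekbench reports
effective_ddr_rate = 2 * reported_clock_mhz    # DDR transfers data on both clock edges
print(effective_ddr_rate)                      # 3344, the commonly quoted DDR4 number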

Single-Core Score: 5589 | Multi-Core Score: 47755
Geekbench 4.3.4 Tryout for Windows x86 (64-bit)
Result Information
Upload Date: July 12 2019 08:16 PM
Views: 2
System Information
Operating System: Microsoft Windows 10 Pro (64-bit)
Model: Micro-Star International Co., Ltd. MS-7B79
Motherboard: Micro-Star International Co., Ltd. X470 GAMING PLUS (MS-7B79)
Memory: 32768 MB DDR4 SDRAM 1672MHz
Northbridge: AMD Ryzen SOC 00
Southbridge: AMD X470 51
BIOS: American Megatrends Inc. A.A0
Processor Information
Name: AMD Ryzen 9 3900X
Topology: 1 Processor, 12 Cores, 24 Threads
Identifier: AuthenticAMD Family 23 Model 113 Stepping 0
Base Frequency: 3.80 GHz
Maximum Frequency: 4.53 GHz
 
Yup. If the i7-8700K cost about as much as the slightly faster 3600X, it would be an easy sell, as the i7 wins when overclocked.

Unfortunately, the i7 at stock performs similarly in games and loses everywhere else while still costing $100 more. After factoring in the cooler and the extra mobo cost for Z370 boards, the Ryzen is like $200 cheaper.

Not sure why everyone says "extra mobo costs" when a Z370 and an X470 with similar features cost about the same. Sure, you can drop to a B450, but then you drop features.
 
Agree on the cooler, although I have never, not even when I built PCs for a living, used the stock cooler for Intel or AMD. Even the Wraith cooler is not quite enough to overclock with, and any advantage you can give your CPU, even at stock with PBO+, helps. So when I look at a build, I always include a separate cooler cost to keep things even.
 

paul prochnow

Distinguished
Jun 4, 2015
90
5
18,535
Agree on the cooler, although I have never, not even when I built PCs for a living, used the stock cooler for Intel or AMD. Even the Wraith cooler is not quite enough to overclock with, and any advantage you can give your CPU, even at stock with PBO+, helps. So when I look at a build, I always include a separate cooler cost to keep things even.
I can help. I got a Ryzen 9 3900X today and ran Geekbench to see:

https://browser.geekbench.com/v4/cpu/13865361

Not bad.