AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed

I would agree, but now the 9900K is no longer relevant for content/production (except maybe if you only use Adobe and Quick Sync) -- $500 is a bit of a hard sell for a purely gaming chip for most people ... but there is that niche of buyers who can afford not to care about cost.
While Quick Sync destroys AMD for export times in Premiere, I've seen that AMD's extra cores make scrubbing through the timeline much smoother than Intel, since the iGPU doesn't help there.
 
Reactions: joeblowsmynose

cmmarco

Distinguished
Jan 6, 2009
5
0
18,510
So, to answer the question no one has asked... I just swapped a Ryzen 1700 out for a 3600X in an ASRock Taichi X370 (with a supported BIOS).
I was hoping the only thing I'd lose is PCIe 4.0, and that seems to be the case for the most part.

I had liquid cooling on the 1700 (OC'd to 3.8 GHz), and with that the 3600X seems to run all cores at just over 4.1 GHz under load and max out at 4.5 GHz single-core (with no specific OC changes; stock is 3.8/4.4).

I was disappointed I was still limited to 3200 MHz on my memory (using 3600 MHz rated memory). Anything 3400 and above would not POST, but memory has always been finicky.

I don't think PBO is ready for prime time, at least not from what I see. Enabling it with minimum settings seems to do nothing for clocks but adds instability (sound issues in Win10 and in games); on max settings I gain about 100-200 MHz on all cores (4.3 GHz), but games seem to crash.

I had none of these issues on the OC'd 1700 with the same BIOS. So either way, a worthy upgrade without having to buy a new motherboard. I am sure the kinks in PBO will be worked out at some point.
 
The 3700X is less than 5% slower on average in 1080p gaming while using 60% less power, all at 67% of the cost; IMHO that means the 9900K is irrelevant.
I should explain myself.

The 9900K still games amazingly, so it's not obsolete. The 9900K is not a good value now that Ryzen 3000 has launched, but it may be relevant for some very specific uses -- for example, if you want to use Quick Sync for faster Premiere exporting, or you already have an Intel 300-series motherboard.

You SHOULD buy AMD right now for most uses and price points, but if you need Intel, the 9900K is still relevant.
 
Last edited:

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor
Feb 24, 2015
858
315
19,360
Second this. No sites whatsoever are reporting 3800X benchmarks, but it seems to be in stock and available to buy (at least it is here in New Zealand). When can we expect a review of this chip, which seems essentially identical to the 3700X save for slightly higher clocks and a significantly higher price?

Soon. Very soon.
 
Unfortunately, it seems like by the time Intel's 14nm+++ comes out in 2020, AMD will be on 7nm+.

I want Intel's new chips to come out soon, as I love competition.

I'm more and more impressed by 3rd-gen Ryzen the more I look at the 3600. Makes me want one sooo bad.
 
Reactions: Soaptrail

xxxlun4icexxx

Honorable
Jun 13, 2013
519
5
11,065
The 2080 Ti, according to the rather "not properly representative" Steam survey, has less than half of one percent of share, with the 2080 just above half of one percent.

It's not about 4K at all - that other guy was reading into things I didn't say. It's about whether your CPU is bottlenecked or not. The fact that it's almost impossible to CPU-bottleneck a midrange card at any resolution (as another poster above confirmed), and that 75%+ of all cards on the Steam survey (likely many more in real life) are midrange cards, means that no one is playing with a bottlenecked CPU. Also, 4K represents only 1.5% of displays, just 3X the number of 2080 Tis ... lol.

Some people just plain don't get this though, and it's precisely the problem I am talking about - KingGremlin proved my point that this perception problem is a real thing. People now somehow believe that since reviewers induce CPU bottlenecks, that's how it is when everyone plays games. The Steam survey, as flawed as the data is, shows the exact opposite. No one is playing with a bottlenecked CPU. Why would you waste GPU resources to do that?

That's actually incorrect. More and more people are Twitch streaming now. Almost everyone I know who games has a Twitch channel and attempts to stream. Guess what their limiting factor is going to be? =p. CPU power for gaming is more relevant now than ever because of this. Ever try gaming and streaming on a 9600K with a 2080 Ti (or any GPU, for that matter)? Yeah, it doesn't work out too well. Enter a beefy, speedy CPU with multi-threading. So while your point about 4K 60 Hz gaming not bottlenecking a CPU by itself is technically true, the rise of streamers is demanding more CPU power day by day regardless, making your "almost impossible" claim not just misguided; CPU bottlenecks are actually very common. Even more so for folks who stream non-AAA / less GPU-demanding titles.

Edit - And don't get me wrong, I understand what you're saying about how you wouldn't be pushing those frames in a real-life scenario, but it is still an indicator of the general gaming performance of one CPU vs. another.
 
Last edited:

Soaptrail

Distinguished
Jan 12, 2015
302
96
19,420
That's actually incorrect. More and more people are Twitch streaming now. Almost everyone I know who games has a Twitch channel and attempts to stream. Guess what their limiting factor is going to be? =p. CPU power for gaming is more relevant now than ever because of this. Ever try gaming and streaming on a 9600K with a 2080 Ti (or any GPU, for that matter)? Yeah, it doesn't work out too well. Enter a beefy, speedy CPU with multi-threading. So while your point about 4K 60 Hz gaming not bottlenecking a CPU by itself is technically true, the rise of streamers is demanding more CPU power day by day regardless, making your "almost impossible" claim not just misguided; CPU bottlenecks are actually very common. Even more so for folks who stream non-AAA / less GPU-demanding titles.

Edit - And don't get me wrong, I understand what you're saying about how you wouldn't be pushing those frames in a real-life scenario, but it is still an indicator of the general gaming performance of one CPU vs. another.

I understand what you are saying, but wouldn't any Intel 8-core suffice for gaming + streaming? How much clock speed does streaming actually need? I have not seen any reviewers state that streaming needs high clocks the way gaming does.

I look at 1080p gaming tests as synthetic benchmarks. Usually I skip synthetic benchmarks and go to the real-world benchmarks. After reading this thread, I need to skip 1080p CPU testing unless it shows a large variance; then I think we will care, but not for the little variances we are seeing on the Ryzen 3000 series CPUs.
 

xxxlun4icexxx

Honorable
Jun 13, 2013
519
5
11,065
I understand what you are saying, but wouldn't any Intel 8-core suffice for gaming + streaming? How much clock speed does streaming actually need? I have not seen any reviewers state that streaming needs high clocks the way gaming does.

I look at 1080p gaming tests as synthetic benchmarks. Usually I skip synthetic benchmarks and go to the real-world benchmarks. After reading this thread, I need to skip 1080p CPU testing unless it shows a large variance; then I think we will care, but not for the little variances we are seeing on the Ryzen 3000 series CPUs.

I don't have any official benchmarks to quote as I'm on mobile, but depending on your resolution + encoding preferences it will bring an 8-core to its knees. I've tried streaming on three different CPUs: the 7800X, the 9600K, and the 9900K. Those are the only ones I can offer insight into. I generally stream at 1080p 60 fps. Both the 7800X and the 9600K had a rough time with the medium encoding preset while playing AAA titles, causing a lot of stuttering in the stream. The 9900K has powered through everything, but at 100% utilization on all cores most of the time. Now, that is 1080p @ 60 fps. 720p @ 60 fps / 1080p @ 30 used to be the gold standard for streaming because most CPUs could not sustain anything higher. Now that newer tech is coming out, that standard will steadily rise. Of course your internet upload speed plays a part in this, as do a million other factors, but this is strictly speaking about the CPU processing/encoding.
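If you want to sanity-check this on your own box before going live, a rough way to gauge whether a CPU can sustain a given preset is to run an offline x264 encode at your target resolution/framerate and see whether it holds real-time speed. A minimal sketch, assuming ffmpeg is installed and on your PATH (the bitrate and duration are just placeholder values):

# Rough CPU-encode stress test: can this machine x264-encode 1080p60
# at the "medium" preset in real time? The 6000k bitrate and 60 s duration
# are arbitrary placeholders for illustration.
import re
import subprocess

cmd = [
    "ffmpeg", "-hide_banner",
    "-f", "lavfi", "-i", "testsrc2=size=1920x1080:rate=60",  # synthetic 1080p60 source
    "-t", "60",                                               # encode 60 seconds of it
    "-c:v", "libx264", "-preset", "medium", "-b:v", "6000k",
    "-f", "null", "-",                                        # discard output; only speed matters
]

result = subprocess.run(cmd, capture_output=True, text=True)
# ffmpeg reports progress like "speed=1.23x" on stderr; >= 1.0x means the CPU
# kept up with real time at this preset, the bare minimum for live streaming.
speeds = re.findall(r"speed=\s*([\d.]+)x", result.stderr)
if speeds:
    print(f"final encode speed: {speeds[-1]}x (need >= 1.0x to stream live)")

It's not a perfect proxy since it doesn't run a game at the same time, but if a chip can't hold 1.0x here it has no chance while also feeding a GPU.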
 

Soaptrail

Distinguished
Jan 12, 2015
302
96
19,420
I don't have any official benchmarks to quote as I'm on mobile, but depending on your resolution + encoding preferences it will bring an 8-core to its knees. I've tried streaming on three different CPUs: the 7800X, the 9600K, and the 9900K. Those are the only ones I can offer insight into. I generally stream at 1080p 60 fps. Both the 7800X and the 9600K had a rough time with the medium encoding preset while playing AAA titles, causing a lot of stuttering in the stream. The 9900K has powered through everything, but at 100% utilization on all cores most of the time. Now, that is 1080p @ 60 fps. 720p @ 60 fps / 1080p @ 30 used to be the gold standard for streaming because most CPUs could not sustain anything higher. Now that newer tech is coming out, that standard will steadily rise. Of course your internet upload speed plays a part in this, as do a million other factors, but this is strictly speaking about the CPU processing/encoding.

Interesting that you abandoned Intel HEDT, but the pricing is so expensive, and with 8 cores on the 9900K I would not do HEDT either. I wonder if AMD will maintain or reduce prices when Intel's refresh happens this fall.
 
AMD dropped Ryzen 2000 CPU prices before Ryzen 3000 launched.

For example, the Ryzen 5 2600 was $199 at launch but has been around $160 for the last few months. It's $140 now, and the Ryzen 5 1600 is around $120.

AMD's RX 500 and Vega GPUs have decreased in price over time as well.

I wouldn't be surprised if the Ryzen CPUs got cheaper if Intel rolled out new CPUs.
 
Last edited:

Thom457

Distinguished
Nov 18, 2007
22
1
18,510
Paul,
A couple suggestions.

First, post the CPU utilizations alongside the posted CPU results for each test if you can. That will reveal something many suspect and will remove any guesswork regarding which software optimizations are at work. Looking at PassMark scores for the Ryzen 9 3900X and Ryzen 7 3700X vs. the Intel i9-9900K, i7-9700K and the older 14/12 nm Ryzen models says something much different than what is being shown in many of your tests. The Ryzen 7 3700X essentially matches the 16-core Threadripper 2950X, and its single-core score matches or exceeds the i9-9900K's. To see some of the scores you posted would require something holding back the Ryzen 7 3700X beyond the 25+ samples submitted to PassMark to date. Seeing the Ryzen 9 3900X report what it does over the Threadrippers (12- and 16-core models) says the Infinity Fabric latency has been greatly improved. While many at Tom's are animated by the OCed extreme scores some Intel models can deliver, the mass market just runs them at stock clocks because it's simply too much bother to find all the tweaks for your particular setup, along with risking blowing your particular CPU. There is very little real-world benefit from manual OCs at this point.

Second, in 46 years in IT I've never seen anyone try to sell a CPU model using third-party hardware such as a graphics card. I understand why you do this, but in practical terms the differences being shown across the board for these top CPU/graphics card combinations are not perceptible to the human eye. To a twelve-year-old, maybe. Back when it was a challenge to sustain 30 FPS at 1080p some of this might have mattered, but the bulk of the gaming market is not served by this week's top CPU and graphics card. Again, adding the CPU utilizations to the gaming FPS will reveal what many already know is going on here under the covers. Just adding a standard 4-core 4790K result would speak to that too in many cases. It wasn't long ago that an ersatz 4-core i7-7700K was all the rage, if you kept it on ice.

Throughout my career, starting with CPUs far less powerful than what my watch contains today and ending with these multi-core supercomputers, I have always noted that no matter how powerful (or fast, not the same thing today) the CPUs were, there was always someone able to write something that rendered that power moot in short order. Crysis was such an endeavor. I spent much of my life in the multi-tasking and then the multi-threaded world, and on top of that rewriting software processes to make such stuff workable within the available hardware. The gaming industry tries very hard to make hardware investments obsolete each and every year, IMHO. If these extreme setups were required to get playable results out of gaming software, there would be no industry today. Not 3D, at least.

Adding a CPU utilization result to each benchmark score will serve a greater audience than just those interested in "mine is bigger than yours." Intel code optimization has been the rule in gaming for decades now. That's probably not going to change, but the relevance of the scores being shown vs. the cost of the investment needed to produce them would be clear if you showed just how much of the power of all these extra cores goes to waste vs. the barely perceptible impact of the high-end scores.
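For what it's worth, logging utilization next to each run doesn't take much. A rough sketch of the idea in Python, using psutil to sample overall CPU load while a benchmark command runs (the benchmark command here is just a placeholder for whatever test is being measured):

# Sample system-wide CPU utilization while a benchmark runs, then report
# the average and peak alongside the elapsed time.
# Requires: pip install psutil
import subprocess
import time

import psutil

BENCHMARK_CMD = ["./run_benchmark.sh"]  # placeholder for the actual test command

def run_with_utilization(cmd, interval=0.5):
    samples = []
    proc = subprocess.Popen(cmd)
    start = time.time()
    while proc.poll() is None:                 # sample until the benchmark exits
        samples.append(psutil.cpu_percent(interval=interval))
    elapsed = time.time() - start
    avg = sum(samples) / len(samples) if samples else 0.0
    peak = max(samples, default=0.0)
    return elapsed, avg, peak

if __name__ == "__main__":
    elapsed, avg, peak = run_with_utilization(BENCHMARK_CMD)
    print(f"elapsed: {elapsed:.1f}s  avg CPU: {avg:.1f}%  peak CPU: {peak:.1f}%")

A pair of numbers like "avg 35% / peak 60%" next to each benchmark bar would tell readers at a glance how much of a 12- or 16-core chip the test actually touches.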

Just a thought.
 
Reactions: joeblowsmynose

Soaptrail

Distinguished
Jan 12, 2015
302
96
19,420
AMD dropped Ryzen 2000 CPU prices before Ryzen 3000 launched.

For example, the Ryzen 5 2600 was $199 at launch but has been around $160 for the last few months. It's $140 now, and the Ryzen 5 1600 is around $120.

AMD's RX 500 and Vega GPUs have decreased in price over time as well.

I wouldn't be surprised if the Ryzen CPUs got cheaper if Intel rolled out new CPUs.

Unless AMD believes they are on par with Intel and do not need to reduce prices. I am also wondering if we will see more CPUs from AMD, like a slower-clocked 12-core CPU or a faster hexacore.
 

Soaptrail

Distinguished
Jan 12, 2015
302
96
19,420
Paul,
A couple suggestions.



Adding a CPU utilization result to each benchmark score will serve a greater audience than just those interested in "mine is bigger than yours." Intel code optimization has been the rule in gaming for decades now. That's probably not going to change, but the relevance of the scores being shown vs. the cost of the investment needed to produce them would be clear if you showed just how much of the power of all these extra cores goes to waste vs. the barely perceptible impact of the high-end scores.

Just a thought.

I am wondering if AMD is going to show some real-world gains from better-optimized code when Borderlands 3 comes out, which might lead other developers to optimize better for AMD, but that assumes Borderlands 3 shows a strong FPS gain on AMD CPUs compared to Intel.
 

joeblowsmynose

Distinguished
That's actually incorrect. More and more people are Twitch streaming now. Almost everyone I know who games has a Twitch channel and attempts to stream. Guess what their limiting factor is going to be? =p. CPU power for gaming is more relevant now than ever because of this. Ever try gaming and streaming on a 9600K with a 2080 Ti (or any GPU, for that matter)? Yeah, it doesn't work out too well. Enter a beefy, speedy CPU with multi-threading. So while your point about 4K 60 Hz gaming not bottlenecking a CPU by itself is technically true, the rise of streamers is demanding more CPU power day by day regardless, making your "almost impossible" claim not just misguided; CPU bottlenecks are actually very common. Even more so for folks who stream non-AAA / less GPU-demanding titles.

Edit - And don't get me wrong, I understand what you're saying about how you wouldn't be pushing those frames in a real-life scenario, but it is still an indicator of the general gaming performance of one CPU vs. another.

Well no, that is not incorrect, because what you are describing is an entirely different use case than my point altogether ... What you describe does highlight the need for a CPU with lots of cores and threads, though (which AMD highlighted in their E3 streaming demo vs. the 9900K - if the 9900K can be said to "game" better, why does the 3900X win at that task? (actual use-case issues aside -- no one needs to stream on the "slow" preset)). That represents how well your CPU can multitask, not how many max frames it can hit in an artificial situation.

None of that really relates to the "need" to bottleneck your CPU with a $1200 video card and quality/resolution settings on low. That's the need that doesn't exist and only shows up in gaming benchmarks and never, ever in 99.999% of regular gaming.

"And don't get me wrong I understand what you're saying about you wouldn't be pushing those frames in a real life scenario...indicator of the general gaming performance"
- Do note that those two points somewhat contradict each other. Actually, a bottlenecked CPU isn't really an indicator of gaming performance at all - it's an indicator of something, but hardly of real-world gaming. Here's where the issue lies - in that contradiction.

And I'm not asking to outlaw low-quality, low-res, $1200-GPU benchmarks; I'm asking that reviewers make a distinction in benchmark numbers between typical real-use-case scenarios and wholly artificially induced ones. The onslaught of people and fanboys who don't understand this dichotomy between real life and artificial is getting a bit silly - even relatively intelligent people are losing sight of this reality. What would be wrong with showing the CPU differences on other cards besides a 2080 Ti? The GTX 1060 is still the most popular card by a massive margin, so why not show the difference on the card that almost everyone has - a benchmark they can relate to? Why not a 2070 and a 1070, so we can see the difference between them in real-world values? Why hide the "real-world" values? Why only show values that 99.999% of people will never see? Are there third-party incentives at play here? It is disingenuous and a disservice to people who read the reviews and come away believing the wrong things about the results.

And don't get me wrong, I was actually prepared to make that same speech to AMD fanboys if Zen 2 had ended up wiping the floor with Intel in gaming ... It obviously goes both ways - that issue has nothing to do with "brand".
 
Last edited:
Reactions: Soaptrail

joeblowsmynose

Distinguished
Unfortunately, it seems like by the time Intel's 14nm+++ comes out in 2020, AMD will be on 7nm+.

I want Intel's new chips to come out soon, as I love competition.

I'm more and more impressed by 3rd-gen Ryzen the more I look at the 3600. Makes me want one sooo bad.

I'm willing to bet that Intel will go no-holds-barred to get the 10-series to market as fast as possible ... their prices (on the yet-to-launch lineup) look to have come down strategically since 9th gen to meet the new Ryzen lineup, and every chip has SMT enabled. Maybe we'll see them this holiday season ... and by the looks of it the relevancy of the i5 (after Zen 2) might be restored ... we'll see.

I'm also betting that they won't bother with 10nm for desktop - they're probably working on 7nm right now for those parts, but it might be a year or more out - I'm just speculating.
 

joeblowsmynose

Distinguished
Same as those who would spend stupid amounts on anything.

However, the 9900K was never geared towards content creation. Intel has always had a pretty big divide there and has always pushed the HEDT platform for creator/workstation-class systems, with mainstream, and especially the K series, being more gamer-focused.

I am still confused by AMD's decision on core count. I feel like the 3950X will just eat into HEDT sales, especially since 32 cores and up will not benefit most consumers, even workstation-class ones.

Missed this comment earlier ...

I think AMD was pretty strategic in their push to 16 cores for mainstream ... but there's one "if".

"If" AMD plans on Threadripper to span 16-64 cores - then Intel's HEDT platform is decimated. 16 core Ryzen encroaching on 9980x territory does not bode well at all for Intel's expensive HEDT lineup.

Then with TR spanning 16-64, or maybe 24-64 cores, Intel has no immediate answer for HEDT. Prices would need to remain reasonable on the TR side, of course, for that to work out, but low pricing is still a major weapon AMD is willing to use - they're still in "gain brand awareness" mode in their marketing, and they'll need more share before they move on from that, in my opinion.

Sure, it screws over previous TR owners in a sense, but that's what progress brings, and it's not unreasonable for that to be part of AMD's roadmap in order to gain an advantage over Intel.

Makes sense to me from that perspective.
 
Coffee Lake screwed over Kaby Lake owners.
If someone dropped $350 on a 4-core 7700K in early 2017, they wouldn't be happy when the 6-core 8700K launched in late 2017 and wasn't backwards compatible with Z270.

Luckily, 3rd-gen Ryzen is backwards compatible with 300-series boards. I don't think 4th gen will be, since the BIOS ROM chips are already full with all of the current CPUs.
 
Jun 7, 2019
7
0
10
Why doesn't the 3900X perform better in games, since it has 64MB of Level 3 cache, twice that of the other Zen 2 chips? I'm a bit confused by this.
 
Missed this comment earlier ...

I think AMD was pretty strategic in their push to 16 cores for mainstream ... but there's one "if".

"If" AMD plans on Threadripper to span 16-64 cores - then Intel's HEDT platform is decimated. 16 core Ryzen encroaching on 9980x territory does not bode well at all for Intel's expensive HEDT lineup.

Then with TR spanning 16-64, or maybe 24-64 cores, Intel has no immediate answer for HEDT. Prices would need to remain reasonable on the TR side, of course, for that to work out, but low pricing is still a major weapon AMD is willing to use - they're still in "gain brand awareness" mode in their marketing, and they'll need more share before they move on from that, in my opinion.

Sure, it screws over previous TR owners in a sense, but that's what progress brings, and it's not unreasonable for that to be part of AMD's roadmap in order to gain an advantage over Intel. Mainstream would eat into it.

However, there are still some advantages to HEDT over mainstream, mainly more PCIe lanes and memory channels.

Makes sense to me from that perspective.

If AMD makes a 16-core TR 2, they would be just as stupid as Intel when they made a 4-core HEDT CPU that was devastated by mainstream quad cores.

Why doesn't the 3900X perform better in games, since it has 64MB of Level 3 cache, twice that of the other Zen 2 chips? I'm a bit confused by this.

More cache doesn't always mean more performance. On Zen 2 the L3 is split into 16MB slices, one per CCX, so a game thread only gets to use the slice local to its own CCX; the 3900X's 64MB total doesn't act as one big pool. Cache latency matters more than total size, TBH.
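A quick back-of-the-envelope illustration (the CCX counts below are the standard Zen 2 figures, so treat this as a sketch):

# Effective L3 per CCX on Zen 2: the total pool is partitioned per CCX,
# not shared, so a single game thread only sees its local 16MB slice.
chips = {
    # name: (total L3 in MB, number of CCXs)
    "Ryzen 9 3900X": (64, 4),
    "Ryzen 7 3700X": (32, 2),
}

for name, (l3_total, ccx_count) in chips.items():
    per_ccx = l3_total / ccx_count
    print(f"{name}: {l3_total}MB L3 across {ccx_count} CCXs -> {per_ccx:.0f}MB per CCX")

# Both work out to 16MB per CCX, which is why doubling the total L3
# doesn't translate into doubled gaming performance.

Same slice size per CCX on both chips, so games see essentially the same cache either way.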