AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Zen 2 and 7nm Unleashed

Look under the Article Testing Methodology Update (July 8th) section: the graph I am talking about is in there. It deals with the new BIOS revision and how max clocks are higher with the new BIOS.

They didn't test with the latest available BIOS. The version they are retesting with was posted publicly on 7/2. We tested with the correct BIOS.
 
The CPU never was, and never will be, a significant contributor to FPS in gaming. That is why you have to induce an artificial CPU bottleneck to even see a difference in these reviews. The reason the GPU was invented was to eliminate the CPU bottleneck - that is its sole purpose - so CPU reviews remove it from the equation ... but that isn't real world ... again, the GPU was invented to ELIMINATE the CPU bottleneck.

Why people expect the CPU to make all the difference in the world is beyond me, especially considering that if you have bottlenecked your CPU, you are wasting precious GPU resources that you paid for ... on a 2080 Ti at 4K with settings chosen to get 70+ FPS in SoM, all the CPUs get 76 FPS. All of them. R5 2600 non-X, Threadripper, 9900K @ 5.2 GHz -- all exactly the same. Isn't that weird? Well, that's what happens when you let a GPU do what it is intended for - eliminate the CPU bottleneck ... why is this so hard to understand for some people?

What was it that happened that made people forget that the GPU is what dictates game performance, not the CPU? How many people here bought a 2080 Ti and run at 720p or 1080p on "medium"? Yeah ... spend a small fortune and turn all the settings to "pure crap" mode? People usually pair their GPU with a resolution and settings to suit 60 to 120 Hz TVs / monitors. While there might be a slight difference between 60 and 120 Hz for those crazy basement Twitch gamers, between 120 and 240 Hz there is pretty much none. There are limits to human reaction times, and if someone has bottlenecked their CPU to get 400+ FPS, then they need some education ...

So if you are like most people and have a refresh rate between 60 and 120 Hz on your TV / monitor ... your CPU makes no difference to your gaming frame rates.
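To put that in numbers, here's a rough sketch only - the per-CPU frame rate caps below are made-up illustrative figures; the 76 FPS GPU cap is the SoM example above:

```python
# Illustrative only: the FPS you actually see is capped by the slower stage.
def delivered_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """Delivered frame rate is limited by whichever of CPU or GPU is slower."""
    return min(cpu_fps_cap, gpu_fps_cap)

# Hypothetical CPU-limited frame rates (made up for illustration):
cpu_caps = {"R5 2600": 140, "R9 3900X": 180, "i9-9900K @ 5.2 GHz": 200}
gpu_cap = 76  # GPU-limited frame rate at 4K ultra in the SoM example above

for cpu, cap in cpu_caps.items():
    print(f"{cpu}: {delivered_fps(cap, gpu_cap)} FPS")
# Every CPU prints 76 FPS, because the GPU is the limit at these settings.
```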


Tests for this review seemed a bit cherry-picked to not favour the 3900X? Especially in productivity? Not enough time? Best to also explore others' reviews of the 3900X to get the broader picture.

It wouldn't make sense to cherry-pick tests to make the chips look bad, then give the chip the strongest possible recommendation. These are the same tests we normally run, except we added MORE rendering tests, which play well to the 3900X. Does that mean we skewed the results in favor of the 3900X?
 
Yeah, I actually didn't mean "to make it look bad" - that was a poor choice of wording. I just meant it was a lot less comprehensive on the productivity side and more comprehensive on the gaming side, but at the same time this is a 12-core part. Gaming should be a secondary concern, and while I recognize you used your standard suite, I learned much more about this CPU from reviews that covered the productivity end a little more fully.

That was all I meant, really.
 
Cool. What tests specifically would you like to see? We're always open to suggestions.
 
The overclocking results were disappointing: 4.1 GHz max on all cores. Given that the 3950X does a 4.7 GHz single-core turbo boost and the 3900X does a 4.6 GHz single-core turbo boost, I'd have assumed any of the Ryzen 3000 chips would OC to 4.6/4.7 GHz on all cores with decent air/water cooling.

Power consumption: AIDA64 seems to punish AMD a lot more than Intel. When you were using Prime95, Intel was punished a lot more. It seems the switch from Prime95 to AIDA64 gives Intel an unfair advantage in the stress-test power consumption test, while Prime95 gave AMD an unfair advantage. I'd suggest using both in reviews, or finding another torture test that will fully punish both AMD and Intel for a max-load test. With such wild variation, I can't see how either is an accurate measure of a CPU under full load.

Example Review: https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html

The Intel i9-9900K hit 204.6W in your old review's stress test. This time it is only 113W.

The AMD Ryzen 7 2700X hit 104.7W in your old review. Now it is 133W.

Part of the disparity between results comes down to the BIOS. Motherboard vendors always release BIOS revisions for launches that essentially ignore power limits, at least in my opinion, or perhaps they are just raw. Later BIOS releases dial it back. These new tests with new BIOSes are a good example of that in practice. I think the best measure is an actual workload that fully stresses the cores, cache, and FPU, but actually does some sort of work. That's why we're using y-cruncher, x264, and x265. If you have other ideas, let me know. But Prime95 is unrealistic from any angle; I have yet to see any real application behave that way.
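As a side note, here's a minimal sketch of one way to reduce a logged power trace to average/peak figures - the CSV layout and the package_power_w column name are hypothetical placeholders, not the output of any particular tool:

```python
# Minimal sketch: summarize a logged CPU package-power trace into average/peak
# figures. The CSV layout and column name are hypothetical placeholders.
import csv
import statistics

def summarize_power(log_path: str, column: str = "package_power_w") -> dict:
    with open(log_path, newline="") as f:
        samples = [float(row[column]) for row in csv.DictReader(f)]
    return {
        "average_w": round(statistics.mean(samples), 1),
        "peak_w": max(samples),
        "samples": len(samples),
    }

if __name__ == "__main__":
    # e.g. a trace captured while y-cruncher or an x265 encode was running
    print(summarize_power("power_log.csv"))
```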
 
Also, one more note here. I've seen both y-cruncher and HandBrake "break" overclocks that have run Prime95 for hours. So much so, in fact, that I test with them first to validate overclocks. Both are AVX-heavy.
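If anyone wants to script that kind of shakedown, here's a rough sketch - the command strings are placeholders for whatever y-cruncher / HandBrake invocations you actually use, since exact flags vary by version:

```python
# Rough sketch of an overclock shakedown: run each AVX-heavy workload in turn
# and report any that crash, hang, or exit non-zero. The command strings are
# placeholders; substitute the y-cruncher / HandBrake invocations you use.
import subprocess

WORKLOADS = {
    "y-cruncher": ["./run-y-cruncher-stress"],   # placeholder command
    "handbrake":  ["./run-handbrake-encode"],    # placeholder command
}

def run_shakedown(timeout_s: int = 3600) -> dict:
    results = {}
    for name, cmd in WORKLOADS.items():
        try:
            proc = subprocess.run(cmd, timeout=timeout_s)
            results[name] = "PASS" if proc.returncode == 0 else f"FAIL (rc={proc.returncode})"
        except subprocess.TimeoutExpired:
            results[name] = "TIMEOUT"  # likely a hang; treat as unstable
        except OSError as err:
            results[name] = f"ERROR ({err})"
    return results

if __name__ == "__main__":
    for name, status in run_shakedown().items():
        print(f"{name}: {status}")
```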
 
I'll see if that is possible. From what I've seen it seems pretty light on the GPU, though I haven't profiled it for CPU usage. I'll take a closer look. Thanks for the suggestion.
That is contrary to my findings, depending on settings. Some settings like dynamic reflections and antialiasing are really hard on the GPU and can use over 6 GB of VRAM on my 480, especially since the game is rather unoptimized due to it being early access. If those settings are disabled, it may be light on the GPU.

Performance varies per map as well. Maybe not a great benchmark on second thought, but I would still like to see the performance.

My 4-core is rather underutilized and my 480 pegs at 100% with most settings. It would be a better GPU benchmark, I think.
 
SuperPi ... no, just kidding! NOT SuperPi, for god's sake!

My suggestions ...
For high-core-count SKUs, skew less toward gaming and more toward productivity; for low core counts, skew the other way ...

For higher core count CPUs:
Some Adobe alternatives - Vegas / DaVinci, etc. - and get rid of the synthetic PCMark tests. Not sure about Geekbench for real-world indications. Also, POV-Ray isn't needed - no one uses it, and you already have more real-world rendering that is better than POV-Ray.

Maybe some compile tests (a rough sketch of what I mean is below), or maybe some DAW performance benchmarks? Just a little broader, as I think that, given the good multithreaded performance and high core count, this CPU will probably get used across a varied range of workstation tasks, despite it not being HEDT.

So maybe I'm thinking of a bit of a dichotomy between tests for gaming vs. productivity, based on the CPU itself. Less synthetic, more real world, and a bit broader for high-core-count CPUs.

More along those lines, if I had my way. :)

EDIT: I'm also slightly curious whether the new platform had any effect on storage performance.
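On the compile-test idea, a bare-bones timing harness could look something like this (a sketch only; the build and clean commands are placeholders for whatever project and toolchain you'd standardize on):

```python
# Sketch of a compile benchmark: run a clean build several times and report
# wall-clock times. The build/clean commands below are placeholders.
import subprocess
import time

BUILD_CMD = ["make", "-j16"]   # placeholder: any large, reproducible build
CLEAN_CMD = ["make", "clean"]  # placeholder

def time_builds(runs: int = 3):
    times = []
    for _ in range(runs):
        subprocess.run(CLEAN_CMD, check=True)   # start from a clean tree
        start = time.perf_counter()
        subprocess.run(BUILD_CMD, check=True)   # the timed full rebuild
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    for i, t in enumerate(time_builds(), 1):
        print(f"run {i}: {t:.1f} s")
```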
 
Oddly, even with settings set so I get 200 FPS (still at 1080p but with all settings nerfed), my GPU is still the limiting factor. None of my 4 threads gets above 75% usage.

This might vary by map, though.

That's because you don't have a 2080 Ti. If you did, the GPU would be demanding 400 FPS, and then your CPU would start to bottleneck (not only is a CPU bottleneck unrealistic, but unless you paid well over $1000 for a GPU, it isn't even that easy to do).

But because you have a reasonable CPU/GPU combo, you may never be able to see a CPU bottleneck even if you tried your hardest. Most people would never see a CPU bottleneck in any gaming situation - that is why I have an issue with reviewers who don't give us at least a few real-world gaming numbers along with their "no real scenario" bottleneck ones.

Let us see both, so a certain type of overly susceptible person doesn't go around making comments about "CPU" gaming numbers as though they actually get these numbers at their PC at home ... when they are describing nothing other than a scenario that doesn't really exist in the real world. It's become an epidemic.

I saw a review yesterday that ran 4K resolution at ultra quality with the 2080 Ti -- every CPU got the exact same FPS, save for the R5 1600, which was down 2 FPS from all the others, including a 9900K OC'd to 5.2 GHz. All the same. Real world. Intel would be so proud of me spreading "real world gaming" words of wisdom. :)
 
The stock memory frequency (3200) is used for stock testing. The overclocked memory frequency (3600) is used for the OC configurations.

Tried to find the details in the review, but could not. Just curious.

What make/model RAM was used? Just wondering because tight timings on Samsung B-Die ICs, or an updated equivalent, do make a difference for Ryzen.
 
Thank you for clarifying. I am surprised you did not say anything about the ability to use rank-2 (dual-rank) memory with the Ryzen 3000 series - or was that a non-issue with the older CPUs?
 

I artificially tried to create a CPU bottleneck to verify that the game isn't light on the GPU - rather the contrary.

I do not play with these settings (60 Hz monitor anyhow), as I understand that you usually aren't CPU bound - unless I'm playing GTA V.
 
If anyone has suggestions on new tests or things they would like to see, sound off. We're always interested in new testing!

Paul, I know why you use 1080p tests, but even my system plays games at 1440p ultrawide using a 4790K and RX 580, so please add some 1440p results to your metrics. As others are pointing out in this thread, real-world gaming at 1440p would give a better overall view of gaming on Ryzen 3000.
 

That is a very good point. Even with my old 4790K and RX 580, I am able to play many games at ultrawide 1440p, and it is nice to see the 1080p delta between Intel and AMD, but it does not matter to the people who are reviewing CPUs on websites like this one. Essentially, the Ryzen 3000 series is as good as Intel at real-world gaming but way better at everything else.
 

Prime95 is much like IBT: good for qualifying an overclock. Typically, if it passes Prime95 and IBT it's stable - at least that is my experience. IBT especially, since Intel uses (or at least used) it to validate their CPUs' clock speeds.

I am sure they will do a follow-up review, but my best guess is that Ryzen 3000 will be tied with or trade punches with Intel at similar clock speeds in most games above 1080p.
 
Rumor has it AMD didn't sample the 3800X because it can't realistically hit the advertised boost, and it essentially gets identical performance to the 3700X - both at stock and at max OC.
 

Yeah, my Ryzen 1700 is game/Cinebench stable at 3.95 GHz and Prime95 stable at 3.8 GHz, but I had to dial it down to 3.75 GHz for long renders with either Corona or the 3ds Max ART renderer. I wouldn't have expected the renderers to require greater stability than what can be shown with an hour of Prime95.

Back in the old days you had less variance in "stability" levels, it seems.
 

Or with anything less than a 2080 Ti ... someone did post some 4K benches on a 2080 Ti - I can't recall what site.

At 4K, the numbers were essentially identical regardless of the CPU used (a total spread of 2 FPS across all CPUs) - I know "the same" isn't as interesting, but when included alongside the bottleneck numbers it at least helps combat creating a false impression of real-world gaming performance in people's minds.
 

In truth, TH should have included 1080p, 1440p, and 4K in their game benchmarks - basically, show how it performs today at higher resolutions and what the potential will be in the future at bottlenecked settings.