Thanks! Good eye, sorry about that. Fixed.
Might want to edit the article. You list the 3700X as 35W, when it is actually 65W.
Look under Article Testing Methodology Update (July 8th): in there you will see the graph I am talking about. It deals with the new BIOS revision and how max clocks are higher in the new BIOS.
The CPU was never, nor will it ever be, a significant contributor to FPS in gaming. That is why you have to induce an artificial CPU bottleneck to even see a difference in these reviews. The GPU was invented to eliminate the CPU bottleneck - that is its sole purpose - so CPU reviews remove the GPU from the equation ... but that isn't real world. Again, the GPU was invented to ELIMINATE the CPU bottleneck.
Why people expect the CPU to make all the difference in the world is beyond me, especially considering that if you have bottlenecked your CPU, you are wasting precious GPU resources that you paid for ... on a 2080ti @4K with settings tuned to get 70+ FPS in SoM, all the CPUs get 76 FPS. All of them. R5 2600 non-X, Threadripper, 9900K @5.2GHz - all exactly the same. Isn't that weird? Well, that's what happens when you let a GPU do what it is intended for - eliminate the CPU bottleneck. Why is this hard to understand for some people?
What happened that made people forget that the GPU is what dictates game performance, not the CPU? How many people here bought a 2080ti and run at 720p or 1080p on "medium"? Yeah ... spend a small fortune and turn all the settings to "pure crap" mode? People usually pair their GPU with a resolution and settings for 60 to 120Hz TVs / monitors. While there might be a slight difference between 60 and 120 for those crazy basement Twitch gamers, between 120 and 240 there is pretty much none. There are limits to human reaction times, and if someone has bottlenecked their CPU to get 400+ FPS, then they need some education ...
So if you are like most people and have a refresh rate between 60 and 120 on your TV / monitor ... your CPU makes no difference to your gaming framerates.
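For what it's worth, the argument above really just boils down to a min() model: the frame rate you see is whichever limit is lower, the CPU's or the GPU's. Here is a minimal sketch with made-up numbers (the 140/400/160/76 figures are purely illustrative, not measurements from the review):

```python
# Minimal sketch of the bottleneck argument above, using hypothetical numbers.
# The delivered frame rate is roughly the slower of the two limits:
# how fast the CPU can prepare frames vs. how fast the GPU can render them.

def effective_fps(cpu_fps_limit, gpu_fps_limit):
    """Delivered frame rate is capped by whichever component is slower."""
    return min(cpu_fps_limit, gpu_fps_limit)

# Hypothetical limits for one game: the CPU can feed ~140 fps regardless of
# resolution, while the GPU limit falls as resolution/settings rise.
gpu_limits = {"1080p medium": 400, "1440p ultra": 160, "4K ultra": 76}

for scenario, gpu_fps in gpu_limits.items():
    print(scenario, effective_fps(140, gpu_fps), "fps")
# At 4K ultra, every CPU whose limit sits above the GPU limit prints the same
# number, which is the "GPU-bound real world" point being made above.
```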
Tests for this review seemed a bit cherry-picked to not favour the 3900X? Especially in productivity? Not enough time? Best to also explore others' reviews of the 3900X to get the broader picture.
It wouldn't make sense to cherry-pick tests to make the chips look bad, then give the chip the strongest possible recommendation. These are the same tests we normally run, except we added MORE rendering tests, which plays well to the 3900X. Does that mean we skewed the results in favor of the 3900X?
Cool, what test specifically would you like to see? We're always open to suggestions.
Yeah, I actually didn't mean "to make it look bad" - that was a poor choice of wording. I just meant it was a lot less comprehensive on the productivity side and more comprehensive on the gaming side, but at the same time this is a 12-core part. Gaming should be a secondary concern, and while I recognize you used your standard suite, I learned much more about this CPU from reviews that covered the productivity end a little more fully.
That was all I meant, really.
I would love to see BeamNG drive added to the game suite.
The overclocking results were disappointing: 4.1GHz max on all cores. Given that the 3950X does a 4.7GHz single-core turbo boost and the 3900X does a 4.6GHz single-core turbo boost, I'd have assumed any of the Ryzen 3000 parts would OC to 4.6/4.7GHz on all cores with decent air/water cooling.
Power consumption: AIDA64 seems to punish AMD a lot more than Intel. When you were using Prime95, Intel was punished a lot more. It seems the switch from Prime95 to AIDA64 gives Intel an unfair advantage in the stress-test power consumption test, while Prime95 gave AMD an unfair advantage. I'd suggest using both in reviews, or finding another torture test that fully punishes both AMD and Intel for a max-load test. With such wild variation, I can't see how either is an accurate measure of a CPU under full load.
Example Review: https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html
The Intel i9-9900K hit 204.6W in your old review's stress test. This time it is only 113W.
The AMD Ryzen 2700x hit 104.7W in your old review. Now it is 133W.
That is contrary to my findings, depending on settings. Some settings like dynamic reflections and antialiasing are really hard on the GPU and can use over 6GB of VRAM on my 480, especially since the game's rather unoptimized due to it being early access. If these settings are disabled, it may be light on the GPU.
I'll see if that is possible. From what I've seen it seems pretty light on the GPU, though. I haven't profiled it for CPU, though. I'll take a closer look. Thanks for the suggestion.
Cool, what test specifically would you like to see? We're always open to suggestions.
Oddly, even with settings set so I get 200 FPS (still at 1080p but all settings nerfed), my GPU is still the limiting factor. None of my 4 threads get above 75% usage.
This might vary by map, though.
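If you want to sanity-check that yourself, a quick per-core sampler will show whether one thread is actually pegged even when overall usage looks modest. This is just a rough sketch that assumes the third-party psutil package is installed (pip install psutil); run it in the background while the game is up:

```python
# Rough per-core usage sampler: a single maxed-out thread can hide behind a
# modest-looking average across 4 threads.
import psutil

samples = []
for _ in range(30):  # each call blocks ~1s, so this samples for ~30 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    samples.append(per_core)

for core, history in enumerate(zip(*samples)):
    avg = sum(history) / len(history)
    print(f"core {core}: avg {avg:.0f}%  peak {max(history):.0f}%")
```

If one core's peak sits near 100% while the rest idle, the game is limited by a single thread rather than by the GPU.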
The stock memory frequency (3200) is used for stock testing. The overclocked memory frequency (3600) is used for the OC configurations.
That's because you don't have a 2080ti. If you did, the GPU would be demanding 400fps, and then your CPU would start to bottleneck (not only is a CPU bottleneck unrealistic, but unless you paid well over $1000 for a GPU, it isn't even that easy to do).
But because you have a reasonable CPU/GPU combo, you may never even be able to see a CPU bottleneck even if you tried your hardest. Most people would never see a CPU bottleneck in any gaming situation - that is why I have issue with reviewers who don't give us at least a few real-world gaming numbers along with their "no real scenario" bottleneck ones.
Let us see both, so a certain type of overly susceptible person doesn't go around making comments about "CPU" gaming numbers as though they actually get these numbers at their PC at home ... when they are describing a scenario that doesn't really exist in the real world. It's become an epidemic.
I saw a review yesterday that ran 4K resolution at ultra quality with a 2080ti - every CPU got the exact same FPS, save for the R5 1600, which was down 2 FPS from all the others, including a 5.2GHz OC 9900K. All the same. Real world. Intel would be so proud of me spreading "real world gaming" words of wisdom.
If anyone has suggestions on new tests or things they would like to see, sound off. We're always interested in new testing!
Also, one more note here. I've seen both y-cruncher and HandBrake "break" overclocks that have run Prime95 for hours. So much so, in fact, that I test with them first to validate overclocks. Both are AVX-heavy.
Paul, I know why you use 1080p tests, but even my system plays games at 1440p ultrawide using a 4790K and RX 580, so please add some 1440p resolutions to your metrics. As others are pointing out in this thread, real-world gaming at 1440p resolution would give us a better overall view of gaming on Ryzen 3000.
I am sure they will do a follow-up review, but my best guess is that Ryzen 3000 will be tied with or trade punches with Intel at similar clock speeds in most games above 1080p.
Rumors, rumors, rumors. Rumor has it AMD didn't sample the 3800X because it can't realistically hit the advertised boost, and it gets essentially identical performance to the 3700X - both at stock and at max OC.
Or with anything less than a 2080ti ... someone did post some 4k benches on a 2080ti - I can't recall what site.
@4K, all the numbers were essentially identical regardless of the CPU used (total spread of 2 FPS across all CPUs). I know "the same" isn't as interesting, but when included alongside the bottleneck numbers it at least helps combat creating a false picture of real-world gaming performance in people's minds.