News Core i9-9900K vs. Ryzen 9 3900X: Gaming Performance Scaling

st379

Excellent article. Really enjoyed all the different resolutions, and thank you for using normal settings, unlike AnandTech, who use stupid resolutions and settings.

One question... why does the average fps gap remain the same even though the averages keep getting lower? I would expect that if the average fps is lower, the gap would close completely.
 

King_V

While it confirmed what was generally accepted as conventional wisdom all along, it's good to have these numbers to back it up.

My minor nitpick/gripe/pet-peeve/whatever is:
Could you notice such a difference? 15%, sure;

At lower frame rates, certainly. But at 1920x1080/Medium on the 2080 Ti, Red Dead Redemption 2's 149.6 vs. 174.8 fps is a hefty 17% difference, yet if I were a betting man, I'd say nobody would be able to tell the difference between 150 fps and 175 fps.

If we were talking 15% more from, say, 60fps to 70, I'd say it's noticeable.
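
For what it's worth, the frame-time math backs this up. A quick sketch (my own numbers, not from the article):

```python
# Back-of-the-envelope: how much frame time does a ~15-17% fps bump buy?
def frame_time_ms(fps):
    return 1000.0 / fps

for low, high in [(149.6, 174.8), (60.0, 70.0)]:
    delta = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low:.1f} -> {high:.1f} fps: {delta:.2f} ms saved per frame")
```

That's under 1 ms saved per frame at ~150 fps, versus nearly 2.4 ms at 60 fps, which is why the jump at the low end is the one you can actually feel.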

(of course, that ties in with my whole gripe about "MOAR HERTZES" with monitor refresh rates constantly climbing)


That all said - thanks for the article; this looks like it was a ton of work!
 

Gomez Addams

One small detail: R0 is not the stepping of the CPU. It is the revision level, and they are two different things, which is why the CPU reports them separately. If you don't believe this, fire up CPU-Z and you will see it shows both items separately. Steppings are always plain numbers while revision levels are not. This laptop's CPU is also revision R0, and it is stepping 3.

Stepping refers to one specific thing: the version of the lithography masks used. Revision level refers to many different things, and stepping is just one of them. There are hundreds of processing steps involved in making a chip, and revision level encompasses all of them. The word stepping is used because the machines that do the photolithography are called steppers, and that is because they step across the wafer, exposing each chip in turn.
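
If you want to check the stepping yourself without firing up CPU-Z, here's a quick sketch (Linux-only; CPU-Z shows the same field on Windows):

```python
# Print the stepping the CPU reports on Linux. Note it is a plain
# number; the revision label (e.g. R0) is a separate designator and
# does not appear in /proc/cpuinfo at all.
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith(("model name", "stepping")):
            print(line.strip())
        if line.startswith("stepping"):
            break  # one logical CPU is enough; the rest repeat
```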

I realize this is somewhat pedantic but as a website writing about technology you should purvey accurate information.
 

mrsense

"The RTX 2080 Ti only gave Intel a 10% lead over AMD CPUs at 1080p, but what happens when new GPUs arrive that are 50% faster? The bottleneck shifts back to the CPU"

With Ryzen 4000, if the rumors are true, the tables might turn.
 

mrsense

Excellent article. Really enjoyed all the different resolutions, and thank you for using normal settings, unlike AnandTech, who use stupid resolutions and settings.

Well, when you have a top-of-the-line CPU and graphics card, your intent is to turn on all the eye candy on a high-resolution (1440p and up) monitor when playing games, unless, of course, you're a pro gamer playing in a competition.
 
My guess was that he started the benchmarks before 10th Gen Intel came out, and starting over was not an option. So, he continued on and released. I would expect a new set of benchmarks soon with 10th Gen Intel compared to the Ryzen 9 3900X, but by then new processors will be out....
Not really. Our standard GPU test system is a 9900K, and I don't think it's worth updating to the 10900K. We talked about this and I recommended waiting for Zen 3 and 11th Gen Rocket Lake before switching things up. Yes, the 10900K is a bit faster, but the platform (i.e., differences in motherboards and firmware) can account for much of the difference in performance. So, give me another six months with CFL-R and then we'll do the full upgrade of testbeds. 😀
 
Excellent article. Really enjoyed all the different resolutions, and thank you for using normal settings, unlike AnandTech, who use stupid resolutions and settings.

One question... why does the average fps gap remain the same even though the averages keep getting lower? I would expect that if the average fps is lower, the gap would close completely.
The gap does get smaller, especially at 1440p/4K and ultra settings, but it never fully goes away with the 2080 Ti. Some of the gap is probably just differences in the motherboards and firmware; nothing can be done there, but it's as close as I can make it with the hardware on hand.
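
One way to picture it is a toy model where each frame costs the longer of the CPU's and GPU's per-frame times, so the CPU gap only fully shows when the GPU gets out of the way. The per-frame costs here are invented purely for illustration:

```python
# Toy bottleneck model: fps is limited by whichever of the CPU or GPU
# takes longer per frame. All the millisecond costs below are made up.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast, cpu_slow = 5.0, 5.75  # ~15% gap in CPU per-frame cost
for res, gpu_ms in [("1080p", 4.0), ("1440p", 5.5), ("4K", 14.0)]:
    a, b = fps(cpu_fast, gpu_ms), fps(cpu_slow, gpu_ms)
    print(f"{res}: {a:.0f} vs {b:.0f} fps ({(a / b - 1) * 100:.0f}% gap)")
```

The model says the gap should hit zero once the GPU is the full bottleneck; in practice, those platform differences keep a small gap alive, which matches the 2080 Ti numbers.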
 
While it confirmed what was generally accepted as conventional wisdom all along, it's good to have these numbers to back it up.
Read that “sure” as less definitive and more wry. Like, “sure, it’s possible, and benchmarks will show it, but in actual practice it’s not that important.”

There were a couple of games where Ryzen felt a bit better on the first pass (less micro stutter or something), but there were also games where it felt worse. Mostly, though, without an FPS counter or benchmarks I wouldn't have noticed.
 
"The RTX 2080 Ti only gave Intel a 10% lead over AMD CPUs at 1080p, but what happens when new GPUs arrive that are 50% faster? The bottleneck shifts back to the CPU"

With Ryzen 4000, if the rumors are true, the tables might turn.
Yes, definitely interested in seeing what Zen 3 manages relative to Rocket Lake. Or Tiger Lake or whatever the best option from Intel is next year. And when those CPUs arrive, I plan to repeat this testing with RTX 3000 and RX 6000! 😀
 

watzupken

"In our testing, we found the 10900K outperformed the 9900K by up to 17% in a few games. "

This is because all the tests were carried out at 1080p in your 10900K review. It's a way to showcase CPU performance, but it's generally an unrealistic scenario when you consider that most folks who have an i9-10900K and an RTX 2080/2080 Ti will not game at 1080p. That kind of defeats the purpose of getting such a high-end setup.
 
"In our testing, we found the 10900K outperformed the 9900K by up to 17% in a few games. "

This is because all the tests were carried out at 1080p in your 10900K review. It's a way to showcase CPU performance, but it's generally an unrealistic scenario when you consider that most folks who have an i9-10900K and an RTX 2080/2080 Ti will not game at 1080p. That kind of defeats the purpose of getting such a high-end setup.
That's the whole point of this article: showing how CPU performance matters less with higher resolutions and/or slower GPUs. If you look at Paul's figures for the 9900K and 3900X at 1080p ultra and compare them with my figures at the same settings, the gap is similar (within a few percent). And the gap at 1440p ultra shrinks to only a few percent instead of 10-15%.

A 10900K would still be faster, and with a next-gen GPU I expect the gap to grow even at higher resolutions. But it will probably still max out at around a 15% difference between the 9900K and 3900X.
 

watzupken

That's the whole point of this article: showing how CPU performance matters less with higher resolutions and/or slower GPUs. If you look at Paul's figures for the 9900K and 3900X at 1080p ultra and compare them with my figures at the same settings, the gap is similar (within a few percent). And the gap at 1440p ultra shrinks to only a few percent instead of 10-15%.

A 10900K would still be faster, and with a next-gen GPU I expect the gap to grow even at higher resolutions. But it will probably still max out at around a 15% difference between the 9900K and 3900X.
Don't get me wrong, I get what you are trying to achieve in this article, and I acknowledge it's a good article. I am just commenting on what you mentioned about the 10900K being 17% faster than the 9900K in some games. Just to point out that while testing at 1080p is deliberate, to showcase what the CPU can do, the fact is that most people with that kind of setup will not game at 1080p.
 
One small detail: R0 is not the stepping of the CPU. It is the revision level, and they are two different things, which is why the CPU reports them separately. If you don't believe this, fire up CPU-Z and you will see it shows both items separately. Steppings are always plain numbers while revision levels are not. This laptop's CPU is also revision R0, and it is stepping 3.

Stepping refers to one specific thing: the version of the lithography masks used. Revision level refers to many different things, and stepping is just one of them. There are hundreds of processing steps involved in making a chip, and revision level encompasses all of them. The word stepping is used because the machines that do the photolithography are called steppers, and that is because they step across the wafer, exposing each chip in turn.

I realize this is somewhat pedantic but as a website writing about technology you should purvey accurate information.
I've corrected the text to properly refer to stepping D, revision R0, which is what CPU-Z reports for the 'new' CPU.
 

Gomez Addams

Stepping D? That is very interesting. I don't recall ever seeing letters used for steppings. Are they using hexadecimal numbers for them? If the letters only range from A to F, then that implies hex. All my other machines with AMD and Intel CPUs use only numbers. Or maybe they made more than 9 and are now using the full alphabet. If either scheme is used, that means more than 9 mask sets were made (taped out), and that's a LOT! It requires a major change to make a new set of masks.

Tape out is another very old term. In the very old days of making chips they used tape to make the masks. Literally, masking tape.
 
Stepping D? That is very interesting. I don't recall ever seeing letters used for steppings. Are they using hexadecimal numbers for them? If the letters only range from A to F, then that implies hex. All my other machines with AMD and Intel CPUs use only numbers. Or maybe they made more than 9 and are now using the full alphabet. If either scheme is used, that means more than 9 mask sets were made (taped out), and that's a LOT! It requires a major change to make a new set of masks.

Tape out is another very old term. In the very old days of making chips they used tape to make the masks. Literally, masking tape.
I don't know if there's a specific reason, but maybe because Kaby Lake, Coffee Lake, and Coffee Lake Refresh are all basically the same architecture, they kept bumping the stepping? Here are two 9900K chips, one from launch (stepping C) and the other purchased retail in February 2020:

[CPU-Z screenshots of the two 9900K chips]

Also interesting is that the older ES lists TSX support but the R0 revision does not. Both shots were taken on the same mobo, same BIOS, no difference in the settings used.
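
If those letters are just hex digits of the plain stepping number, the mapping back to what other tools report is trivial:

```python
# Hex stepping letters map straight back to plain numbers:
# C -> 12, D -> 13, and so on.
for letter in "ABCDEF":
    print(letter, "=", int(letter, 16))
```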
 

Gomez Addams

That's a good point - if the processors are really that close, then they could be using modified masks. Normally an entirely new set is made for different CPUs.

Very interesting. Thank you for the info.
 

Zarax

Excellent article as always, Jarred.
I was wondering if you could consider doing this at the polar opposite end of the CPU scale: what's the lowest-performing CPU (especially older ones) that can feed at least 60 fps at 1080p when paired with a newer card?
 

lubomirz

Let me chime in - excellent article, a tremendous amount of work and hours went into this! Thank you very much.

One note: AMD results would be BETTER with faster memory, as that would clock the Infinity Fabric higher. 3200MHz is really just the basement for anyone these days; 3600MHz is much better, and almost all 3900X CPUs can run 3800MHz RAM with 1:1 Infinity Fabric, meaning it would be clocked at a real 1900MHz.

Also, memory timings play a huge role in GAMING performance on the AMD platform. 3600MHz/CL18 (with relaxed secondary and tertiary settings) easily delivers 5% fewer REAL fps than a tuned 3600MHz/CL15 or CL16 kit, not even mentioning high-end kits with 3600/CL14 timings.
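
To put rough numbers on that, here's my own quick math, assuming the usual DDR double data rate, a 1:1 FCLK:MCLK ratio, and CL16 as the baseline 3200MHz kit:

```python
# DDR transfers twice per clock, so DDR4-3800 runs a 1900MHz memory
# clock and 1:1 Infinity Fabric matches it. True CAS latency in
# nanoseconds = CL cycles / real clock in MHz * 1000.
def fabric_mhz(ddr_rate):
    return ddr_rate / 2  # 1:1 FCLK:MCLK

def cas_ns(ddr_rate, cl):
    return cl / (ddr_rate / 2) * 1000

print(f"DDR4-3800 at 1:1 -> Infinity Fabric at {fabric_mhz(3800):.0f}MHz")
for rate, cl in [(3200, 16), (3600, 18), (3600, 16), (3600, 14)]:
    print(f"DDR4-{rate} CL{cl}: {cas_ns(rate, cl):.2f} ns true latency")
```

Notice that 3600MHz/CL18 lands at the same 10 ns true latency as 3200MHz/CL16, which is exactly why the tightened CL14-CL16 kits are where the real gaming gains come from.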

Nothing has been said about BIOS settings for each platform; both Intel and AMD can EASILY gain 5-8 fps everywhere except the highest 3840x2160 resolution just by optimizing the CPU config in the motherboard's BIOS.

My guess would be that the 5700 XT and higher-end graphics cards can GAIN an additional 5-10 fps at every resolution except 4K 3840x2160, especially on the AMD platform.

Once again, a hefty amount of work went into this article, thank you very very much!
 

King_V

Excellent article as always, Jarred.
I was wondering if you could consider doing this at the polar opposite end of the CPU scale: what's the lowest-performing CPU (especially older ones) that can feed at least 60 fps at 1080p when paired with a newer card?
I think that might be more difficult. The enormous wildcard would be "in which game?"

For example, running The Witcher (first one) on an i5-4460 with a GTX 1080 was a piece of cake... UNTIL I was in a city with a lot of NPCs in one area standing around engaging in idle chit-chat. Then things slowed down and got jittery.
 
Let me chime in - excellent article, a tremendous amount of work and hours went into this! Thank you very much.

One note: AMD results would be BETTER with faster memory, as that would clock the Infinity Fabric higher. 3200MHz is really just the basement for anyone these days; 3600MHz is much better, and almost all 3900X CPUs can run 3800MHz RAM with 1:1 Infinity Fabric, meaning it would be clocked at a real 1900MHz.

Also, memory timings play a huge role in GAMING performance on the AMD platform. 3600MHz/CL18 (with relaxed secondary and tertiary settings) easily delivers 5% fewer REAL fps than a tuned 3600MHz/CL15 or CL16 kit, not even mentioning high-end kits with 3600/CL14 timings.

Nothing has been said about BIOS settings for each platform; both Intel and AMD can EASILY gain 5-8 fps everywhere except the highest 3840x2160 resolution just by optimizing the CPU config in the motherboard's BIOS.

My guess would be that the 5700 XT and higher-end graphics cards can GAIN an additional 5-10 fps at every resolution except 4K 3840x2160, especially on the AMD platform.

Once again, a hefty amount of work went into this article, thank you very very much!
I intentionally didn't try to test with maximum-performance RAM and other parts. There are ways to improve the performance of both platforms: 10900K, CL14 RAM, tuned timings, etc. on Intel; 3800XT, DDR4-3600, tuned timings on AMD. Even a change in motherboard could make a 5% difference. But for these tests, both platforms run BIOS 'optimized' defaults plus the XMP profile, with no other tweaks.

A big part of the equipment used for testing is simply what I had available. I have CL14 memory that performs better, but only 2x8GB. Plus, the Intel testing was already done for the regular GPU reviews, and it would be completely biased to test Intel with 2x16GB 16-18-18 RAM and then switch to 4x8GB 14-14-14 RAM for AMD.

I’m hoping to do additional testing of some form, but there are so many options. Core i3-10300, Ryzen 5 3600, different RAM, different mobo, tuned RAM timings and more. Probably the next major comparison will be done once the new AMD and Nvidia GPUs arrive, and potentially Zen 3 and Rocket Lake.
 
Excellent article as always, Jarred.
I was wondering if you could consider doing this at the polar opposite end of the CPU scale: what's the lowest-performing CPU (especially older ones) that can feed at least 60 fps at 1080p when paired with a newer card?
This will vary by game and the area used for testing. Paul would probably need to do the testing, since he has a lot more CPUs. Then we'd need to pick a 'reasonable' GPU for the tests. Or I could do an RX 5600 XT with a recent Core i3 or i5, or I've got a first-gen Ryzen 5 1500X still kicking around, I think. But nothing older than that.