News Core i9-9900K vs. Ryzen 9 3900X: Gaming Performance Scaling

Eagerly waiting to see what new GPUs from both vendors are able to do. Round and round we go, and you can re-test everything from scratch. :)

Sure, there are so many 'optimization' options. We briefly touched on memory tuning in the discussion above, but we didn't say a word about GPU overclocking, undervolting or better cooling, and nobody tried to overclock the CPUs, etc.

That's why we love this never-ending game, right? :)
 
Why are you not including any of the AMD Threadripper CPUs, or at least the 3950X, in this comparison? Some of us would like to see how Threadripper compares to Intel chips as a gaming platform. Yes, it may be a bit more expensive, but how much extra bang do you get on these platforms for your buck? That's something I'd love to know, along with what happens when loading and running games on, say, 4x PCIe 4.0 NVMe SSDs in a RAID 0 config, since Intel can't do this at these levels. What kind of impact would it have?

I tried to determine this by posting the question on the forums, but never got to see a true comparison of the best from Intel vs. the best from AMD. Not including Threadripper CPUs is a shortcoming of this article and other articles that Tom's Hardware has released. Some of us may actually like to spend the money to get the best of the best, and not including Threadrippers, or at least the 3950X, is very disappointing... 😔

https://forums.tomshardware.com/threads/true-pc-gaming-performance-intel-or-amd.3606470/
 
I don't get it: CPUs matter only in low resolutions, and rich guys are not playing 1280x720 anymore, are they? When you have anything above 2560x1440, your GPU(s) are the only important factor. At 4K (3840x2160), CPUs play a negligible role.

And yes, it's not necessary to start talking about how "it's downright stupid to spend $$$$ on the fastest 256-core Threadripper when you can get more performance from a Tier 1 Nvidia 2080 Ti with water cooling and overclocking".

No, it's not disappointing to omit 3950X+ chips from this review. You have to realize what the reader base of this site is: how many people have Ryzens, and how many have Threadrippers? This article was not aimed at "the 0.1% specialty group which pays a $10,000 entry fee to register on a special dedicated social network to show off".

BTW:
Also, PCIe 4.0 has been shown to bring next-to-zero benefit to game loading times, and it has zero influence on in-game FPS, of course. RAID 0 is the same. (Quite the contrary: for some strange reason, RAID 0 NVMe setups can have a negative effect on game loading speeds.) Several major sites have tested that already.


Cheers!
 
Damn, so at 1080p medium you can go a whole tier lower on the GPU with a 9900K and still have the same or better performance, at least for minimums.

I don't get it: CPUs matter only in low resolutions, and rich guys are not playing 1280x720 anymore, are they?
That's not quite right; CPUs matter at resolutions that the GPUs can keep up with.

Yes, those naturally are the lower resolutions, but depending on how strong a GPU is, or will be in the future, it's not just 720p or even 1080p. If the GPU is fast enough, the same difference would be visible at 4K ultra.
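A crude way to picture that: the delivered frame rate is roughly the lower of what the CPU can feed and what the GPU can render at a given resolution. Here's a minimal sketch of that napkin model in Python, with made-up frame rates purely for illustration (the numbers are not from the article's data):

```python
# Napkin model: delivered fps is roughly capped by whichever side is slower.
# All numbers are hypothetical, purely to illustrate the argument.
cpu_fps = {"9900K": 160, "3900X": 140}            # CPU-limited frame rates
gpu_fps = {"1080p": 200, "1440p": 130, "4K": 70}  # GPU-limited frame rates per resolution

for res, gpu_limit in gpu_fps.items():
    fast = min(cpu_fps["9900K"], gpu_limit)
    slow = min(cpu_fps["3900X"], gpu_limit)
    gap = (fast / slow - 1) * 100
    print(f"{res}: 9900K ~{fast} fps, 3900X ~{slow} fps, gap ~{gap:.0f}%")
```

With these toy numbers the CPU gap shows up at 1080p and vanishes at 1440p and 4K; swap in a faster future GPU (raise the gpu_fps values) and the gap reappears at higher resolutions, which is exactly the point above.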
 
That's the whole point of this article: showing how CPU performance matters less with higher resolutions and/or slower GPUs. If you look at Paul's figures for the 9900K and 3900X at 1080p ultra and compare them with my figures at the same 1080p settings, the gap is similar (within a few percent). And the gap at 1440p ultra shrinks to only a few percent instead of 10-15%.

A 10900K would still be faster, and with a next-gen GPU I expect the gap to grow even at higher resolutions. But it will probably still max out at around a 15% difference between the 9900K and 3900X.
But those next-gen GPUs are likely to be PCIe 4.0, so the 10900K would not be a contender on the Intel side; it would need to be Rocket Lake. Although I'm still not convinced that even an Ampere GeForce would saturate 16 lanes of PCIe 3.0, so maybe the 10900K would be possible.
 
But those next-gen GPUs are likely to be PCIe 4.0, so the 10900K would not be a contender on the Intel side; it would need to be Rocket Lake. Although I'm still not convinced that even an Ampere GeForce would saturate 16 lanes of PCIe 3.0, so maybe the 10900K would be possible.
I’ll bet you money right now that when Nvidia’s PCIe Gen4 GPUs show up (Ampere), the difference in gaming performance between an i9-9900K and an i9-11900K with an RTX 3090 (or whatever the future parts are called) will still be well under 10% at 1440p and 4K ultra gaming across a reasonable test suite of at least eight games. For a single graphics card.

Going from Gen2 to Gen3, when the first Gen3 parts arrived, only made about a 10% difference in SLI, and SLI is basically dead these days. There may be a few outliers, but most games will be GPU limited, not PCIe limited.
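For context on the 'saturating x16 PCIe 3.0' question, here's a quick back-of-the-envelope of the raw per-direction link bandwidth, using the published per-lane rates and 128b/130b encoding (spec numbers, not measurements):

```python
# Per-direction PCIe bandwidth: lanes * transfer rate (GT/s) * encoding efficiency / 8 bits.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding; PCIe 4.0 doubles the rate.
def pcie_gb_per_s(lanes, gt_per_s, encoding=128 / 130):
    return lanes * gt_per_s * encoding / 8

print(f"PCIe 3.0 x16: ~{pcie_gb_per_s(16, 8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{pcie_gb_per_s(16, 16):.1f} GB/s")  # ~31.5 GB/s
```

Whether a single GPU ever needs more than ~16 GB/s each way during gameplay is the open question; most titles stream far less than that per frame, which is why the measured differences stay small.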
 
I’ll bet you money right now that when Nvidia’s PCIe Gen4 GPUs show up (Ampere), the difference in gaming performance between an i9-9900K and an i9-11900K with an RTX 3090 (or whatever the future parts are called) will still be well under 10% at 1440p and 4K ultra gaming across a reasonable test suite of at least eight games. For a single graphics card.

Going from Gen2 to Gen3, when the first Gen3 parts arrived, only made about a 10% difference in SLI, and SLI is basically dead these days. There may be a few outliers, but most games will be GPU limited, not PCIe limited.
Agreed, up to the SLI part. I am probably a tiny minority: my last rig (8700K, dual 1080 Ti) and my current rig (i9-9900K, dual 2080 Ti) are both SLI, and can game comfortably at 4K 120Hz pretty easily, only requiring a few tweaks here and there. Initially I will evaluate the high-end variant (3080 Ti / 3090), and IF I can get the same fps with a single card I will stay with a single card. The general consensus is +20% for the 3080 over the 2080 Ti and possibly +40% for the 3080 Ti / 3090 over the 2080 Ti... That may not be enough, but single-card efficiency over SLI may be enough to make the second card unnecessary (at the current 4K/120fps, no RTX).

Whatever variant allows SLI will be what I buy; maybe the next step is 4K/120fps with RTX on. I can't see moving to 5K/8K any time soon (I can get a max of 30 fps at 8K in a couple of games), the difference at desktop focal length is negligible, and I am not 20 anymore, so I need to work with the limitations my almost-50-year-old bio hardware has.
 
Agreed, up to the SLI part. I am probably a tiny minority: my last rig (8700K, dual 1080 Ti) and my current rig (i9-9900K, dual 2080 Ti) are both SLI, and can game comfortably at 4K 120Hz pretty easily, only requiring a few tweaks here and there. Initially I will evaluate the high-end variant (3080 Ti / 3090), and IF I can get the same fps with a single card I will stay with a single card. The general consensus is +20% for the 3080 over the 2080 Ti and possibly +40% for the 3080 Ti / 3090 over the 2080 Ti... That may not be enough, but single-card efficiency over SLI may be enough to make the second card unnecessary (at the current 4K/120fps, no RTX).

Whatever variant allows SLI will be what I buy; maybe the next step is 4K/120fps with RTX on. I can't see moving to 5K/8K any time soon (I can get a max of 30 fps at 8K in a couple of games), the difference at desktop focal length is negligible, and I am not 20 anymore, so I need to work with the limitations my almost-50-year-old bio hardware has.
I can't see any point to SLI or multi-GPU for the coming generation, until and unless support for multi-GPU ray tracing becomes a thing. None of the RTX-enabled (or VulkanRT) games support more than a single GPU AFAIK, and with consoles going RT later this year, I think that will be an important tech. Basically every major new game coming out in late 2020 or after will potentially use RT effects in some fashion and thus have no multi-GPU support. I'll have to see about finagling a pair of next-gen RTX 30-series (and Big Navi) to run some multi-GPU scaling tests when they arrive. It's not going to be pretty I suspect!
 
I can't see any point to SLI or multi-GPU for the coming generation, until and unless support for multi-GPU ray tracing becomes a thing. None of the RTX-enabled (or VulkanRT) games support more than a single GPU AFAIK, and with consoles going RT later this year, I think that will be an important tech. Basically every major new game coming out in late 2020 or after will potentially use RT effects in some fashion and thus have no multi-GPU support. I'll have to see about finagling a pair of next-gen RTX 30-series (and Big Navi) to run some multi-GPU scaling tests when they arrive. It's not going to be pretty I suspect!
You are probably right, but I want the option for a future upgrade (this would be the only system I plan an upgrade for; usually I just replace the whole thing). Have you seen the Jayz video with the huge increases to RTX performance? It made me dust off BF5 and do a little testing (NOT a BF fan at all, not much into FPS games). I will buy a high-spec Big Navi, as I usually do (5700 XT, Vega VII), and do my own testing against whatever beast I can pry out of Gigabyte's offerings, both Navi and Ampere.

We have had an 8-GPU DGX-A100 (half populated) on evaluation for almost 2 weeks now. It's 80-85% as fast as our 16-GPU DGX-2s with Volta Next (high TDP), and in some specific cases it's faster. It leaves for its next eval Thursday or Friday of next week. I was told that if we order now (we will get a 16x), it would be late Q4 before a small customer like me can get one.

Getting interesting again.
 
So, I have a minor update to the benchmarks (after being gone on vacation for two weeks). I mistakenly used half a kit of DDR4-3600 CL16 memory in the AMD system. Actually, to be entirely correct, I thought I had set the RAM to DDR4-3200 speed in the BIOS (matching the timings of the Intel DDR4-3200 16-18-18 kit), but it was using the default A-XMP setting that resulted in full DDR4-3600 16-18-18 settings. That has a minor impact on the results (in a good way), and anyone thinking I 'unfairly' penalized AMD by not using overclocked memory is mistaken.

I'm retesting the RTX 2080 Ti now with the same DDR4-3200 kit used on the Intel test bed. It is indeed slower -- how much depends on the game and settings. Getting better memory timings (CL14) at DDR4-3200 may help more than higher memory speeds (ie, DDR4-3600 CL18) in some games, though the cost is higher. The cheapest CL16 DDR4-3600 kit of 2x16GB currently costs $175 (unless you want to try older G.Skill Ripjaws V, which is $150). The cheapest DDR4-3200 14-14-14-34 kit is $200+.
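The CL14 vs. DDR4-3600 trade-off comes down to absolute latency: CAS latency is counted in clock cycles, so the first-word latency in nanoseconds is CL divided by half the data rate. A quick sketch of that arithmetic (just the math for the kit specs mentioned above, not measured results):

```python
# First-word latency (ns) = CAS latency / (data rate in MT/s / 2) * 1000
def first_word_latency_ns(cl, data_rate_mt_s):
    return cl / (data_rate_mt_s / 2) * 1000

for cl, rate in [(14, 3200), (16, 3200), (16, 3600), (18, 3600)]:
    print(f"DDR4-{rate} CL{cl}: {first_word_latency_ns(cl, rate):.2f} ns")
```

So DDR4-3200 CL14 (8.75 ns) actually edges out DDR4-3600 CL16 (8.89 ns) on latency, while the 3600 kit still wins on bandwidth; which matters more depends on the game.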

Better results could be achieved by tuning the memory timings and subtimings on both platforms, along with overclocking the CPU(s) and other tweaks. Other motherboards would also behave slightly differently. I'm going to add some testing results using the same DDR4-3600 kit in the Intel system as well, to show how that does/doesn't improve performance.

The point wasn't/isn't to show absolutely best-case performance for both platforms, but rather what you would get out of a reasonable build running stock + XMP settings. That said, I'm looking to do some limited retesting (eg, with different RAM kits, including tuning of timings and subtimings). I suspect the only real differences with memory testing will come at 1080p, with 1440p and 4K largely performing the same in the majority of games. There are probably a few games that will like the increased bandwidth a bit more, but others would prefer tighter timings at lower bandwidths. So many things to test, so little time....


Update: I've started on the retesting. Initial results so far show AMD performing 5-6% worse with DDR4-3200, but Intel is also performing worse (up to 10%) with DDR4-3600. There's a good chance summer temperatures are playing a role in skewing my results now, since temperatures are definitely warmer now than in June. That or the Intel platform really doesn't like the DDR4-3600 Platinum RAM for whatever reason.

Which is entirely possible, as some memory kits (even with ostensibly similar specs) just work better on some motherboards. Could be the firmware doesn't have a 'good' profile for the Corsair kit and is using less aggressive defaults. Anyway, I only have a few 2x16GB DDR4 kits available: Corsair Platinum RGB DDR4-3600 16-18-18, Corsair LPX DDR4-3200 16-18-18, Corsair Dominator RGB Pro 16-18-18, and a 'budget' (old) kit of G.Skill Ripjaws V DDR4-3200 16-18-18.

This is the difficulty of trying to compare platforms. Certain kits definitely perform better!


Update #2: Scratch that on the Intel PC. I screwed up when I uninstalled the AMD drivers via DDU, as it wiped out Vulkan. (My fault for checking that option and not doing a fresh install of the Nvidia drivers). Red Dead Redemption 2 wouldn't error out, but just ran with DX12 mode. DX12 runs 10% slower (maybe more) vs. Vulkan on RTX 2080 Ti, which accounts for the poor performance I was seeing. Benchmarking is fun! [Shifty-eyes]
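For anyone who hits the same DDU pitfall: a quick way to confirm the Vulkan runtime survived a driver wipe is to check whether the loader DLL still loads and reports a version. A minimal sketch, assuming Windows and a Vulkan 1.1+ loader (vkEnumerateInstanceVersion isn't exported by 1.0-only loaders):

```python
import ctypes

# If DDU removed the Vulkan runtime, loading vulkan-1.dll fails outright.
try:
    vk = ctypes.WinDLL("vulkan-1")
except OSError:
    print("Vulkan loader missing -- reinstall the GPU driver.")
else:
    version = ctypes.c_uint32(0)
    vk.vkEnumerateInstanceVersion(ctypes.byref(version))
    major = (version.value >> 22) & 0x7F
    minor = (version.value >> 12) & 0x3FF
    print(f"Vulkan loader present, instance version {major}.{minor}")
```

If the loader is missing, a clean reinstall of the Nvidia (or AMD) driver puts it back.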
 
You're a true professional, Jarred.
Tom's audience tends to forget that most people (even fairly experienced ones) rarely go more in-depth than CPU model + RAM quantity + GPU model, especially when shopping for pre-assembled PCs, which is what the large majority does.
 
For anyone who has commented here and follows this thread, I've now updated all the charts with Intel + DDR4-3600 CL16 (same kit used on the AMD PC), and AMD + DDR4-3200 CL16 (same kit used on the Intel PC). Dropping to DDR4-3200 CL16 definitely hurts AMD, with a few games showing up to a 7% delta. But that's only at 1080p. Overall, at 1080p the DDR4-3600 improved performance by 4% for AMD, but that drops to 2% at 1440p and <1% at 4K.

On the Intel side, testing with the same DDR4-3600, overall performance is 2% faster at 1080p, 1% faster at 1440p, and 0.5% faster at 4K -- mostly margin of error on the latter two.
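For readers wondering how those 'overall' numbers typically get rolled up, one common approach is a geometric mean of the per-game ratios rather than a simple average, so a single outlier title doesn't dominate. A hedged sketch with made-up fps pairs (not the article's actual data, and not necessarily the exact method used here):

```python
import math

# Hypothetical per-game average fps: (DDR4-3600, DDR4-3200) on the same CPU and GPU.
results = {
    "Game A": (144, 138),
    "Game B": (101, 97),
    "Game C": (88, 87),
}

# Geometric mean of the per-game ratios gives the overall uplift.
ratios = [fast / slow for fast, slow in results.values()]
geomean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(f"Overall DDR4-3600 uplift: {100 * (geomean - 1):.1f}%")
```

With these toy numbers the overall uplift comes out to roughly 3%, even though one game shows more than 4%.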

As I noted above, I originally thought the AMD PC was also running DDR4-3200, but it wasn't. And just to beat a dead horse even further, I have four 2x16GB Corsair DDR4 kits that have 16-18-18 timings and can do DDR4-3200 speeds. I have tested the DDR4-3600 kit (Dominator Platinum RGB) at DDR4-3200 speed and it performs the same (within 1%, which is margin of error) as the DDR4-3200 kit (Vengeance RGB Pro).

Oddly, I also have another DDR4-3200 kit (Vengeance LPX) that performs about 2% faster on Intel. Proof that RGB makes your PC slower! Or more likely, proof that the particular motherboard I'm using used slightly better timings and tuning on the LPX kit -- I'm actually not sure what the root cause of the difference is, but possibly some of the kits are Micron chips and others are Samsung.

Anyway, I've standardized on using the RGB kits, but whenever I swap CPUs, the BIOS resets and I need to reapply XMP profiles and then set the DDR4-3600 kits to run at DDR4-3200. Which I forgot to do on the AMD testbed last time, apparently.

Maybe I'll just switch things up to a faster memory kit in the future! I'm working to get something 'good' but I don't know when that will happen. DDR4-4000 32GB (2x16GB) kits are available now, with CL19. I'm trying to procure some of those for future tests -- wish me luck. (Probably need to run AMD at DDR4-3733 though, for optimal performance, but maybe Zen 3 will change that.)
 
This is really good work. I already said it once in this thread, but you deserve to hear it a second time too.

I hope you will do it again with the new GPUs and the Ryzen 4000 desktop series. You could update it again when Intel releases a new CPU in 2023.
 
This is really good work. I already said it once in this thread, but you deserve to hear it a second time too.

I hope you will do it again with the new GPUs and the Ryzen 4000 desktop series. You could update it again when Intel releases a new CPU in 2023.
Yeah, that's the plan. The latest Intel news is certainly worrying. Obviously we'll see some other CPUs in the interim, and I won't be at all surprised if 10nm++ ends up doing a Rocket Lake sequel since 7nm is now going to be 2023. Intel already has Ice Lake Xeon chips in the works, so clearly it can do more than a 4-core Ice Lake design. But it's a serious clusterfudge.

Currently Intel has the following in the works or recently shipped:
Ice Lake 10nm+ mobile (2019/2020)
Comet Lake 14nm+++ desktop/mobile (2020)
Lakefield 10nm+ ultramobile (2020)
Tiger Lake 10nm+ mobile and maybe desktop/server (2020)
Rocket Lake 14nm+++ desktop (2021)
Ice Lake 10nm+ server (2021)
Alder Lake 10nm+ mobile / desktop / server? (2021)
Sapphire Rapids 10nm+ server (2021)
Meteor Lake 7nm mobile / desktop / server (2023?)

Tell me that isn't a mess!
 
Yeah, that's the plan. The latest Intel news is certainly worrying. Obviously we'll see some other CPUs in the interim, and I won't be at all surprised if 10nm++ ends up doing a Rocket Lake sequel since 7nm is now going to be 2023. Intel already has Ice Lake Xeon chips in the works, so clearly it can do more than a 4-core Ice Lake design. But it's a serious clusterfudge.

Currently Intel has the following in the works or recently shipped:
Ice Lake 10nm+ mobile (2019/2020)
Comet Lake 14nm+++ desktop/mobile (2020)
Lakefield 10nm+ ultramobile (2020)
Tiger Lake 10nm+ mobile and maybe desktop/server (2020)
Rocket Lake 14nm+++ desktop (2021)
Ice Lake 10nm+ server (2021)
Alder Lake 10nm+ mobile / desktop / server? (2021)
Sapphire Rapids 10nm+ server (2021)
Meteor Lake 7nm mobile / desktop / server (2023?)

Tell me that isn't a mess!
They pushed 10nm+++ Alder Lake to the second half of the year instead of the first half.
Alder Lake is a big maybe if we look at recent history; 7nm has been pushed back a year, if not more.
I don't know what is going on with Rocket Lake, but are they seriously not going to release a 10-core CPU?
Intel: "We are going to upgrade your 10-core i9 to 8 cores. Who wants to upgrade?"
I want to see how they are going to sell people a 10-core to 8-core "upgrade".
 
They pushed 10nm+++ Alder Lake to the second half of the year instead of the first half.
Alder Lake is a big maybe if we look at recent history; 7nm has been pushed back a year, if not more.
I don't know what is going on with Rocket Lake, but are they seriously not going to release a 10-core CPU?
Intel: "We are going to upgrade your 10-core i9 to 8 cores. Who wants to upgrade?"
I want to see how they are going to sell people a 10-core to 8-core "upgrade".
It will depend on the cores. Theoretically, an 8-core architecture with improved cores could outperform a 10-core architecture. It's difficult to imagine Intel improving performance by 25% through architecture, though, and rumors suggest Rocket Lake may also have lower clocks than Comet Lake. It might be more efficient, but if Rocket Lake tops out at around 4.5GHz, the CPU cores would need to be about 40% faster just to match something like i9-10900K.

We'll have to wait and see, but I don't expect the Willow Cove cores to be anywhere close to sufficient to win the multi-threading argument. So AMD will have Zen 3 with up to 16-core 'mainstream' consumer solutions I imagine, and Intel will be trying to push Rocket Lake with better integrated graphics and only half as many CPU cores?
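The '40% faster' figure above is roughly just cores-times-clocks arithmetic. A napkin-math sketch, assuming an all-core clock around 4.9GHz for the i9-10900K and the rumored ~4.5GHz ceiling for Rocket Lake (both clocks are assumptions, not official specs):

```python
# Napkin math: aggregate multi-threaded throughput ~ cores * clock * per-core IPC.
# Clock values below are assumptions (typical/rumored all-core boosts), not official specs.
comet_lake = {"cores": 10, "clock_ghz": 4.9}    # i9-10900K, assumed all-core boost
rocket_lake = {"cores": 8, "clock_ghz": 4.5}    # rumored Rocket Lake ceiling

ratio = (comet_lake["cores"] * comet_lake["clock_ghz"]) / (
    rocket_lake["cores"] * rocket_lake["clock_ghz"]
)
print(f"Per-core uplift needed to match in multi-threading: ~{100 * (ratio - 1):.0f}%")
```

Depending on which clocks you assume, that lands in the same 35-40% ballpark, which is why Willow Cove alone seems unlikely to close the multi-threaded gap.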
 
It will depend on the cores. Theoretically, an 8-core architecture with improved cores could outperform a 10-core architecture. It's difficult to imagine Intel improving performance by 25% through architecture, though, and rumors suggest Rocket Lake may also have lower clocks than Comet Lake. It might be more efficient, but if Rocket Lake tops out at around 4.5GHz, the CPU cores would need to be about 40% faster just to match something like i9-10900K.

We'll have to wait and see, but I don't expect the Willow Cove cores to be anywhere close to sufficient to win the multi-threading argument. So AMD will have Zen 3 with up to 16-core 'mainstream' consumer solutions I imagine, and Intel will be trying to push Rocket Lake with better integrated graphics and only half as many CPU cores?
It still does not sound right to "upgrade" from 10 cores to 8 cores.
AMD said they are pushing for more cores, so maybe a "mainstream" 24 cores for $1k on X570? :)
I am sure there will be buyers for that kind of beast.
 
still not convinced that even an Ampere GeForce would saturate 16 lanes of PCIe 3.0, so maybe the 10900K would be possible.
It's wrong to think of PCIe speeds in terms of saturation. The role of PCIe speed in interactive graphics is more about reducing latency than trying to fill a pipe. Latency, in this case, is the delay between when a command buffer is sent to the GPU and when it can start executing it, or between when the GPU requests some data to render and when it arrives. So these latencies can actually affect throughput.
 
PCIe 4.0 has been shown to bring next-to-zero benefit to game loading times, and it has zero influence on in-game FPS, of course.
That's not true. It can have up to a few % benefit, depending on the game.

TechPowerUp tested 22 games, and the mean improvement at 1080p was 0.8%. However, the max was actually 3.2%. Those numbers are from an analysis I did on this previously:


And that's just with the RX 5700 XT. Wait 'till there are even faster GPUs that have it!
 
It still does not sound right to "upgrade" from 10 cores to 8 cores.
10 cores on a ring bus was a stretch, anyhow. Sure, if you need 10 cores, then Comet Lake makes sense.

However, I'd prefer a more efficient architecture with 8 cores and a lower TDP, even if multithreaded workloads were on par or even a touch slower. Most workloads are more sensitive to single-thread performance, and that's where Rocket Lake should really be an improvement.
 
Why are you not including any of the AMD Threadripper CPUs, or at least the 3950X, in this comparison? Some of us would like to see how Threadripper compares to Intel chips as a gaming platform. Yes, it may be a bit more expensive, but how much extra bang do you get on these platforms for your buck? That's something I'd love to know, along with what happens when loading and running games on, say, 4x PCIe 4.0 NVMe SSDs in a RAID 0 config, since Intel can't do this at these levels. What kind of impact would it have?

https://forums.tomshardware.com/threads/true-pc-gaming-performance-intel-or-amd.3606470/

It's hard to imagine any real demand for TR4 or 3950X gaming numbers, as both are generally slower than the 3800X/3900X... ergo, it's hard to imagine anyone intentionally purchasing one for gaming. :)

'Extra bang' with them? The only folks feeling 'banged' would likely be those purchasing them for gaming and then noticing gaming performance on par with a 3600X.

Testing gaming with 'x' number of PCIe 4.0 drives in RAID 0 would, IMO, seem more like a solution in search of a problem. (It seems like desperately hoping that such a rig might load a game level a quarter second faster than a rig with just one drive.)
 
10 cores on a ring bus was a stretch, anyhow. Sure, if you need 10 cores, then Comet Lake makes sense.

However, I'd prefer a more efficient architecture with 8 cores and a lower TDP, even if multithreaded workloads were on par or even a touch slower. Most workloads are more sensitive to single-thread performance, and that's where Rocket Lake should really be an improvement.
I don't think they are aiming for efficiency, but for the gaming crown.
They don't want to lose the "best CPU for gaming" title.
There is no need for 10 cores, but it looks bad not to release one.
 
Also interesting is that the older ES lists TSX support but the R0 revision does not. Both shots were taken on the same mobo, same BIOS, with no difference in the settings used.
That is odd, as TSX should be present according to the current 9900K Product Specifications page.
I see others have reported missing TSX on the KF... but it is also listed in the specs.
Perhaps it was disabled due to ZombieLoad 2, etc., and they forgot to update the specs?
TSX is not a minor feature. It has the potential for major performance benefits, though it's somewhat of a niche in terms of the programmers who take advantage of it. It still is not something that should be misdocumented in the specs.
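If anyone wants to check their own chip rather than trusting the spec page, the TSX feature bits (HLE and RTM) show up as CPU flags. A minimal sketch, assuming a Linux box where /proc/cpuinfo is available; on Windows you'd need a CPUID utility or library instead:

```python
# TSX shows up as the 'hle' and 'rtm' flags (CPUID leaf 7, EBX bits 4 and 11).
# This sketch just parses /proc/cpuinfo, so it assumes Linux.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("HLE:", "present" if "hle" in flags else "missing")
print("RTM:", "present" if "rtm" in flags else "missing")
```

On chips where TSX has been disabled via a microcode update, both flags simply disappear, which would line up with the ZombieLoad 2 mitigation theory.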