Best Eight-Core CPU Battle: AMD Ryzen 7 3800X vs Intel Core i7-9700K

If you use a high-end GPU and high-end CPU, are you really going to do all these overclocking tricks to get maybe, at most, 2-5 FPS? That falls into the "unnoticeable" category.

Note that TH used a 2080 Ti to generate the FPS data in this article, so you can't get higher FPS with any other current GPU.

Overclocking the 9700K generates an 11.4% increase in average FPS, from 129.5 to 144.2. From 60 FPS to 140 FPS, Nvidia's research shows an essentially linear increase in gamers' kill/death ratios with increasing FPS.

https://www.extremetech.com/gaming/...e-rates-can-almost-double-your-gaming-prowess
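For anyone who wants to sanity-check that 11.4% figure, here's the quick arithmetic (a minimal Python sketch using only the two averages quoted above):

```python
# Percentage FPS gain from overclocking the 9700K, using the averages quoted above.
stock_fps = 129.5
oc_fps = 144.2

gain_pct = (oc_fps - stock_fps) / stock_fps * 100
print(f"Average FPS gain from overclocking: {gain_pct:.1f}%")  # ~11.4%
```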
 
Some graphs showing the Ryzen multi-thread performance leads would have been nice, and gaming benchmarks without multiple resolution results are lame in my opinion.
 
Initially, I was going to count Intel's higher CPU+platform cost against it, but going for the i7 platform really only makes the whole system a few percent more expensive than a 3800X-based system. Noteworthy, but nothing major.

Personally, I would buy a 9700K if you fit the criteria below:
  • You do not run applications (like Adobe Premiere, for example) that Ryzen 7 excels in, or you do not require the extra performance over an i7 in said applications.
  • You do not see significant value in the upgrade path of socket AM4, which will receive new CPUs until 2020, as per AMD.
  • You game at 1080p high refresh and thus are unlikely to be GPU-bound, so in some cases you will see a benefit from the i7's extra frames.
Really, you can't go wrong with either as both are powerful CPUs that can excel at any task. They just trade blows depending on the task.

This is coming from a person who currently owns both an i5 and a Ryzen 5 system and likes both CPUs.
 
I generally don't buy mainstream CPUs because of the pervasive lack of PCIe lanes, but whenever I'm looking at CPUs, whether buying for family who run mainstream systems with no bells and whistles or HEDT CPUs for myself, the first two questions I ask are: (a) how much will it cost to run the CPU per year, and (b) how much heat will it dump out of my case?

I desperately want to dump Intel, but the third question is: how much will it cost for memory (if my current memory won't work with AMD), a new motherboard, and any other hardware I might have to change because of compatibility issues?

I've never seen a review that looks at things from this perspective. AMD products here are more expensive than Intel's. As an example, the latest Threadripper was going to cost me an estimated $600 more per year in power consumption, and cost me $1,000 more to buy given my existing memory wasn't QVL'd for AMD. With temps here easily approaching 100°F ambient in summer, not only was it more expensive to run, but it also produced more heat in a place where it was already oppressively hot and where there is no cooling.

Of course, these data are subjective, and my use case might be uncommon. There is no way to know for sure what the real increase in running costs will be without actually buying the hardware. But the point is, running costs are just as important as initial purchase costs. And when purchasing, if we are moving from Intel to AMD we must also consider motherboard and RAM costs, and potentially whether our existing cooler will work (if we have an aftermarket cooler).

It would just be nice if factors other than core counts, frequencies, and gaming vs. productivity were taken into consideration when writing reviews.
 
That seems implausible. Starting with, say, 4.4GHz and going to 4.575GHz (the 175MHz increase)... that's a clock speed increase of only 4%. And the real-world performance increase in gaming will be less than 4%, because other factors (memory subsystem, GPU, etc.) contribute to the performance, not just the CPU alone.
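To spell out the arithmetic behind that 4% figure (a minimal sketch; the 4.4GHz base is the hypothetical starting point from the sentence above, not a measured value):

```python
# Relative clock speed increase for the overclock being discussed.
base_ghz = 4.4    # assumed starting all-core clock
oc_ghz = 4.575    # after the 175MHz bump

increase_pct = (oc_ghz - base_ghz) / base_ghz * 100
print(f"Clock speed increase: {increase_pct:.1f}%")  # ~4.0%
```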

How is that a very nice boost in gaming performance? If you use a high-end GPU and high-end CPU, are you really going to do all these overclocking tricks to get maybe, at most, 2-5 FPS? That falls into the "unnoticeable" category.

OK, and by the same argument the 9900K only has a ~6% gaming advantage over processors such as the 3900X across a full gaming suite (as tested by Gamers Nexus, Hardware Unboxed, Linus, etc.), yet it is hailed as the "best gaming processor" and the processor to get for gaming, even though in most games the difference is "unnoticeable". Getting up to an additional 5+ FPS in some titles is consequential when the top of the line is only separated by ~6%. The only thing really separating the 3900X and 3800X for gaming is clock speed (PBO boost). If you overclock the 3800X properly, you end up with gaming performance as good as or better than the 3900X's (because 12 cores and 24 threads aren't really needed for gaming yet).

That being said, I have an RTX 2070, so my "limiting factor" is the GPU. I have it overclocked so I get roughly the same performance as a stock 2070 Super, but I would have to be running a 2080 Ti for the processor to be the limiting factor. The best I can do is run synthetics with Fire Strike and Time Spy. Running those benchmarks and comparing the Physics/CPU scores, there is a good bump in performance going from 4.4GHz to 4.575GHz. The overclocked Time Spy and Fire Strike results stack up very well even compared to a 9900K @ 5GHz. Real-world gaming probably has the 9700K and 9900K ahead depending on the title, but I have no way of really testing that without an RTX 2080 Ti. The reviews online are basically pointless because no one has their 3800X running with SMT off and overclocked to 4.575GHz, or even SMT on and overclocked to 4.5GHz. Every review site I've seen has the 3800X clocked at only 4.3GHz, and even then it is only behind the 9900K by an ~8% margin.

Of course, that makes an excellent point as well: without buying a $1,200 GPU you will see no gaming difference between the 9700K, 9900K, 3800X, or even the 3700X.
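To put that ~6% margin into absolute frames, here's a rough illustration (a sketch with made-up baseline framerates, not benchmark results):

```python
# What a ~6% average-FPS lead works out to at a few example framerates.
margin = 0.06
for baseline_fps in (60, 100, 144):
    delta = baseline_fps * margin
    print(f"At {baseline_fps} FPS average, a 6% lead is about {delta:.0f} FPS")
# 60 -> ~4 FPS, 100 -> 6 FPS, 144 -> ~9 FPS
```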
 
I generally don't buy mainstream CPUs because of the pervasive lack of PCIe lanes, but whenever I'm looking at CPUs, whether buying for family who run mainstream systems with no bells and whistles or HEDT CPUs for myself, the first two questions I ask are: (a) how much will it cost to run the CPU per year, and (b) how much heat will it dump out of my case?

I desperately want to dump Intel, but the third question is: how much will it cost for memory (if my current memory won't work with AMD), a new motherboard, and any other hardware I might have to change because of compatibility issues?

I've never seen a review that looks at things from this perspective. AMD products here are more expensive than Intel's. As an example, the latest Threadripper was going to cost me an estimated $600 more per year in power consumption, and cost me $1,000 more to buy given my existing memory wasn't QVL'd for AMD. With temps here easily approaching 100°F ambient in summer, not only was it more expensive to run, but it also produced more heat in a place where it was already oppressively hot and where there is no cooling.

Of course, these data are subjective, and my use case might be uncommon. There is no way to know for sure what the real increase in running costs will be without actually buying the hardware. But the point is, running costs are just as important as initial purchase costs. And when purchasing, if we are moving from Intel to AMD we must also consider motherboard and RAM costs, and potentially whether our existing cooler will work (if we have an aftermarket cooler).

It would just be nice if factors other than core counts, frequencies, and gaming vs. productivity were taken into consideration when writing reviews.
Some facts, along with some personal experience:

Memory does not have to be on your motherboard's QVL in order to work. In fact, the vast majority of the time, RAM not on the QVL will work just fine. I have used two RAM kits with my Ryzen system. Neither has been on the QVL, and neither has had issues running at its 3200MHz XMP speed. I have used both 1st- and 2nd-gen Ryzen CPUs.

Not sure where you live; however, consider this: Ryzen Threadripper 3000 CPUs perform better and are more power-efficient than Intel's Cascade Lake HEDT parts, so the operating costs will be lower. However, the Ryzen 3000 Threadripper CPUs cost a lot more initially.

Also, Ryzen Threadripper 3000 is easier to cool than Intel's Cascade Lake HEDT CPUs.
 
I find it very interesting that despite the 3800X having more L3 cache, faster memory speeds, and SMT, the 9700K beats it in all gaming benchmarks. Thanks for the article.
What I find interesting is that we have yet another TH review with absolutely no data on the test setup. What cooler was the Intel CPU running? The AMD CPU?

It also fails to get into the memory settings, etc. Fine-tuning a Ryzen CPU is a little different from your standard Intel "crank up the voltage and let's see what she can do" CPU.

I can get my 3800X to 4.4GHz all-core on 1.35V easily with decent cooling, all while running FCLK 1:1 with DDR4-3800. Not trying to say all 3800X CPUs can do this, but it is not unheard of. The first thing you do with this CPU is pair it with some decent low-latency RAM and, at a minimum, run it at DDR4-3600 (FCLK 1:1).

Maybe the tester did this? No idea; again, no test system data.
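For anyone unfamiliar with the 1:1 setup being described: DDR4's rated transfer rate is double the actual memory clock, so running the Infinity Fabric (FCLK) 1:1 with the memory clock puts FCLK at half the DDR4 number. A minimal sketch of that math (the kit speeds are just examples):

```python
# FCLK needed for a 1:1 (FCLK : MEMCLK) setup at a given DDR4 rating.
# DDR4 transfers twice per memory clock, so MEMCLK is half the DDR4-xxxx number.
def fclk_for_1to1(ddr4_rating: int) -> int:
    return ddr4_rating // 2

for rating in (3200, 3600, 3800):
    print(f"DDR4-{rating} -> FCLK {fclk_for_1to1(rating)} MHz for 1:1")
# DDR4-3600 -> 1800 MHz; DDR4-3800 -> 1900 MHz (the config described above)
```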
 
Yeah, the 3800X is cheaper (in Canada), includes a cooler, and is only 5 FPS lower on average if you play at around 120 FPS, while being up to 40 percent faster in productivity. Better yet, go buy the 3700X and have an actual valid comparison; nobody should be buying the 3800X. So I'll save over $100 in Canada, get a CPU that's faster in productivity while losing 5 FPS, and get a free cooler.

Btw, one game setting change is all it takes to get that 5 FPS back, and you won't even notice the difference in image quality. As the other poster said, the only reason there is a slight deficit is clock speed and inter-core latency, and you can see that if you compare against Intel's HEDT lineup, which also doesn't have the advantage of the ring bus and which can fall behind AMD even in gaming.

I don't want Intel to be poor value, but they are not even trying. As we all know, the 9900K should be priced like the 9700K; I won't buy Intel until then.

While I can see a point to getting the 3800X in the current generation if you want the best overclocking and performance (basically, if you want to pay extra for a binned 3700X), I don't think we will be seeing two 8-core/16-thread processors with Zen 3. I think AMD has realized the 3800X basically being a binned 3700X just wasn't convincing enough. At the end of the day, I'm sure AMD sold way more 3700X than 3800X processors. It's basically the difference between the i9-9900K and the 9900KS: the KS is simply better binned but the same processor. Every company does it, but I don't think we will see it again with Zen 3.
 
Overclocking the 9700K generates an 11.4% increase in average FPS, from 129.5 to 144.2. From 60 FPS to 140 FPS, Nvidia's research shows an essentially linear increase in gamers' kill/death ratios.

https://www.extremetech.com/gaming/...e-rates-can-almost-double-your-gaming-prowess
FWIW, correlation != causation. Serious gamers who play lots of competitive games tend to have higher-end graphics cards and displays, and/or are willing to disable some eye candy for more frames. Casual gamers tend to have more moderate GPUs, higher-latency displays/input devices, and tend to leave auto-detected settings alone (runs well most of the time, looks pretty, but can bog down when the SHTF*). So while I expect some scaling with framerate, I wouldn't expect it to be truly linear once you factor in skill, competitive nature, and supporting hardware. Of course, that sort of talk doesn't sell graphics cards as much as "FPS = near-perfect scaling in skill", so of course a graphics card firm isn't going to take a nuanced position. 😛

*Dynamic resolution can help, but in my experience large swings still mean temporary performance tanking while it shifts gears. It's better on carefully optimized console games, and it's usually better than static resolution, but you are still better off tweaking settings downward than bumping into dynares every time things get hairy.
 
Now that's an understatement. I just had a quick look, and the 3700X is about 14% cheaper in the UK while giving nearly identical performance to the 3800X. They perform so closely that you have to ask why they both exist, and the only answer I can come up with is marketing and squeezing a bit more out of customers who don't properly research their next purchase.
The 3800X is better-binned silicon. You have a small point, but the 3800X is often on sale for a mere $10 more than the 3700X.
 
I find it very interesting that despite the 3800X having more L3 cache, faster memory speeds, and SMT, the 9700K beats it in all gaming benchmarks. Thanks for the article.

Intel has been faster in gaming all along, especially when overclocked. The new Ryzen only closes the gap; it's not faster. The main problem with Intel is the price: it's ridiculously expensive.
 
Intel has been faster in gaming all along, especially when overclocked. The new Ryzen only closes the gap; it's not faster. The main problem with Intel is the price: it's ridiculously expensive.
Considering that the i7 + cooler + motherboard is only around $100 more than an R7 + stock cooler + board, it really isn't much when factored into a system with a decent GPU like a 2060 Super or 2070 Super.
 
What I find interesting is that we have yet another TH review with absolutely no data on the test setup. What cooler was the Intel CPU running? The AMD CPU?

It also fails to get into the memory settings, etc. Fine-tuning a Ryzen CPU is a little different from your standard Intel "crank up the voltage and let's see what she can do" CPU.

I can get my 3800X to 4.4GHz all-core on 1.35V easily with decent cooling, all while running FCLK 1:1 with DDR4-3800. Not trying to say all 3800X CPUs can do this, but it is not unheard of. The first thing you do with this CPU is pair it with some decent low-latency RAM and, at a minimum, run it at DDR4-3600 (FCLK 1:1).

Maybe the tester did this? No idea; again, no test system data.

That's my fault; I edited and put in the images, etc. I forgot to add the test system at the bottom, but I will fix that now. Thanks for the heads up :)

Edit: sigh, there is an issue with the formatting due to a bug, but I will get that squashed. The data should all be in there, though.
 
From 60 FPS to 140 FPS, Nvidia's research shows an essentially linear increase in gamers' kill/death ratios.
You mean "Nvidia marketing nonsense". Actually, a lot of what they claim about higher frame rates making it a bit easier to aim is true, and it's actually a pretty reasonable "article" for the most part, at least until you get down to that rubbish data tacked on to the end. The K/D ratio part is little more than deceptive marketing. They're trying to make it sound like getting a high-end graphics card and high refresh rate monitor will double a player's performance, but they are using data in an incorrect way to come to that conclusion. If you actually follow the links through to their own study they are referencing, you see that this is how they acquired that data...

One of the common metrics of player performance in Battle Royales is kill-to-death (K/D) ratio -- how many times you killed another player divided by how many times another player killed you. Using anonymized GeForce Experience Highlights data on K/D events for PUBG and Fortnite, we found some interesting insights on player performance and wanted to share this information with the community.
So they simply looked at anonymized player statistics and found that the players with higher K/D ratios were also more likely to be getting higher frame rates. They used this to support the suggestion that high framerates were what was making those players perform so much better, but in reality the causation likely runs in the opposite direction. Namely, players with the highest K/D ratios have them largely because they play the game a lot, and those players are more likely to buy higher-end hardware to support their gaming than the more casual players who don't have such high K/D ratios.
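For what it's worth, you can reproduce that kind of spurious correlation with a toy simulation in which framerate has no causal effect on K/D at all; both are simply driven by how much someone plays. All of the numbers below are made up purely for illustration:

```python
# Toy model: hours played drives both hardware (FPS) and skill (K/D).
# FPS never affects K/D here, yet the two still come out positively correlated.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(0)
fps, kd = [], []
for _ in range(10_000):
    hours = random.expovariate(1 / 300)                    # hours played per year (made up)
    fps.append(60 + 0.2 * hours + random.gauss(0, 20))     # dedicated players buy better hardware
    kd.append(0.5 + 0.002 * hours + random.gauss(0, 0.3))  # practice improves K/D, not framerate

print(f"correlation(FPS, K/D) = {pearson(fps, kd):.2f}")   # clearly positive
```

The correlation comes out clearly positive even though, by construction, nobody's framerate ever changed their K/D.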

SMT won't really improve game performance and games love high clocks - it's straightforward.
For an 8-core processor running today's games, that's reasonably accurate. For a 6-core processor, SMT is already helping to avoid performance hitches in some demanding games. That will only become more of a concern in the years to come, and may even impact performance on these 8-core parts within the next couple of years or so. And that's even more important if people are running tasks in the background, whether it's for streaming or just something like a web browser, which review benchmarks don't really test for. With Intel adding SMT across the lineup for their new processors within the coming months, having 6 or 8 cores with SMT will become the norm for nearly all new mid-range or better gaming systems, and while developers won't abandon targeting lower-thread-count processors overnight, you will likely see more performance hiccups on systems lacking SMT down the line. The slightly higher per-core performance of the current Intel hardware in games will also be beneficial, of course, depending on the game, but the benefits of SMT are there too, even if they are not as straightforward at this time.

As an example, the latest Threadripper was going to cost me an estimated $600 more per year in power consumption
This seems pretty unlikely even if the cost of electricity in your region were absurdly high. The current average cost of electricity in the US is about 13 cents per kilowatt-hour. At that rate, 1 watt of power draw running 24/7 for an entire year would cost you a little over a dollar ($1.14). For the processor to cost $600 more per year, it would need to be drawing about 525 watts more than a competing processor doing the same amount of work, and would need to be running under full load for that entire year. If anything, the Threadripper 3000-series processors should be significantly more efficient than the competition, so I don't see any way those numbers could come anywhere close to panning out.
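Here's the back-of-the-envelope math, if anyone wants to plug in their own electricity rate (a minimal sketch assuming the ~$0.13/kWh US average and a 24/7 full load):

```python
# Annual cost of 1 W of continuous draw, and the extra wattage a $600/year
# difference would imply, at an assumed electricity rate of $0.13/kWh.
RATE_USD_PER_KWH = 0.13
HOURS_PER_YEAR = 24 * 365

cost_per_watt_year = (1 / 1000) * HOURS_PER_YEAR * RATE_USD_PER_KWH
print(f"1 W running 24/7 costs about ${cost_per_watt_year:.2f} per year")           # ~$1.14
print(f"A $600/year delta implies ~{600 / cost_per_watt_year:.0f} W of extra draw")  # ~527 W
```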

I don't want Intel to be poor value, but they are not even trying. As we all know, the 9900K should be priced like the 9700K; I won't buy Intel until then.
I mean, that's pretty much what should be happening in the coming months. The i7-10700K should more or less be an i9-9900K at a somewhat more reasonable price, from the looks of it. Of course, it will still be around $100 USD more than an 8-core, 16-thread Ryzen, plus the cost of a capable cooler, which for a processor with that level of heat output should be close to another $100 extra. So while the value should be improved, I'm not sure it's quite there, and aside from very high-end systems, one would probably still be better off putting that money toward graphics hardware instead.
 
What I find interesting is that we have yet another TH review with absolutely no data on the test setup. What cooler was the Intel CPU running? The AMD CPU?

It also fails to get into the memory settings, etc. Fine-tuning a Ryzen CPU is a little different from your standard Intel "crank up the voltage and let's see what she can do" CPU.

I can get my 3800X to 4.4GHz all-core on 1.35V easily with decent cooling, all while running FCLK 1:1 with DDR4-3800. Not trying to say all 3800X CPUs can do this, but it is not unheard of. The first thing you do with this CPU is pair it with some decent low-latency RAM and, at a minimum, run it at DDR4-3600 (FCLK 1:1).

Maybe the tester did this? No idea; again, no test system data.

I would also be interested in the test setup. It would be interesting to see if the RAM was set at a 1:1 ratio so the Infinity Fabric would be running at the proper speed.
 
That's my fault; I edited and put in the images, etc. I forgot to add the test system at the bottom, but I will fix that now. Thanks for the heads up :)

Edit: sigh, there is an issue with the formatting due to a bug, but I will get that squashed. The data should all be in there, though.

Thanks......I think?


Cooling:

  • Corsair H115i
  • Custom loop: EKWB Supremacy EVO waterblock, dual 720mm radiators
  • AMD Wraith Prism, Wraith Stealth stock coolers

Am I to understand the Intel system was running a Corsair custom loop and the AMD system its stock cooler? Not sure what to make of this.........
 
I bought my 3800X for $300. The 3700X was selling for $280 and the 9700K for $350. And yeah, haha, lol, a stock cooler, but a free cooler is better than the air Intel packages with its unlocked processors. I haven't used AMD since the Athlon 600, so I'm by no means a team-red fanboy, but Intel really has lost the advantage this generation, and it doesn't sound like the next one is going to help them make up ground.
 
I would also be interested in the test setup. It would be interesting to see if the RAM was set at a 1:1 ratio so the Infinity Fabric would be running at the proper speed.

Typically speaking, the 3800X will do better in gaming benchmarks with AOC and PBO enabled, and will do much better in applications such as Blender with an all-core OC.

As to why they were only able to obtain the modest OC they did: it could depend on a variety of things, such as the VRM design of the mobo (please tell me you didn't use the MSI Gaming Plus!!), or maybe it was pre-release silicon sent out to reviewers prior to launch... hard to say, really.

Also, we need to know what BIOS revision the board was running, whether it had the updated chipset drivers, etc. A lot of these reviews that regurgitate old test data for new material aren't really accurate anymore. Not saying that is the case here, but without disclosure, how can anyone tell?
 
Thanks......I think?


Cooling:

  • Corsair H115i
  • Custom loop: EKWB Supremacy EVO waterblock, dual 720mm radiators
  • AMD Wraith Prism, Wraith Stealth stock coolers

Am I to understand the Intel system was running a Corsair custom loop and the AMD system its stock cooler? Not sure what to make of this.........

Yeah, what the heck is up with this??? Did the Intel system have a custom loop while the AMD systems were given the stock coolers? Is this more of that "it's what came with the AMD system" nonsense? By that standard, we should test that 9700K without any cooler at all, right... How far would it overclock then?
 
Yeah, you don't run one brand on its stock cooler and the other on some of the best cooling possible.

How high a CPU will boost is heavily dependent on cooling, especially with AMD. You could have seen better performance from AMD with better cooling.

Remember, this is what Principled Technologies did in their testing!

Also, tweaked memory timings make a big difference.
 
Yeah, what the heck is up with this??? Did the Intel system have a custom loop while the AMD systems were given the stock coolers? Is this more of that "it's what came with the AMD system" nonsense? By that standard, we should test that 9700K without any cooler at all, right... How far would it overclock then?

Hardware:
  • AMD Socket AM4 (X570): AMD Ryzen 9 3900X, Ryzen 7 3800X, Ryzen 7 3700X, Ryzen 5 3600X, Ryzen 7 2700X; MSI MEG X570 Godlike; 2x 8GB G.Skill Flare DDR4-3200 (Ryzen 3000 - DDR4-3200, DDR4-3600; second-gen Ryzen - DDR4-2933, DDR4-3466)
  • Intel LGA 1151 (Z390): Intel Core i9-9900K, i7-9700K, Core i5-9600K; MSI MEG Z390 Godlike; 2x 8GB G.Skill FlareX DDR4-3200 @ DDR4-2667 & DDR4-3466
  • AMD Socket AM4 (X470): AMD Ryzen 5 1600X; MSI X470 Gaming M7 AC; 2x 8GB G.Skill FlareX DDR4-3200 @ DDR4-2933
  • All systems: Nvidia GeForce RTX 2080 Ti; 2TB Intel DC4510 SSD; EVGA Supernova 1600 T2, 1600W; Windows 10 Pro (1903 - All Updates)

Looks like they used an MSI Godlike (great board), but it's anyone's guess what settings they used for the RAM..........

Come on, Tom's, just set the systems up again and retest. If we are comparing the two processors, the cooling should be the same at a minimum. Both processors should have the most up-to-date BIOS, and the most up-to-date Windows version with all the patches for the Intel CPU vulnerabilities.
 
Typically speaking, the 3800X will do better in gaming benchmarks with AOC and PBO enabled, and will do much better in applications such as Blender with an all-core OC.

As to why they were only able to obtain the modest OC they did: it could depend on a variety of things, such as the VRM design of the mobo (please tell me you didn't use the MSI Gaming Plus!!), or maybe it was pre-release silicon sent out to reviewers prior to launch... hard to say, really.

Also, we need to know what BIOS revision the board was running, whether it had the updated chipset drivers, etc. A lot of these reviews that regurgitate old test data for new material aren't really accurate anymore. Not saying that is the case here, but without disclosure, how can anyone tell?

It depends on the overclock utilized. AOC and PBO are good, but a manual overclock will do better in gaming if you disable SMT and are able to overclock 8 cores/8 threads to 4.5GHz+. I know my 3800X is a good overclocker; I can hit 4.5GHz all-core with SMT enabled @ 1.41V, and with SMT disabled I can do 4.575GHz. Every 3800X I have worked with has hit 4.4GHz all-core, with voltages ranging between 1.3 and 1.34V on average (LLC set to max). My BIOS is 1.0.0.3ABBA, which for my X470 board has the best overclocking ability; from my experience, 1.0.0.3ABBA is the best BIOS for X470, and X570 does best with 1.0.0.4 (patch B). My drivers are all straight from AMD (not Asus) and are the latest available. I would assume that with an X570 Godlike ($$$) they would be running the latest BIOS and drivers, but who knows.
 
Hardware:
  • AMD Socket AM4 (X570): AMD Ryzen 9 3900X, Ryzen 7 3800X, Ryzen 7 3700X, Ryzen 5 3600X, Ryzen 7 2700X; MSI MEG X570 Godlike; 2x 8GB G.Skill Flare DDR4-3200 (Ryzen 3000 - DDR4-3200, DDR4-3600; second-gen Ryzen - DDR4-2933, DDR4-3466)
  • Intel LGA 1151 (Z390): Intel Core i9-9900K, i7-9700K, Core i5-9600K; MSI MEG Z390 Godlike; 2x 8GB G.Skill FlareX DDR4-3200 @ DDR4-2667 & DDR4-3466
  • AMD Socket AM4 (X470): AMD Ryzen 5 1600X; MSI X470 Gaming M7 AC; 2x 8GB G.Skill FlareX DDR4-3200 @ DDR4-2933
  • All systems: Nvidia GeForce RTX 2080 Ti; 2TB Intel DC4510 SSD; EVGA Supernova 1600 T2, 1600W; Windows 10 Pro (1903 - All Updates)

Looks like they used an MSI Godlike (great board), but it's anyone's guess what settings they used for the RAM..........

Come on, Tom's, just set the systems up again and retest. If we are comparing the two processors, the cooling should be the same at a minimum. Both processors should have the most up-to-date BIOS, and the most up-to-date Windows version with all the patches for the Intel CPU vulnerabilities.

Good catch; no mention of the RAM settings (frequency/timings)... Plus an X570 MSI Godlike, arguably the best X570 board available, and only 4.3GHz @ a whopping 1.42V... Seriously, that's horrible... But if they were trying that with stock cooling, thermal throttling would be a major issue... I sure hope they didn't put 1.42V on a 3800X with stock cooling...

Good luck with "all the patches for the Intel CPU vulnerabilities"... Intel would blow a gasket if the tests were run with all the mitigations in place...
 