AMD Ryzen 7 2700X vs Intel Core i7-9700K: Which CPU is Better?

Yeah, I'm very happy with my 9700K so far too... it idles at 31 degrees at stock frequencies (with an admittedly large Corsair 360mm AIO radiator, but at that temperature the fans don't even kick on, so it's perfectly silent!).

These temps suggest that I have good overclocking headroom, but it's performing so well (and quietly) that I'm not sure I'm even going to bother. Maybe years from now when it gets long in the tooth and I really need or want that extra 5%.
 


To add to your answer ...

I think what the OP should have said was " ... no one buys these GPUs and plays at 1080p ..."

Computers aren't just about playing games for many people.

I have a powerful CPU and I game at 1080p without a 144Hz monitor ... I just happen to have a use for a powerful CPU and only game casually with a mid-range card. So if some reviewer would ever post a single dang benchmark with mid-range cards, that would actually be useful to me, and I am sure it would be to 85% of the other readers as well.
 
I don't know why, in every CPU review, people have to complain about games being tested at lower resolutions. The CPUs that win at 1080p are the same CPUs that will have higher frame rates at 2K and 4K. At 1080p you're not bottlenecking the GPU, so you can see a larger difference in FPS that is driven by CPU performance, but that difference will shrink at 2K and 4K. The faster CPUs will stay on top as you move to higher resolutions and place more load on the GPU.
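A quick way to picture that argument is to treat each frame as taking whichever is longer: the CPU's work or the GPU's work. Here's a minimal toy sketch of that idea in Python (the per-frame millisecond costs are invented purely for illustration, not taken from any review):

```python
# Toy bottleneck model: each frame is limited by whichever component is slower.
# The millisecond costs below are made up for illustration only.

def fps(cpu_ms, gpu_ms):
    """Frames per second when the slower of the two components sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical CPUs: the "fast" one needs 5 ms of work per frame, the "slow" one 6 ms.
cpu_fast_ms, cpu_slow_ms = 5.0, 6.0

# Hypothetical GPU cost per frame as resolution rises.
gpu_ms = {"1080p": 4.0, "1440p": 8.0, "4K": 14.0}

for res, g in gpu_ms.items():
    fast, slow = fps(cpu_fast_ms, g), fps(cpu_slow_ms, g)
    gap = 100.0 * (fast - slow) / slow
    print(f"{res:>5}: fast CPU {fast:5.1f} fps, slow CPU {slow:5.1f} fps, gap {gap:4.1f}%")

# At 1080p the GPU is cheap, so the CPUs differ by ~20%.
# At 1440p and 4K the GPU term dominates max(), both CPUs land on the same
# frame rate, and the gap collapses to 0% -- the faster CPU never falls behind.
```

Real games don't collapse to exactly zero, of course, but the direction of the effect is the same: the 1080p gap is the upper bound, and it shrinks as the GPU becomes the limiter.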
 
Today the 9700K doesn't have a particularly compelling argument over the excellent 8700K, which IMHO is the best CPU Intel has put out since the 3770.

I do expect that to change, though, in a couple of years as more games get optimized or designed to take advantage of more cores, and suddenly those higher frequencies of the 9th-gen products get multiplied across six or eight cores instead of four at most. You watch: these new Intel CPUs will hold up well over time and show even MORE of a performance edge over their contemporary competitors in future games than they do today.
 
Using lower resolutions for gaming benchmarks is Benchmarking 101. The gap shortens, if not disappears, once you move up the resolution scale. Fanboys just like to cry because they cannot accept that their precious CPU doesn't win everywhere. Nothing but pure dominance will make them happy, and any result that shows the other side winning results in a meltdown of tears.
 
Hey everyone. I'm glad you are all interested in the Intel vs AMD debate!

I wanted to clarify why the author of this story used the game testing methodology that he did, without getting into a long-winded internet war.

Question: Why do we test CPUs in 1080p when elite gamers are more interested in 1440p and 4K?
Answer: 1080p is preferred because it's a relatively low resolution where the strengths of the CPU shine the most. The limiting performance factor in higher-resolution gaming is the graphics card. Using a higher resolution takes the focus off the CPU, and we DON'T want to do that while testing CPUs. Therefore, it makes the most sense to test at 1080p (or below).

I also want to address redgarl's statement about using mainstream GPUs for testing to simulate real-world conditions.

Question: Why do we test CPUs with expensive video cards that most people don't own?
Answer: Using a less powerful video card shifts the bottleneck back to the GPU. We want to highlight CPU performance in these tests, so shifting the bottleneck defeats the purpose of the test. It's true that most users don't own $800 video cards, but for the sake of testing, it makes the most sense to use an expensive one.

Here's a follow-up question to think about: Do you think testing a CPU with a Radeon HD 7770 would be a good idea? I'm sure that video card is still floating around out there in a lot of machines.

Anyway, thanks for participating on the Tom's Hardware message boards.
 


Because the difference of 0 or 2 or 4 fps (check the actual benchmarks: https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/17) that we would see in a real-world gaming rig at 4K is not the same as the up-to-15% difference that all the benchmarks show when the CPU is artificially made the limiting factor by running it in a non-real-world state.

Do you have no use for data that represents real-world situations? Maybe, but other people differ, and that is why they want to see benchmarks across a spread of resolutions and even cards; reviewers rarely do that anymore, for some strange reason.

Sure, if you start piling on more GPU power, say quad 1080 Ti SLI at 4K, you will of course see the stronger CPU start to pull away again ... but again, that's because the CPU is the bottleneck, not the GPU.
 


Answer ... maybe not. But how about testing with the actual most popular card in the world, the one almost everyone currently owns: the GTX 1060? Where are those benchmarks?

My take isn't an AMD vs Intel issue. It's about including real-world scenarios in benchmark testing, which is strangely lacking everywhere ...
 


Yeah, if the Intel CPUs and motherboards were cheaper, it likely would have changed the overall "winner" given by the author. Price is certainly key when determining the best value.

I'm just glad that AMD is back to being competitive, and I look forward to seeing Zen2 processors hit the market!
 


Well said. That was the reason I'd pointed to the Techspot article here: they had a very definitive example (using only Intel CPUs, for those who thought there might be a bias) of how, when the CPUs were paired with a GTX 1050 Ti, a Pentium G4560 (2C/4T, locked) had "identical" performance to the Kaby Lake i5-7600K (4C/4T) & Skylake i7-6700K (4C/8T) -- & the latter two CPUs at stock even had faster clock speeds (~8% faster for the i5, ~15% faster for the i7).
 


But don't you think that owners of a 1050 Ti have a right to know that upgrading their CPU from a Pentium to a 6700K would not, in real life, give them any performance benefit? Don't you see that by not including the "real world" testing, the results are misleading to the majority of end users?

I have nothing against artificially created scenarios that emphasize testing results. My issue is with these replacing real-world test results altogether, which is what we are seeing everywhere.

If the Pentium owner in my example above had only the artificial results, they might be entirely misled into buying a shiny new, very expensive 6700K believing they will get 20% more FPS, when in reality they get ripped off due to the "misleading" info contained within the benchmarks.

This clearly isn't an AMD vs Intel issue ... and thank you for the clear example of how not including real-world testing in CPU benchmark results can be very misleading.
 


I think you are arguing for my point and you don't know it. If you look at the 4K benchmarks in the link you sent, you will see that all the CPUs, excluding the one AMD APU, are within 4 fps. We could all guess the 9700K's 4K gaming numbers to within ±2 FPS without even reading a review, because at 4K the test is GPU-limited. I'm saying I don't care about real-world situations unless there is a significant difference between the new product, in this case a CPU, and all the other products on the market.
 
Context. If I'm interested in finding out if upgrading from a Pentium to an i7 is going to improve my gaming, then I will look for articles that provide relevant tests. This article compares two particular CPUs. There is no material here that suggests what gain might be achieved by upgrading from a Pentium to one of these, and using this article for that purpose, quite frankly, would make no sense.
One very good point to make here, though, is that if you already have a 300-series board, the value proposition changes considerably; but again, this article makes no assumptions about what you already have.
So again, context.
 


Somewhat, I see your point ... yes, we could all "guess" at what real-world benchmarking might show ...

But I argue it is unfair to assume the casual reader of reviews would actually be able to make that same "guess". They are here to get informed by graphs and such. Why are we having to "guess" at actual real-world performance while reading a benchmark that should be telling us real-world performance?

The example that spdragoo gave is a perfect example of how not including the real-world results you say we should just "guess" at can be misleading to consumers, whether they are buying AMD or Intel.

Again, I have nothing against synthetic and artificial benchmarking -- there's a place for that -- but when it replaces the real-world results we are then supposed to "guess" at, there is a problem for the casual reader of these benchmarks.
 


To be fair, a $64 Pentium processor is within 14% of the 9900K at 4K with a 1080 Ti, and that average difference exists only thanks to specific, CPU-heavy benchmarks like Ashes.

They test at 1080p to measure something, at least in line with current game engines and APIs. Setting aside whether such resolutions are still used by any Ryzen 7/i7 buyers, it's the only way to see any meaningful difference between these CPUs, given how proportionally weak the GPUs are at higher resolutions. If there were magically a GPU capable of outputting 200fps at 4K, the results would be the same as what is measured now at 1080p.

We know there isn't one, and by the time there is, games might behave entirely differently and produce different results on different processors. But measuring at 1080p on the strongest GPUs is the only way to see the differences in today's games, no matter how abstract it is. Does it really make sense? There are good arguments for it, even if the argument against it is that it doesn't reflect the actual reality for the vast majority of buyers in this price range (the question isn't who games at 1080p, but who has a 144-240Hz 1080p monitor?!), nor does it allow us to predict how these chips will behave in future games that may spread the load more evenly across more threads.

I think the main takeaway is that if you wanted to measure CPU gaming performance in a way that is actually relevant to the user, you would conclude that all high-end CPUs released in the last five or so years are equally overpowered for most games, and that choosing a CPU for gaming isn't really a meaningful decision unless you game at ultra-high refresh rates at low resolutions; otherwise, any CPU you can buy today is good enough and will likely remain so for years to come.

I agree that presenting 1440p and 4K results next to those would be an eye-opener to a lot of people, mostly because it shows they don't need a high-tier processor at all, since there's no difference. But then at 4K, even with a 2080 Ti, you'd get a scale ranging from 75 fps on a Pentium to 85 fps on the 9900K as an average across all games, with only the most CPU-heavy games showing any difference at all.
Should they be included though? I believe so. Most people, even on tech forums, are weirdly surprised when they actually appear.
And that's on tech forums. Most of the public I talk with about CPUs have absolutely no idea that for 4K you can buy a significantly cheaper CPU and there won't be any difference. It seems weirdly counter-intuitive to most people, as they tend to extrapolate these super-high 1080p results onto 4K, assuming the performance ratios would be similar, or that the weaker chips would for some reason be even weaker at 4K. Not sharing such results certainly doesn't help fight those misconceptions.
 


Actually, that was my point about the CPU testing: only doing "realistic" test pairings ends up being misleading -- because, as we both agree, the average user would believe that the cheap Pentium is just as powerful as the super-expensive Core i7, when in fact the "equal" performance was only due to the GPU limitation.

That's why CPU testing should be done at a "mainstream" resolution (currently 1080p, but previously was at 720p & even lower back in prehistoric times) but with the top-of-the-line (or possibly #2 on the list) GPU & as much RAM as possible so that the differences in performance can be assumed to be due solely to the CPUs' capabilities.
 


Well, for starters, Tom's didn't say that.

 


My argument from the start has never been that we should only do real-world testing; you can read back through all my posts -- I made that disclaimer in almost every one. If anyone were arguing that we should only do real-world testing, I would argue against that, with you.

Two points -- you say that a gaming benchmark that shows a Pentium and a 6700K on a 1050 Ti is misleading because it doesn't show the "true" power of the 6700K in non-gaming workloads. We are talking about a gaming benchmark, right? Gaming benchmarks, you know, the ones that show you gaming performance? The whole topic of this discussion ... If I am looking at gaming benchmarks, I am obviously interested in gaming performance, and the gaming performance between the Pentium and the 6700K is EQUAL on the 1050 Ti, as posted earlier. So a proper gaming benchmark should show the real gaming result. What's wrong with showing reality? Why hide it?

Now let's talk about productivity. Let's say I want to edit videos and I am deciding between keeping my Pentium and going with the 6700K. Would I look at gaming benchmarks to determine if my Pentium is better? No, I wouldn't. I would look at productivity benchmarks, specifically video editing, and as long as we keep those results "real-world" I would see the appropriate results there.

So please keep your productivity argument out of a gaming benchmark debate ... I am, and have only been, referring to the expectations of CPU gaming performance created by how CPU gaming benchmarking is done, and to how those numbers most often can never represent real-world conditions unless one is stupid enough to actually build a gaming rig with a bottlenecked CPU.

I am not arguing that we should not do any 1080p gaming benchmarks; that was never my argument, sheesh. My argument is, as I asked before: is it appropriate to show 1080p or 720p benchmarks with a bottlenecked CPU without also showing how the CPU performs as it would in someone's rig at home?

If we show only the bottlenecked scores, we shouldn't keep saying "X processor is faster at gaming"; we should say "here's what the benchmark shows if we artificially induce a non-realistic situation, and over here is what your real-world results look like." That would make me happy.

(for people who also love cars ...)
Just like you can put a modified Corvette on a dyno and say "here we have 750hp, and that 2017 Ford GT over there has only 650hp" -- if that was the only thing that was said, or all you had was the dyno graph to look at, you would believe that the 'Vette was the faster car. Put them both on a drag strip or a track and you realize that your assumption based on the dyno graph couldn't be more wrong, and the real-world testing will show what your result will "actually" be. I'm not saying the dyno graph isn't fun or "good to know", but if the verdict on which car is faster were based on that alone, it would be wrong.

Just as I would argue in the opposite direction ... sure, the 2700X has more multithreaded chops than the 8700K, so one would assume it can encode video faster in Adobe Premiere ... the reality is that Adobe has optimized their code to utilize Intel iGPUs, so the 8700K actually does a better job. If we disabled the iGPU, sure, that would show the true performance of the actual CPUs and the 2700X would be the winner by a wide margin, but that wouldn't represent a real-world scenario, now would it? It would be misleading, wouldn't it?

So it should be tested as it would be used in the real world, which means the 8700K is a better Adobe Premiere CPU than the 2700X ... period. I don't care if you artificially induce a "proper" comparison by disabling the iGPU on the Intel chip; inducing that would be "wrong", wouldn't it? Or do "real world" results not matter enough to you that you think we should be disabling the iGPU on the 8700K for Adobe testing? I would argue that they definitely do matter, and see, I can argue for the blue team as well ... it's not about red vs blue.





 
I don't know how often you folks think people upgrade, but I have an i5-3570K and an R9 290. Upgrading the GPU to a 1080 doesn't help my fps much (25%), but the 9600K I just put in added over 50% fps at 1080p with no MSAA. I have no interest in gaming at 4K (who can tell when the action is rough and ready?) ... I don't even see the point of a 4K monitor, to be honest. I prefer putting my 144 Hz refresh rate to good use.
 
I know some people are unhappy with some parts of this article; however, overall this article is very fair. There was an article pitting the new Coffee Lake refresh against Ryzen, and I took a lot of exception to the pricing and "value" ratings in that article, but this one is very fair.

If all you are doing is gaming, the 9700K is better than the R7 2700X due to a very slight advantage in IPC and a very real advantage in clock speed. There are three things to take into consideration there, though:

1. People aren't buying these processors to play games at 1080p, and at 1440p and above the gaming performance is virtually even with the Ryzen 7 2700X.

2. For pure gaming, Intel has a much better option in the i5 9600K, which simply has two fewer cores but overall the same gaming capability, and it is only $280 compared to the 9700K's $400 listed in this article (the actual price from Newegg is $499.95). Keep in mind as well that the best gaming processor right now in terms of performance and cost is the R5 2600, which overclocked to 4.2GHz has the same overall gaming prowess as the 2700X and comes in at only $150.

3. I don't know if the 9700K suffers from this issue, but the numbers for the 9900K are all over the place, and the two processors are identical with the exception of Hyper-Threading being disabled on the 9700K. Here is a good video explaining the core issues:

https://www.youtube.com/watch?v=XfGz22ZjeGk

I don't know for sure if the 9700K suffers from those same issues, but it would stand to reason, as the only difference is Hyper-Threading. So we have an issue: are the 8-core Intel processors cheating on the TDP rating, or are they cheating in benchmark scores? Because they are cheating somewhere. Setting the 9900K to cap at the 95W limit it is supposed to operate at reduces its maximum all-core boost at stock from 4.7GHz to 4.0GHz, yet all the reviews and benchmarks we have for it at "stock" are from the perspective of 4.7GHz. But this can't be "stock": to hit 4.7GHz it's pulling 150W, way over the 95W "stock" limit, and by definition it is being overclocked. The biggest problem with these Intel 8-core processors is that we really have no true "stock" benchmarks to look at. Without a doubt, when overclocked they pull much higher clocks than Ryzen, but not everyone is going to overclock, and not every board is going to be able to handle overclocking these power-hungry, hot processors, so we really need to know what the true "stock" numbers are.
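As a rough sanity check on those clock and power numbers, here's a back-of-the-envelope sketch. It assumes, purely for illustration, that all-core package power scales roughly with the cube of frequency (since voltage tends to rise along with clocks); that's a crude model, not a measured voltage/frequency curve:

```python
# Crude sanity check of the "4.7GHz at ~150W vs. 95W TDP" claim above.
# Assumption (not a measurement): package power scales roughly as f^3,
# because voltage tends to rise with frequency (P ~ f * V^2).

def est_power_w(freq_ghz, ref_freq_ghz=4.7, ref_power_w=150.0):
    """Estimate all-core package power at freq_ghz from one reference point."""
    return ref_power_w * (freq_ghz / ref_freq_ghz) ** 3

for f in (4.7, 4.3, 4.0):
    print(f"{f:.1f} GHz -> ~{est_power_w(f):5.1f} W")

# Approximate output:
#   4.7 GHz -> ~150.0 W
#   4.3 GHz -> ~114.9 W
#   4.0 GHz -> ~ 92.5 W
# Under this rough model, capping the chip at its 95W TDP lands the all-core
# boost right around 4.0GHz, consistent with the claim above that reviews run
# at an all-core 4.7GHz "stock" are effectively overclocked results.
```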

Overall though the main conclusions in this article are true:
#1. Intel's 9700K offers slightly higher FPS in gaming and is a better gaming processor.

-Just keep in mind that the i5 9600K, for roughly half the cost, will game just as well in nearly every title and is therefore a better "go to" gaming processor, and that the R5 2600 is only half the cost of the 9600K and, overclocked, has the same overall gaming prowess as the 2700X. Taking all that into consideration, buying a 9700K at its current $500 makes no sense at all when the R5 2600 is $150 and only 16% slower at 1080p; 16% better performance doesn't justify costing $350 more, even before factoring in the need for premium cooling, which isn't included with Intel's processors (see the rough cost-per-performance sketch after these conclusions).

#2. The Ryzen R7 2700X is a better value and will generally perform workstation tasks better than the 9700K due to having twice the threads.

-Keep in mind that not all workstation tasks are faster: if you're using Adobe, the 9700K will perform better due to Adobe favoring high single-core clock speeds. However, also keep in mind that the R7 2700X can easily be found for $300 or under, while the 9700K, if you can find it, is going to run you a steep $500. The i9 9900K will outperform the Ryzen in all workstation-related tasks but will also cost you $600. And don't forget that the only numbers we really have for the new Intel 8-core processors are effectively overclocked benchmarks, since "stock" was measured at an all-core 4.7GHz (over 150W), while the "stock" 95W TDP will only support an all-core boost of 4.0GHz, for which we have no numbers. Taking cost into consideration, the 2700X is a much better buy for workstation tasks.
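To put the gaming value argument from point #1 in concrete terms, here is a tiny price-versus-performance calculation using only the figures quoted in this post (the $150/$500 prices and the ~16% 1080p gap); it's a rough sketch, not new test data:

```python
# Rough cost-per-performance comparison using the figures quoted above.
# Prices and the ~16% 1080p gap come from this thread, not from new testing.

chips = {
    # name: (approximate street price in USD, relative 1080p gaming performance)
    "R5 2600 (OC)": (150, 1.00),   # baseline
    "i7-9700K":     (500, 1.16),   # ~16% faster at 1080p per the post above
}

for name, (price, perf) in chips.items():
    print(f"{name:13s} ${price:3d}  {perf:.2f}x perf  ~${price / perf:6.2f} per perf unit")

# Output: roughly $150 vs. $431 per unit of 1080p performance -- i.e. about
# 3.3x the price for about 1.16x the frame rate, before adding the premium
# cooler that Intel doesn't include in the box.
```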