Should you upgrade your GPU or CPU for faster gaming? We tested many hardware combos to find out

Including the 5800X3D wouldn't have made much difference in this chart.

It's faster than an 11900K in gaming, and it would join the 11900K, 14900K, and 7800X3D in maxing out the 3080. It would almost certainly max out the 4080 as well. Tom's CPU charts show it as 95 percent as fast as a 7800X3D with a 4090 at 1440p, so a 4080 would be no challenge for it.
 
It really depends at that point. If you're running games that need more CPU cores, go for a better CPU, but if you're running games that lean on the GPU, go for the GPU. It also depends on the age of the system: anything 8+ years old and you should really be looking at a full system upgrade. That said, older CPUs can often run newer GPUs without a major bottleneck. For instance, the i7-4790K and the 2060 are 7-8 years apart as a pairing, but it still works excellently for gaming. The same goes for GPUs; people still use 900-series and older RX cards in newer systems. I use a 3700X with a 1070 myself, and it works fairly well. Just do a lot of research before upgrading, because not every upgrade is simple; it depends on bottlenecks and compatibility. So be careful when doing an upgrade.
 
"So, if you're rocking a top-tier GPU like the RTX 4080 or above, or the RX 7900 XTX, but you're running a five or six years old CPU, you're still giving up a lot of performance at 1440p ultra and it's time for an upgrade — or at least, it will be time to upgrade once AMD's Zen 5 and Ryzen 9000 CPUs arrive, unless you want to wait a bit longer for Intel's Arrow Lake CPUs."

I didn't see any AMD GPUs in that test, so can I safely assume that the above statement is subjective?
We can see how top-end GPUs scale with CPU here. There's no reason to expect AMD's GPUs to behave differently. It's what our GPU benchmarks hierarchy covers. Did I test a 7900 XTX in this article? No, but I've tested it in the past on the 13900K, 7950X, and 12900K and the results from those platforms coincide with what I'm seeing here. I'm not saying I can't/won't do more testing using more AMD hardware (see below), but adding just the 7900 XTX would mean another full week of benchmarks.

It's a topic to look at for the future, depending on how this does in terms of traffic.
 
Just a heads up to anyone here that likes this sort of article:

If you do, please share it on all your social networking platforms. Share it far and wide. Because that's ultimately what will determine how many more articles like this we end up doing.

This took, literally, a solid month of testing (because I have other stuff to do besides just this). 16 combinations and it's a full day of running benchmarks for each one. So that's more than three weeks of just running the tests, and then trying to figure out if some of the initial results were anomalies or were repeatable. (I basically retested the 7800X3D with the 2080 before determining that, yes, all of those numbers are consistent and it's slower than an 11900K with that particular GPU; I still don't fully understand why, but suspect it's just drivers being tuned on that older GPU for Intel processors.)

I can tell you that, at present, the traffic on this article is merely okay. As in, maybe it will equal the level of traffic we get from a Faceoff article like RTX 4060 vs RX 7600. And if that's all it manages, that does not bode well for future articles of this sort. Because, again, this was a month of work. That Faceoff? It used data I already had from the GPU benchmarks hierarchy, and even if I had to redo the testing (which I often do when a Faceoff is new, just to be sure the numbers are up to date), that's two days of testing versus 16, minimum. The math doesn't look good, in other words.

I explained the reasons for sticking to only Nvidia GPUs in the text of the article: It fully removes driver and major architectural differences from the equation (meaning DLSS and DXR are present on all three past generations). Yes, Nvidia may put more driver effort into newer GPUs than older ones, but it's at least more understandable than if we just started trying to reproduce the GPU benchmarks hierarchy data, except running everything on three more platforms.

So, again, please share this around and try to help boost the traffic, because that's the best way for me to be able to make a case for doing the same sort of testing with a stronger focus on AMD CPUs and GPUs. Also note that AMD is more limited in how far back we can go, since it has only had two generations of RT-compatible architectures. I could do 6600, 6900 XT, and 7900 XTX as an example, but there's nothing from before 2020 that can run our full test suite. And sure, AMD fans won't really care as much about RT... but if the traffic from this suite of testing is only "okay," I can't help but worry that doing the same testing with Zen, Zen 2, Zen 3, and Zen 4 plus AMD Radeon GPUs would have even less traction.
 
We can see how top-end GPUs scale with CPU here. There's no reason to expect AMD's GPUs to behave differently.
AMD's driver overhead has historically been lower than Nvidia's, and I've seen zero evidence that this has changed, so the results on the 8700K, and maybe some of the 11900K results, could look different. I'm sure it wouldn't make any difference on the faster CPUs.

This is absolutely not to take anything away from the article, as it's very good and useful. It's just something that is rarely covered, since most of the industry never bothers looking at platforms old enough for the difference to really matter.
 
I loved the article, but if articles like this aren't feasible, please consider adding a test of a game that shows some CPU dependency, with a couple of CPUs, when reviewing new low- and mid-range GPUs.

e.g., Spider-Man at 1440p High with DXR for a mid-range GPU, or Spider-Man at 1080p with DXR for a high-end GPU.

The comparison isn't nearly as important for high-end GPUs - anyone buying a 4080 or 7900 XTX is probably pairing it with a 7800X3D or 14900K anyway.
 
I'd also like to see some other games tested,

like Hearts of Iron, Football Manager, those kinds of games, because the further you get into the game, the slower it becomes, and that's where you can see a real difference between the hardware.
 
I'd also like to see some other games tested,

like Hearts of Iron, Football Manager, those kinds of games, because the further you get into the game, the slower it becomes, and that's where you can see a real difference between the hardware.
Most of the games you listed will play on old hardware at max settings; playing them on a newer system would give FPS numbers most monitors cannot support. Who really needs 500 FPS, or even 240 FPS, for a game to be enjoyable?
My monitor displays 60 FPS @ 1080p.
I just max out all the graphics settings and enjoy smooth gameplay, and forget about the other 80 or 300 FPS my video card or CPU could be producing.
OOOhh, something shiny!!!!!!!!!! 😵
 
Most of the games you listed will play on old hardware at max settings; playing them on a newer system would give FPS numbers most monitors cannot support. Who really needs 500 FPS, or even 240 FPS, for a game to be enjoyable?
My monitor displays 60 FPS @ 1080p.
I just max out all the graphics settings and enjoy smooth gameplay, and forget about the other 80 or 300 FPS my video card or CPU could be producing.
OOOhh, something shiny!!!!!!!!!! 😵
For me it's not about the FPS;
it's about the time the game needs to process things, and that's where you can see the real difference.

That is totally different from FPS.
 
AMD's driver overhead has historically been lower than Nvidia's, and I've seen zero evidence that this has changed, so the results on the 8700K, and maybe some of the 11900K results, could look different. I'm sure it wouldn't make any difference on the faster CPUs.

This is absolutely not to take anything away from the article, as it's very good and useful. It's just something that is rarely covered, since most of the industry never bothers looking at platforms old enough for the difference to really matter.
I think it’s more correct to say that AMD’s driver overhead can be different from Nvidia’s and that it very much depends on the game in question. Sometimes it will be lower, sometimes higher, and on average it’s probably quite similar.

And what I mean by this is that games like Borderlands 3 and The Last of Us (i.e., AMD-promoted games that tend to favor AMD GPUs) are statistical outliers and not the common case. Just as Total War: Warhammer 3 is not the typical result either.
 
I think it’s more correct to say that AMD’s driver overhead can be different from Nvidia’s and that it very much depends on the game in question. Sometimes it will be lower, sometimes higher, and on average it’s probably quite similar.

And what I mean by this is that games like Borderlands 3 and The Last of Us (i.e., AMD-promoted games that tend to favor AMD GPUs) are statistical outliers and not the common case. Just as Total War: Warhammer 3 is not the typical result either.
I'm not talking about statistical outliers; I'm talking about how, when CPU limited, AMD cards tend to perform better because there's less overhead. This seems like something you haven't tested, and I haven't seen anyone other than HUB test it.

Here's an example from one of their videos from a few years ago, and I'm pretty sure CP2077 has never at any point been an AMD-advantaged title:
[chart from the HUB video]
 
For me it's not about the FPS;
it's about the time the game needs to process things, and that's where you can see the real difference.

That is totally different from FPS.
I get what you're asking, but those games don't need a GPU at all, so it's safe to remove the GPU from the equation.

What you need for good performance is a good/better CPU and fast RAM. Also, don't play it on an HDD.
As far as the GPU is concerned, games like FM or Factorio are going to be happy with a tyre on a swing. Or an Intel UHD 730.
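If you wanted to put numbers on that, one option is to time the turn (or tick) processing itself rather than count frames. Here's a minimal sketch of the idea; the simulate_turn() function is a hypothetical stand-in, since these games don't expose their turn logic directly:

```python
import time

def simulate_turn() -> None:
    """Hypothetical stand-in for a game's end-of-turn processing."""
    total = 0
    for i in range(2_000_000):
        total += i * i

# Measure wall-clock seconds per turn instead of frames per second.
samples = []
for _ in range(5):
    start = time.perf_counter()
    simulate_turn()
    samples.append(time.perf_counter() - start)

print(f"average turn time: {sum(samples) / len(samples):.3f} s")
```

A number like that tracks the CPU and RAM far more than the GPU, which is exactly why those games don't care what graphics card you have.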
 
>I can tell you that, at present, the traffic on this article is merely okay.

It's not really up to users to evangelize articles, even if they were willing. From my view, the main problem affecting THW's visibility is just its awful layout, with filler articles pushing main articles off the top of the page, then off the front page altogether. Already, your piece is below the top visible portion of the page, mixed in with 4-5 other junk pieces.

If you want your work to get more traffic, fix THW's layout. Or get whichever corporate type to fix it. Use stickies.

For this particular piece, I can point to other deficits, such as the lack of an exec summary. One would need to be a dedicated reader to wade through the copious details, which, yes, took a lot of work, but make for poor readability. Other issues are systemic to all THW pieces, such as being text-heavy, making poor use of graphics, etc.

It's not geared toward the wider audience with brief attention spans. In the Internet era, less is more.
 
It's not geared toward the wider audience with brief attention spans. In the Internet era, less is more.
You either chase the short-attention-span readers, who have no loyalty (or attention span), or you explain the subject in detail, which is what returning readers want.

Having a featured article would help.
But picking on the TH layout is kind of weak when dozens of websites use the same layout. There's only so much you can do if you're trying to cover tech in depth. You can sex it up a little, or simplify it as much as you can, but you'll still have to split hairs in order to cover coolers or performance gaps in CPUs.
Throw in the marketing geniuses at AMD or Acer who come up with brilliant naming conventions that no sane person would want to understand.
I am sure Acer names their monitors by picking through the sounds of people blowing their noses.
 
Lots of info in the article. I appreciate all the hard work. People can have a hard time deciding when it's time for a CPU or GPU upgrade, and guides like this make it easier. It's like RAM reviews over the years: timings versus sheer data rate. As RAM changes over the years, so does the real-world effect of the specs. Is timing more important than data rate? It changes, and these kinds of articles are handy.
Thanks!
 
Including the 5800X3D wouldn't have made much difference in this chart.

It's faster than an 11900K in gaming, and it would join the 11900K, 14900K, and 7800X3D in maxing out the 3080. It would almost certainly max out the 4080 as well. Tom's CPU charts show it as 95 percent as fast as a 7800X3D with a 4090 at 1440p, so a 4080 would be no challenge for it.
But the 5800X3D can drop into a motherboard from 2017.
 
You either chase the short-attention-span readers, who have no loyalty (or attention span), or you explain the subject in detail, which is what returning readers want.

Having a featured article would help.
But picking on the TH layout is kind of weak when dozens of websites use the same layout. There's only so much you can do if you're trying to cover tech in depth. You can sex it up a little, or simplify it as much as you can, but you'll still have to split hairs in order to cover coolers or performance gaps in CPUs.
Throw in the marketing geniuses at AMD or Acer who come up with brilliant naming conventions that no sane person would want to understand.
I am sure Acer names their monitors by picking through the sounds of people blowing their noses.
I have to agree with the OP that the filler articles push the good ones out of the spotlight.
 
I can tell you that, at present, the traffic on this article is merely okay.
FWIW, it's the type of article that I will refer to & share with people (i.e. when the topic comes up) for probably the next couple of years, on occasion. I almost never refer back to news articles. An extreme example is Anandtech's i9-12900K launch review, which I've probably gone back to like a hundred times.

Other articles I would refer to, multiple times & share to friends & colleagues are:
  • CPU & GPU Power-scaling (like your excellent RTX 4090 article, from last year).
  • DDR5 scaling benchmarks, including a look specifically at CAS latency
  • PCIe scaling benchmarks

Oh, and while I'm listing things I'd like to see, I do wish Toms would revisit Raptor Lake performance, based on Intel's updated guidance, at some point.
 
Oh, and while I'm listing things I'd like to see, I do wish Toms would revisit Raptor Lake performance, based on Intel's updated guidance, at some point.
I think the problem is that the guidance keeps changing! At one point, Paul did a bunch of testing... only to find out the settings he was originally given were wrong. I think they specified a 127W PL2 limit when it was supposed to be 255W.

Incidentally, I got a new 13900K, as the one I had basically failed. The new chip performs within ~1% of the old chip in general, but it no longer crashes when installing Nvidia drivers or other stuff. The only major difference in performance would be if you had an earlier BIOS using the unsafe settings compared to a new BIOS running safe settings. That might show a 3~5% delta, but if you weren't enabling XMP, probably even less than that.
 
It's not really up to users to evangelize articles, even if they were willing.
This is the whole point of stuff like social networking and YouTube. If you like this sort of article but don't feel inclined to share it, that's fine. I knew going into it that it would be a lot of work and probably wouldn't do heavy traffic. Now I have data to back that up that says, in essence, don't do hard articles — do the easier stuff. I'm a PC enthusiast and that's why I did this testing, because it's good to put concrete numbers out there. Short of this picking up a lot more long-tail traffic than usual, though, I don't really plan to revisit the subject.
From my view, the main problem affecting THW's visibility is just its awful layout, with filler articles pushing main articles off the top of the page, then off the front page altogether. Already, your piece is below the top visible portion of the page, mixed in with 4-5 other junk pieces. If you want your work to get more traffic, fix THW's layout. Or get whichever corporate type to fix it. Use stickies.
Even before the article left the top box, traffic was flagging. It was not a rousing success, and I suspect it's far more than just the TH layout. It wasn't a clickbait piece, the data wasn't shocking, and — yes — it didn't get widely shared and upvoted on places like Reddit. The SEO for this isn't really clear either, so Google isn't going to send people to it. As you said, less is more. GPU Faceoffs clearly trump this sort of deeper dive.
 
Hi,

I was wondering why, in the Best CPUs benchmarks, the only results are with an RTX 4090 at 1080p and 1440p. I believe no one with an RTX 4090 is thinking about 1440p, much less Full HD (1080p).

Would it be too much to ask for 4K resolution benchmarks in the Best CPUs section?
https://www.tomshardware.com/reviews/best-cpus,3986.html

I am aware that at 4K resolution the workload shifts to the GPU more than the CPU, but it would be very interesting to see the CPU benchmark results at 4K. That way we could see if it is really worth it to upgrade from a 12600K to a 13700K or 7800X3D, for example.

Thank you for your consideration.

Regards,
Sue
 
Hi,

I was wondering why, in the Best CPUs benchmarks, the only results are with an RTX 4090 at 1080p and 1440p. I believe no one with an RTX 4090 is thinking about 1440p, much less Full HD (1080p).

Would it be too much to ask for 4K resolution benchmarks in the Best CPUs section?
https://www.tomshardware.com/reviews/best-cpus,3986.html

I am aware that at 4K resolution the workload shifts to the GPU more than the CPU, but it would be very interesting to see the CPU benchmark results at 4K. That way we could see if it is really worth it to upgrade from a 12600K to a 13700K or 7800X3D, for example.

Thank you for your consideration.

Regards,
Sue
This article is the perfect explanation for why we don't do our normal CPU testing at 4K ultra. Even when looking at Core i7-8700K versus Ryzen 7 7800X3D, the overall performance improvement is only 9.5%. That's because, as you note, the bottleneck mostly becomes the GPU. Going to a newer generation CPU like the i9-11900K, the difference between that and the other faster CPUs (13900K and 7800X3D) is only 2–3 percent.

That's with an RTX 4080, of course, and the 4090 is faster and thus would likely have slightly larger deltas. But the 1080p and 1440p results tell you how much difference the CPU can make if you're less GPU limited. Our GPU benchmarks meanwhile tell you how much of a difference the GPU can make if you're less CPU limited. Combining both CPU and GPU hierarchies then gives a reasonable approximation of what level of CPU you should try to have with a specific level of GPU.

As an example, let's say you're looking at the RTX 4070 Super. Well, performance in that case ends up roughly at the level of the RTX 3090 Ti (faster at 1080p and 1440p, slower at 4K). It's also 25% slower than the 4090 at 1440p, and 42% slower at 4K ultra. Now flip over to the CPU benchmarks. The i5-13400 is 42% slower than the 7800X3D at 1440p... and that's the slowest CPU in our list. Based on the same 42% delta, it would still end up being mostly GPU limited with RTX 4070 Super and Core i5-13400 when gaming at 4K ultra.

For 1440p gaming on the other hand, you'd want more like a Core i5-13600K (25% slower) or Ryzen 7 7700X. And for 1080p gaming, well, those same CPUs still look like they should do reasonably well at powering the 4070 Super.

Is that a perfect solution? No, but it's going to get you relatively close. Adding a third resolution to the CPU gaming benchmarks would mean taking 50% more time to do gaming tests. Just as adding another CPU to the GPU testing doubles the time required to test a GPU. It's not sustainable to do even more testing than we already conduct, in other words.
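To put rough numbers on the pairing logic above, here's a minimal sketch (not from the article; the FPS figures are illustrative placeholders, not measured data) that treats a system as limited by whichever ceiling, CPU or GPU, is lower:

```python
# Rough bottleneck model: effective FPS is capped by whichever component
# (CPU or GPU) has the lower ceiling in a given game and resolution.
# All numbers below are illustrative placeholders, not measured results.

def effective_fps(cpu_ceiling: float, gpu_ceiling: float) -> float:
    """Approximate frame rate when both the CPU and GPU limits apply."""
    return min(cpu_ceiling, gpu_ceiling)

fps_4090 = 120.0      # hypothetical GPU ceiling for an RTX 4090 at 4K ultra
fps_7800x3d = 200.0   # hypothetical CPU ceiling for a Ryzen 7 7800X3D

# A GPU that's 42% slower than the 4090, paired with a CPU that's 42% slower
# than the 7800X3D (the 4070 Super / Core i5-13400 example above).
gpu_ceiling = fps_4090 * (1 - 0.42)       # ~70 FPS
cpu_ceiling = fps_7800x3d * (1 - 0.42)    # ~116 FPS

print(effective_fps(cpu_ceiling, gpu_ceiling))  # still GPU-limited: ~70 FPS
```

Under this toy model, dropping to 1440p raises the GPU ceiling while a slower CPU's ceiling stays put, so the binding limit can flip to the CPU, which is why the recommendation shifts toward something like a 13600K or 7700X.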
 
I can tell you that, at present, the traffic on this article is merely okay. As in, maybe it will equal the level of traffic we get from a Faceoff article like RTX 4060 vs RX 7600.
The other article is titled "Product X vs Product Y", which is exactly the sort of phrase people will be searching for regularly. I'd argue that equalling that sort of article is actually pretty good. It is not a good ROI if your metric is views per hour of work, but if most views come from search engine traffic I think you'll always struggle with that metric. Engaged regulars like interesting articles, but we make up a small proportion of viewers. Unfortunately people coming from search engines are searching for phrases suitable for "pays the bills" articles and not interesting insights. That's why listicles are so popular. No effort required but wow do people like lists.

Maybe you should have titled this article "CPU vs GPU for gaming" :)
 