Question: Understanding Bottlenecks

Link to my current build: https://pcpartpicker.com/list/Btn7tn

Hello, everyone. I am trying to get a better understanding of CPU/GPU bottlenecks. I am using the following site, though I have tried a couple of other ones: https://pc-builds.com/bottleneck-calculator/

First, I can't get a report for my system, since the Arc A750 isn't in the database. Aside from that, what boggles my mind is how the numbers are impacted by resolution. For example, if I pair my Ryzen 7 5700X with an RX 6950 XT at 1080p, the result says that "AMD Ryzen 7 5700X is too weak for AMD Radeon RX 6950 XT." At face value, that would make me think I need a new CPU. However, if I change the resolution to 4K, the new result is "AMD Ryzen 7 5700X and AMD Radeon RX 6950 XT will work great together."

If I am trying to use the tool to figure out what upgrade path to take, this doesn't seem very useful. How does increasing the resolution make the CPU a better match with the GPU? If the idea is that the CPU and GPU should both be around similar utilization levels, then I can see it. However, a casual person would assume that if your CPU is at lower utilization while playing a graphically demanding game, that's a good thing. If anyone can help me better understand this whole thing, I would greatly appreciate it.
 
If I am trying to use the tool to figure out what upgrade path to take, this doesn't seem very useful. How does increasing the resolution make the CPU a better match with the GPU?
It's about getting similar FPS from both.
A GPU can get more FPS at 1080p than at 4K, which is only logical. So even if the GPU is too fast for the CPU at 1080p, it can be a good match at 4K, where the GPU will be slower and the CPU will be able to keep up.
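
To put rough numbers on that, here's a toy model of the idea; the FPS figures are made up purely for illustration, not benchmarks:

```python
# The frame rate you actually see is capped by whichever side is slower:
# the CPU preparing frames (roughly resolution-independent) or the GPU
# rendering them (heavily resolution-dependent). All numbers are invented.
cpu_fps = 160                                     # frames/s the CPU can prepare
gpu_fps = {"1080p": 240, "1440p": 150, "4K": 70}  # frames/s the GPU can render

for res, gfps in gpu_fps.items():
    limiter = "CPU" if cpu_fps < gfps else "GPU"
    print(f"{res}: ~{min(cpu_fps, gfps)} FPS ({limiter}-limited)")
```

At 1080p the CPU is the cap (a "bottleneck"), yet the exact same pairing is GPU-limited at 4K, which is why the calculator's verdict flips with resolution.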
 
Hey there,

Agree with the above; "bottlenecking" as a term is often misunderstood, and overrated as a measure of performance.

The faster the CPU, the more frames it can prepare and send to the GPU. When the GPU is midrange, like an Arc A750 at 1080p, the GPU simply won't be able to keep up with the CPU, so it will max out while CPU usage stays lower than expected.

If, on the other hand, your GPU were an RTX 4090, your current CPU would bottleneck the GPU, because at 1080p a faster CPU would push that 4090 harder, and because the 4090 can do far more work (as long as the CPU can feed it).

Kinda confusing, eh!?

With all of that said, the 5700X is a quality gaming CPU and performs very well at all resolutions. You can also set PBO and Curve Optimizer (CO) to push your CPU a bit harder. It's suitable for all GPUs, high end or not.

I guess the point here is having a balance is important.

Play your games: do they feel OK? Do they stutter?


How does increasing the resolution make the CPU a better match with the GPU?
This is because at higher resolutions (mainly above 1080p) the load shifts from the CPU to the GPU. At 1440p and above, the CPU is less often the limiting factor, and at 4K most strong CPUs perform about the same, since it's the GPU doing most of the work.
 
However, a casual person would assume that if your CPU is at lower utilization while playing a graphically demanding game, that's a good thing
Yes and no. For the most part, a CPU working at about 40-70% is about right. You don't want your CPU maxed out, as that causes stuttering.

For the GPU it's a little different. Ideally you want the GPU pretty much maxed out at about 97-100%. This doesn't cause stuttering; it's the way GPUs are designed to work. This is where the balance comes in.

If you have an 8c/16t CPU, it will hardly even be using 50% of its compute resources, even in most games with background tasks running. There's just so much CPU horsepower available. Most games don't use more than maybe 6-8 threads, so any 8c/16t CPU will have bags of room to manoeuvre. This is, of course, changing as games make heavier demands on the CPU.
 
If I am trying to use the tool to figure out what upgrade path to take, this doesn't seem very useful. How does increasing the resolution make the CPU a better match with the GPU?
To try to make it easy:
Your CPU just processes information; it can put out X amount of FPS in a given game, and that number stays about the same at 1080p, 1440p, or 4K. It couldn't care less about the resolution.

The video card is completely different: it must render the game. So if it can put out X amount of FPS at 1080p, switching to 1440p with the same in-game settings will give you roughly half that FPS, and going to 4K with the same settings roughly a quarter of what it did at 1080p.
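
You can sanity-check that rule of thumb against the pixel counts. A rough back-of-the-envelope, assuming GPU-bound FPS scales inversely with pixel count (real games only approximate this):

```python
# Estimate GPU-bound FPS from pixel count. Real scaling is rarely this
# clean, so treat the output as ballpark figures only.
resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}
fps_at_1080p = 120  # hypothetical measured GPU-bound FPS at 1080p

base = resolutions["1080p"]
for name, pixels in resolutions.items():
    est = fps_at_1080p * base / pixels
    print(f"{name}: {pixels / base:.2f}x the pixels -> ~{est:.0f} FPS")
```

4K really is 4x the pixels of 1080p, so "about a quarter" holds up; 1440p is 1.78x, so the drop there is usually a bit less than half.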
 
  • Like
Reactions: Drerunsit
A bottleneck is a symptom of a problem, not the problem itself. So what is the problem? You're not getting the performance you want. When you get to that point, you profile the computer and figure out what the weakest link is. That is the bottleneck.

To make this easier on you when you want to know when to upgrade, set reasonable performance requirements. 1080p 60 FPS on medium presets is what I call a reasonable performance requirement; 4K 120 FPS Ultra is not. Heck, I would say even 1440p 120 FPS Ultra quality is a bit of a high bar. In any case, when most of your games aren't meeting this requirement, then it's time to look at what to upgrade.
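
To make that "most games aren't meeting it" check concrete, here's a minimal sketch; the game names and FPS numbers are invented examples:

```python
# Compare your own measured averages against the target you chose.
measured_fps = {"Game A": 74, "Game B": 55, "Game C": 48}  # invented numbers
target_fps = 60

failing = [game for game, fps in measured_fps.items() if fps < target_fps]
if len(failing) > len(measured_fps) / 2:
    print(f"Most games miss {target_fps} FPS ({', '.join(failing)}): time to profile and upgrade.")
else:
    print("Most games hit the target: no upgrade needed yet.")
```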
 
That "bottleneck calculator", and all the others, are pure junk.
No relation to any reality.
Ignore them completely.

I agree. Complete garbage.

To make this easier on you when you want to know when to upgrade, set reasonable performance requirements. 1080p 60 FPS on medium presets is what I call a reasonable performance requirement; 4K 120 FPS Ultra is not.

I would agree with this too. What's funny is I saw a post the other day where someone mentioned gaming at 4K 144 Hz. Is that even possible with a 4090? Without DLSS, I mean; that's something I've never used.

I only ask because my OLED does 120 Hz, but my system wasn't hitting 120 frames on Ultra. It makes no difference to me, because I've looked at my display at both 60 Hz and 120 Hz and I see no difference. Maybe it's because I'm getting old... who knows?

So I just game at 4K 60 Hz Ultra and my system has no problem getting there.
 
Thank you to everyone who responded...you're making it hard to choose which post gets the "Best Answer" nod. Though, as I'm writing this, I don't seem to see it as an option anymore.

Anyhow, right now I suppose I don't have any specific issue that is concerning. I have made a couple of threads over the past few weeks concerning fairly weak FPS performance in PUBG, as well as "within range but unimpressive" 3DMark scores with my GPU. What truly prompted this thread was seeing all these price reductions for Radeon GPUs that would be a huge step up for me (e.g., the RX 6950 XT) and wondering if my CPU would be a good pairing with them. Furthermore, my motherboard is easily the top candidate for replacement, though I would probably be better off waiting until it's time to go to AM5 (or whatever follows it, depending on when I next perform a major upgrade, which is historically every 5-7 years).

On the subject of the motherboard, is there a GPU threshold where it matters that I have PCIe 4.0, as opposed to my current PCIe 3.0? From what I've read here and there, it seems that this difference in PCIe generation has a negligible effect on GPU performance; it has far greater benefit for NVMe drives.
 
On the subject of the motherboard, is there a GPU threshold where it matters that I have PCIe 4.0, as opposed to my current PCIe 3.0? From what I've read here and there, it seems that this difference in PCIe generation has a negligible effect on GPU performance; it has far greater benefit for NVMe drives.
Considering PCIe scaling on an RTX 4090, I think it's safe to say PCIe 3.0, at least for 16-lane cards, is fine for several more years.

The only issues I've seen with PCIe are AMD's lower-end cards with 8 or 4 lanes, where if you're not on PCIe 4.0 they'll start suffering in performance much sooner, especially once VRAM gets filled.
 
Furthermore, my motherboard is easily the top candidate for replacement, though I would probably be better off waiting until it's time to go to AM5 (or whatever follows it, depending on when I next perform a major upgrade, which is historically every 5-7 years).

On the subject of the motherboard, is there a GPU threshold where it matters that I have PCIe 4.0, as opposed to my current PCIe 3.0? From what I've read here and there, it seems that this difference in PCIe generation has a negligible effect on GPU performance; it has far greater benefit for NVMe drives.
Motherboards are generally not worth upgrading unless one is moving to a new CPU that requires a different board. For the most part, everything from a lower-end board to a higher-end one will perform very similarly outside of scenarios involving heavy overclocking or very high power-draw CPUs. Aside from that, it's mainly just features that separate them, and much of that can be added to existing boards through expansion cards or USB devices.

And PCIe 4.0 is unlikely to provide any perceptible difference in performance, outside of a few cases like hotaru.hino mentioned, where a handful of lower-end graphics cards support fewer lanes, either halving or quartering the available bandwidth, which becomes more of an issue when the card doesn't have enough VRAM to avoid constantly swapping data to system memory over the PCIe connection.

Even for NVMe storage, the benefits tend to be nearly unnoticeable in the vast majority of today's common usage scenarios. Some PCIe 4.0 drives can theoretically offer up to double the sequential transfer performance over that connection compared to 3.0, but real-world performance at tasks like loading games and applications will tend to be nearly identical, since the 3.0 drive, or in many cases even a SATA SSD, will be fast enough to keep up with the rate at which the rest of the system can process the data being loaded. For most usage scenarios, the only time you are likely to notice a difference might be when transferring or copying very large files. It is possible that we will see more real-world benefits as software like games starts optimizing for loading large amounts of data in the background during gameplay, but for now, the benefits of PCIe 4.0 tend to be limited.
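
For reference, the raw link bandwidth works out as follows. A quick sketch using the standard per-lane rates (actual throughput also depends on the devices at either end):

```python
# Approximate usable PCIe bandwidth per lane, per direction, in GB/s.
# PCIe 3.0: 8 GT/s with 128b/130b encoding; PCIe 4.0 doubles that.
per_lane_gbs = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, gbs in per_lane_gbs.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes}: ~{gbs * lanes:.1f} GB/s")
```

That's why a full x16 card still has ~15.8 GB/s on PCIe 3.0, while an x4 card drops to ~3.9 GB/s, which starts to hurt once the card is spilling out of VRAM.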

As for "bottlenecks", the main reason to consider that would be for not wasting money on a piece of hardware that would be overkill for another piece of hardware in the system. For example, if one is deciding between a $200 CPU that can potentially output 120fps in a particular game, or a $400 CPU that can put out 140fps, it probably won't matter all that much if their graphics hardware can only output 100fps in that game at the resolution and settings they are targeting. In that case, it would likely make more sense to go with the cheaper processor and put the money toward faster graphics hardware instead. The demands on the CPU and GPU can differ drastically from one game to the next though, and even in different parts of a single game, so as others have pointed out, "bottleneck calculator" sites tend to not be particularly useful.

For upgrading existing hardware, you can check whether you would see much benefit from a graphics card upgrade by simply lowering the resolution. If you are gaming at 1080p, for example, try dropping your resolution in-game to something like 720p or lower. With fewer pixels to render, you will be roughly simulating the kind of performance you might get at 1080p with a correspondingly faster card. If the framerate increases substantially, then you could potentially see similar results with a much faster card in that particular game. If the framerate doesn't increase much, though, then you may already be near the limits of what your other components, like the CPU and RAM, can push in that particular title. Since you appear to have an Intel Arc card, it's probably worth mentioning that you might experience some additional variability in performance, since this is Intel's first generation of modern dedicated cards and their driver software is still a work in progress.
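
If you log a couple of numbers while doing that test, the interpretation is mechanical. A minimal sketch; the FPS figures and the 1.5x threshold are my own rough rule of thumb, not a hard standard:

```python
# Interpret the resolution-drop test described above.
fps_native = 80    # FPS at your normal resolution (example number)
fps_lowres = 150   # FPS after dropping the render resolution (example number)

ratio = fps_lowres / fps_native
if ratio > 1.5:  # rough threshold; tune to taste
    print(f"{ratio:.1f}x speedup at low res: GPU-limited, a faster card should help.")
else:
    print(f"Only {ratio:.1f}x speedup: likely CPU/RAM-limited, a faster GPU won't help much.")
```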
 
Bottleneck calculators are pure snake-oil sales ads for whoever the sponsor happens to be. The problem with them is they try to apply a single number to an infinite number of variables, many of which are pure conjecture, as they are user thoughts or wants.

Play one game and the CPU isn't enough for the GPU. Play a different game and the GPU isn't enough for the CPU. Play a third game and you are RAM-bound. Which is the bottleneck? Are you just supposed to buy a 13900KS and a 4090 Ti with 128GB of RAM so that no game can claim a bottleneck on any one component? That's not reality.

Reality is that in some situations RAM speed is the bottleneck. In others it's RAM size. In others it's RAM timings. In others it's VRAM. In others it's CPU clock speeds. In others it's CPU cache. Etc., etc., etc. There's no end to the possibilities or probabilities as to what exactly constitutes a definable bottleneck, because it's as simple as playing a different game and the whole equation and its results change.

99% of bottleneck complaints start out with the same words 'I saw this video on YouTube and...'

Play your games. If you are happy with the results, then any imagined bottleneck does not exist. If you are not happy with the results, figure out what you'd require to make you happy. That's all that matters.

When I first built my prior PC, it had an i7-3770K and a GTX 970. The Windows performance calculator gave it a 7.9/8.0 (the only thing better at the time was a GTX 980). The screen said it was the fastest and strongest PC, that it would have no issues doing anything; it was as good as it gets. The day after the i7-4770K hit the streets, I was down to a 5.9/8.0 because my CPU was old, outdated, sluggish, and I could expect major slowdowns and issues. Really. At the bottom was a recommended upgrade CPU, the i5-4560. Really.

As it was, the i7-3770K was a far better CPU than the i7-4770K; I could hit 5.0 GHz at 1.4 V at 72°C, something far beyond what the i7-4770K was capable of. That's what bottleneck calculators are good for: making you think you need to spend money on new products which, realistically, you probably don't need.
 
Are you just supposed to buy a 13900KS and a 4090 Ti with 128GB of RAM so that no game can claim a bottleneck on any one component? That's not reality.

What's funny is that those bottleneck calculators would no doubt still say that you are bottlenecked somewhere. 🤣 Even worse is that people buy into that crap.

I'm really glad I'm not one of those "ZOMG I'm bottlenecked and losing 2 fps!" gamers. I actually saw a post yesterday where a guy said he was getting 143.97 fps on his 144 Hz display and wanted to know why.

🤣🤣🤣🤣🤣

Play your games. If you are happy with the results, then any imagined bottleneck does not exist. If you are not happy with the results, figure out what you'd require to make you happy. That's all that matters.

That's it exactly.