Just a few thoughts on the matter.
Bottlenecking itself is not a problem
This is the one thing I want to drive home to people who encounter this topic. Bottlenecking shouldn't be considered a problem, because there's
always a bottleneck somewhere in the system. A better way to look at this topic is to ask yourself whether you're getting the performance you want. And no, "as fast as possible" is not a performance requirement. Set a concrete requirement, e.g., 1080p at 60 FPS on "Medium" settings. If you're not meeting that goal, then you start looking for bottlenecks in the system and upgrading the components causing them until you meet or exceed the requirement.
Hardware utilization needs more analysis than just comparing a number to some threshold
If there's one sure sign that the CPU is holding the system back, it's that the CPU sits at or near 100% utilization. But there's a gotcha. Say this was on an 8-thread CPU and you upgrade to a 12-thread one. Utilization drops to about 66%, but you're still getting the same performance. What gives? The application was just barely saturating the 8-thread CPU with things to do. Adding more threads doesn't help because the application doesn't have any more work to offer beyond an 8-thread average.
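The arithmetic behind that is simple. Here's a tiny sketch of it (the numbers are illustrative, not taken from any real profiler):

```python
def utilization(busy_threads: float, total_threads: int) -> float:
    """Average CPU utilization (%) when an app keeps `busy_threads`
    threads' worth of work in flight on a `total_threads` CPU."""
    return min(busy_threads, total_threads) / total_threads * 100

# The app only ever generates ~8 threads of work, no matter the CPU.
print(utilization(8, 8))   # → 100.0  (8-thread CPU: saturated)
print(utilization(8, 12))  # ~66.7    (12-thread CPU: same work, lower %)
```

Same workload, same performance; only the denominator changed.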
This is also the source of some confusion when people notice their GPU utilization is pretty low (note: just as with the CPU, a GPU at or near 100% utilization is an indication of a bottleneck somewhere), yet their CPU utilization is also pretty low, like < 20%. What's often happening is that the work isn't spread evenly: a single thread can be pegged at 100% while the rest sit idle, so the average across all threads looks low even though that one thread is the bottleneck.
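A quick sketch of how averaging hides a saturated thread (the per-thread numbers below are made up for illustration):

```python
# Hypothetical per-thread utilization (%) on a 16-thread CPU:
# thread 0 is pegged, the other 15 are nearly idle.
per_thread = [100.0] + [2.0] * 15

average = sum(per_thread) / len(per_thread)
print(average)           # → 8.125  (looks like the CPU is barely working)
print(max(per_thread))   # → 100.0  (but thread 0 is the bottleneck)
```

A monitoring tool that only shows the average would report ~8% and hide the problem entirely.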
If you've ever wondered why the original Crysis seemingly "crushes" modern platforms, with no system able to push it much past, say, 80 FPS, it's because the game was designed with a super-fast single-core CPU in mind. A future that never happened.
While CPUs can bottleneck the GPU, the GPU cannot bottleneck the CPU
This is another big point I want to get across. The only way for the GPU to work is if the CPU is giving it work. The GPU can't draw a window if it doesn't know where to put it, what its size is, what colors the borders should be, etc. So if the CPU is too busy to give work to the GPU, the GPU stalls because it has nothing to do.
On the CPU end, if the GPU is still busy by the time the CPU wants to send it more work, the CPU either queues that work up to resubmit later or drops it and continues on. The CPU is not going to stall just because the GPU is busy; the CPU has other things to do! In the context of games, though, games tend to run on a fixed interval at which work has to be done. Once that work is done, the CPU uses the free time to send the GPU things to render. You could argue that a too-busy GPU is a bottleneck because the CPU can't spend all of that free time feeding it, but I argue that it isn't, for these reasons:
- The relationship established above between the CPU and GPU: work only flows from the CPU to the GPU, never the other way around
- The CPU already did everything the application required of it. Technically speaking, rendering graphics is not a requirement for the game to run; it's only a benefit to the human.
- To elaborate further: say the game wants the CPU to perform all of its input, logic, AI, etc. processing every 10 ms (100 Hz). If the GPU is rendering at 30 FPS, the game is still going to run at 100 Hz. If a too-busy GPU really did hold the CPU back from doing more with the game, the game itself would slow down.
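That decoupling can be sketched as a fixed-timestep loop. This is a minimal model, not any real engine's API; the 100 Hz tick and the `gpu_busy` flag are illustrative assumptions:

```python
TICK = 0.01  # 10 ms simulation step → 100 Hz

def update_input_logic_ai():
    pass  # placeholder for the game's real per-tick work

def run(sim_seconds: float, gpu_busy: bool) -> tuple[int, int]:
    """Simulate `sim_seconds` of game time; return (sim_ticks, frames_rendered)."""
    total_ticks = round(sim_seconds / TICK)
    frames = 0
    for _ in range(total_ticks):
        update_input_logic_ai()   # always runs, every tick, GPU or no GPU
        if not gpu_busy:          # non-blocking submit: skip if GPU has no room
            frames += 1
    return total_ticks, frames

# The simulation ticks at 100 Hz whether or not the GPU keeps up:
print(run(1.0, gpu_busy=False))  # → (100, 100)
print(run(1.0, gpu_busy=True))   # → (100, 0)
```

In both cases the CPU completed 100 ticks of game logic; only the rendered-frame count changed, which is exactly why a busy GPU doesn't slow the game itself down.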