Question: CPU and GPU usage low in Starfield

NeoGunHero

I’ve seen lots of threads on this but could not find any definitive answers for my situation here.

GIGABYTE AORUS GAMING 5 WI-FI X470
Zotac RTX 2060 6GB
Ryzen 1600X
16GB DDR4 2666MHz RAM in dual channel
Installed on a WD Blue SN570 M.2 SSD
Windows 10 Pro

I'm playing Starfield on ALL LOW settings, and my usage for both the GPU and CPU isn't very high; they max out around 60-70%. I get pretty bad FPS in most areas. I understand the game is unoptimized, but the same thing happens in other games where my frame rate is lower than I'd expect.

In my head, I’m thinking “more CPU+GPU usage = better frames/performance”. Is that necessarily true?

For reference, I have all the latest drivers installed, including BIOS, chipset, CPU+GPU drivers, and Windows updates. I have Windows Game Mode turned off, power settings on High performance, and Nvidia power management set to Prefer Maximum Performance. I just did a clean reinstall, so there's no bloatware, just Steam open at all times, and I've done some tweaks to reduce CPU usage like disabling animations and sleep mode. AFAIK, the Ryzen 1600X maxes out RAM speed at 2666MHz, so I don't have XMP turned on because of crashing with it enabled. The RAM supports up to 3200MHz (Corsair Vengeance), but the CPU bottlenecks it.

So is my theory correct that usage should be closer to 90-100% for maximum performance? Or is the game running as well as it can on this hardware? Additionally, if you know any lesser-known tweaks/fixes/tips within Windows 10 to squeeze out even the smallest amount of performance, please let me know!

Thanks!
 
1) Non-X3D Ryzens are sensitive to RAM frequency and timings; some performance is being left on the table. A 2933MHz setting would be more optimal. The integrated memory controller on most 1000-series CPUs couldn't push much higher than that, so it's not surprising 3200MHz doesn't work. Go into the BIOS, manually change it to 2933, and see how it handles that.

2)"usages for both GPU and CPU aren’t very high. They max out around 60-70%."
While you can use those numbers for the gpu, that doesn't work well with the cpu.
IF a single cpu core is hitting its limit, then it doesn't matter what cpu usage reads as - it's misleading, even.
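
If you want to check for that, watch per-core load rather than the single overall percentage (Task Manager's logical-processor view shows the same thing). A minimal Python sketch, assuming the third-party psutil package is installed:

```python
# Minimal sketch: sample per-core CPU load for ~10 seconds while the game runs.
# Assumes psutil is installed (pip install psutil).
import psutil

for _ in range(10):
    # Percent load of each logical core over the last second.
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    overall = sum(per_core) / len(per_core)
    print(f"overall {overall:5.1f}% | per core {per_core}")
# If one core sits near 100% while the overall figure reads 60-70%,
# the game is limited by that core despite the "low" usage number.
```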

In my head, I’m thinking “more CPU+GPU usage = better frames/performance”. Is that necessarily true?
So is my theory correct that it should be closer to 90-100% for maximum performance?
No.
Too high utilization on a single core is actually a detriment. That's your 'pre-FPS', if you can call it that.
You don't really want a core to max out, particularly in areas heavy in CPU compute; if headroom is already low, then performance drops harder in those scenarios.

The GPU is the 'post-FPS'. It will either match the frame rate delivered by the CPU or come in lower, depending on your resolution and eye-candy settings, of course. It is never 'faster' than the CPU, as its 'turn' never comes first. Performance will still drop in heavy GPU scenes, but that isn't as big a deal as CPU-core-bound ones, IMO.
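
A rough way to picture that hand-off, as a toy model with made-up numbers (not an exact formula):

```python
# Toy model of the CPU/GPU hand-off: the displayed frame rate is roughly
# capped by whichever stage is slower. All numbers here are hypothetical.
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The CPU prepares each frame first; the GPU can only render what it's handed."""
    return min(cpu_fps, gpu_fps)

print(effective_fps(cpu_fps=55, gpu_fps=90))  # CPU-limited scene -> 55
print(effective_fps(cpu_fps=55, gpu_fps=40))  # GPU-limited scene -> 40
```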

It isn't always a CPU/GPU/other hardware thing, either. Sometimes it's just lousy software.


That's probably why you couldn't find a definitive answer.
 
To add to the explanation of what CPU/GPU utilization means: it only measures the amount of time in a second that the part spent not running the "idle" task. That makes the metric rather useless for anything beyond confirming the part is being used at all, since it doesn't take into account the clock speed or how much of the part's resources are actually being used.

So you can have 100% utilization at 1GHz if all the cores are running something. But if they bump up to 4GHz with no change in workload, the utilization drops to 25%.
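
Rough arithmetic for that, with a made-up fixed workload:

```python
# Toy arithmetic for the clock-speed point above; the workload figure is hypothetical.
work_cycles_per_second = 1.0e9  # fixed amount of work the game asks for each second

for clock_hz in (1.0e9, 4.0e9):  # 1 GHz vs 4 GHz
    utilization = 100 * work_cycles_per_second / clock_hz
    print(f"{clock_hz / 1e9:.0f} GHz -> {utilization:.0f}% utilization")
# 1 GHz -> 100% utilization
# 4 GHz -> 25% utilization
```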
 
