FPS starts with the CPU, not just the GPU. It takes a certain amount of time for the CPU to run the game code for each frame: placing objects, resolving their positions and dimensions, running AI, NPCs and all the other stuff that goes into a frame. The number of times per second the CPU can do that is your FPS ceiling.
That data gets shipped to the GPU, which takes all that frame info, builds the geometry, places the objects where the CPU said they go, shades and textures everything according to detail levels, then renders the finished frame at your resolution. The number of times per second the GPU can do that is the FPS you actually get on screen and see in the little FPS counters.
What the CPU can output and what the GPU can output are almost always two different numbers. The CPU might be shipping 100 fps worth of frame data, but if detail settings or resolution are too high, the GPU might only put out 30 fps. Lowering details or resolution means less work for the GPU per frame, so frames get finished faster and you see higher FPS on screen.
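The math behind that is just frame time: FPS is one second divided by how long a single frame takes to finish. A quick sketch (the millisecond numbers here are made up for illustration):

```python
# FPS = 1 second / time per frame. Shorter frame times = higher FPS.
def fps_from_frame_time(frame_time_ms):
    return 1000.0 / frame_time_ms

# Hypothetical GPU frame times: ~33 ms at high settings, ~10 ms after
# lowering details/resolution.
print(fps_from_frame_time(33.3))   # roughly 30 fps
print(fps_from_frame_time(10.0))   # 100 fps
```

That's why a settings drop that shaves even a few milliseconds off each frame shows up as a noticeable FPS jump.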
But if the CPU only delivers 30 fps worth of frames, that's all the GPU can output at best, no matter how low the detail levels or resolution go. The GPU cannot put more frames on screen than it receives.
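Put together, the on-screen rate is simply capped by the slower of the two stages. A toy sketch of that rule (the fps numbers are hypothetical):

```python
def onscreen_fps(cpu_fps, gpu_fps):
    # The GPU can't draw frames it never receives, and the CPU
    # can't push the GPU past its own render rate, so the slower
    # stage wins.
    return min(cpu_fps, gpu_fps)

print(onscreen_fps(100, 30))   # GPU-limited: 30 fps on screen
print(onscreen_fps(30, 144))   # CPU-limited: still 30 fps on screen
```

In the CPU-limited case, lowering graphics settings changes nothing on screen, which is the situation described below.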
And that's what it seems you are running into. Your Ryzen is running sub-optimal boost clocks, and has the added disadvantage of slower-than-optimal RAM, which drags the Infinity Fabric clock (FCLK) down = slower communication between the cores and memory.
I'd look into DRAM Calculator for Ryzen (you need Thaiphoon Burner too) from guru3d.com and maximize what your RAM can do. For dual channel, the sticks need to be in the A2/B2 slots.
I'd also look into CPU temps and lowering VID as much as possible. Ryzens start dropping MHz on a core-by-core basis past 60°C, so getting temps and voltages down helps it boost higher.
Usage percentages are misleading. They don't tell you how much of the CPU/GPU is truly "used", only how much of certain resources (cores, core bandwidth, memory controller, PCIe lane bandwidth, etc.) is busy. There's a difference. Since both are well below 100%, there's no hard bottleneck here, just sub-optimal performance from the CPU = lower FPS.