You're going in two different circles and trying to get a single result.
The CPU sets the fps limit according to the game code.
The GPU lives up to that limit, or falls short, depending on detail levels and resolution.
Two different things.
The 2080S isn't bottlenecking the CPU, or vice versa. Can't happen. Doesn't work that way.
The CPU pre-renders the frames, every single one. It takes the game code and sets every object, relationship, collision, variable, particle, etc. into that frame. It can only do this so many times per second, and that's the fps limit. That gets sent to the GPU.
The GPU finishes rendering what the CPU sent it. It adds colors, depth, clarity, etc., according to whatever detail settings you chose at whatever resolution you specify.
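If it helps, here's a toy Python sketch of that split (not real engine code; the function names and the "world" data are made up for illustration): the CPU step does the simulation work and produces draw commands, the GPU step just paints whatever it was handed.

```python
def cpu_prepare_frame(world):
    """CPU side (toy): update every object, collision, particle, etc.,
    then hand the GPU a list of draw commands for this one frame."""
    world["tick"] += 1  # one simulation step = one frame's worth of CPU work
    return [("draw", obj) for obj in world["objects"]]

def gpu_render_frame(draw_commands, resolution, detail):
    """GPU side (toy): cost scales with pixel count and detail settings,
    but it can only paint what the CPU already sent."""
    pixels = resolution[0] * resolution[1]
    return f"{len(draw_commands)} objects at {pixels} px, detail={detail}"

world = {"tick": 0, "objects": ["player", "tree", "particle"]}
commands = cpu_prepare_frame(world)                        # CPU's job
frame = gpu_render_frame(commands, (1920, 1080), "ultra")  # GPU's job
```

The point of the split: how often `cpu_prepare_frame` can run is the fps limit; `gpu_render_frame` only decides how pretty each of those frames looks.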
Some games are so intense, or have so much going on, that you'll get a mediocre fps limit; some games are so simple that you get an outstanding fps limit. The 2080S is strong enough to paint that picture up to that limit, sometimes beyond, sometimes not, and the onscreen result is capped by the refresh rate of the monitor.
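In other words, what you see onscreen is just the slowest link in the chain. A one-liner makes the idea concrete (the numbers below are made-up examples, not benchmarks):

```python
def onscreen_fps(cpu_fps, gpu_fps, monitor_hz):
    """The fps you actually see is capped by the slowest link:
    CPU frame prep, GPU rendering, or monitor refresh (with vsync on)."""
    return min(cpu_fps, gpu_fps, monitor_hz)

onscreen_fps(240, 300, 144)  # simple game: the 144 Hz monitor is the cap
onscreen_fps(60, 200, 144)   # intense game: the CPU's 60 fps limit is the cap
```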
You apply generic Ultra settings in game. Some of those settings are CPU-bound: they affect what the CPU has to work with in the game code, like name tags, floating damage, etc. The rest are GPU graphics settings, like AA, fog, or lighting.
Turn on 4K DSR and the GPU renders in 4K, then scales it down to 1080p for output: much higher usage. Turn off DLSS and that can affect the fps onscreen. Turn on ray tracing and that can have a massive effect on the GPU; it's not hard to drop 100 fps of output in Minecraft.
Before spending money, I'd suggest you look into your settings, enable or disable GeForce Experience optimizations, etc. There's almost always a cure for stutters, and it's usually in the user-defined settings, either in game or global.