I'm not quite following you here. Yes, the CPU plays a part in how many FPS can be output by sending as much data (pre-rendered frames) to the GPU as possible, but to suggest the GPU (or a GPU upgrade) has nothing to do with that, or doesn't increase FPS output by itself, is a little misleading.
Okie Lucy, I 'splain. 😅
You click the game's exe. The CPU sends a query to storage and asks for the info, which gets moved to RAM. The CPU uses that info, along with the in-game detail settings, to create a list of instructions that includes all the data on objects, dimensions, vectors, AI computations, etc. that make up the entire frame. The number of frames the CPU can put together in 1 second is the maximum possible fps.
Those data packets are streamed to the GPU. The GPU reads those instructions, creates a wireframe, pre-renders it according to the detail settings, adds all the pertinent data such as colors, lighting, etc., then does the final render at the output resolution. The number of times per second the GPU can do that is the fps you see.
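If it helps to see it as numbers, here's a really simplified sketch in Python (all the timings are made up just to illustrate the idea, this is not how an engine or driver actually works): the CPU and GPU work as a pipeline, so whichever stage takes longer per frame sets the pace.

```python
# Toy model of the frame pipeline. The CPU prepares each frame's
# instructions, the GPU renders them, and because the two run in
# parallel (pipelined), whichever stage takes longer per frame sets
# the pace you actually see. All timings here are invented.

cpu_ms_per_frame = 2.0   # pretend the CPU needs 2 ms to build a frame's instructions
gpu_ms_per_frame = 4.0   # pretend the GPU needs 4 ms to render that frame at ultra

cpu_fps_cap = 1000.0 / cpu_ms_per_frame                        # 500 fps the CPU could feed
gpu_fps_cap = 1000.0 / gpu_ms_per_frame                        # 250 fps the GPU can draw
on_screen   = 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

print(cpu_fps_cap, gpu_fps_cap, on_screen)   # 500.0 250.0 250.0 -> GPU bottleneck
```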
If the CPU can send 500 fps but the GPU at ultra can only render 250 fps, that's considered a GPU bottleneck; lowering the detail settings to low may allow the GPU to render 400 fps. The GPU doesn't 'increase' fps as such, it just allows more through at a lowered setting.
If the CPU can only send 100 fps, at ultra the same GPU can easily render all 100 fps, but lowering the detail levels will have little impact on the results; only the details that are CPU-bound will raise the fps slightly. So even at low settings you're only getting 125 fps, even though the GPU is capable of 400 fps. That's considered a CPU bottleneck.
The GPU can only render what the CPU sends. It can't increase fps at all; that number is set by the CPU. The GPU only allows more or fewer of those frames to be rendered, according to the detail levels and resolution.
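In other words, the on-screen number boils down to a one-liner (ignoring buffering, driver overhead, frame pacing and all the rest):

```python
def displayed_fps(cpu_fps: float, gpu_fps: float) -> float:
    """What ends up on screen is capped by whichever side is slower.

    cpu_fps: frames per second the CPU can prepare at the current settings.
    gpu_fps: frames per second the GPU can render at the current settings
             and resolution. Real systems add buffering and overhead; this
             is just the core idea.
    """
    return min(cpu_fps, gpu_fps)

print(displayed_fps(500, 250))   # 250 -> GPU bottleneck (lowering details helps)
print(displayed_fps(100, 400))   # 100 -> CPU bottleneck (lowering details barely helps)
```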
OP has a 12600K. It's going to set a limit on fps, which OP is stating as closer to 500-600 fps. That's at low details; ultra details put it closer to 220 fps with bloom and all the other lighting conditions. That's what the GPU gets. Doesn't matter if it's a 3060 Ti or a 4080, the GPU gets 220 fps. You could run 4090s in SLI and it wouldn't change the fact that the GPU only has 220 fps to work with. Put that CPU with a 750 Ti and it still gets 220 fps, but the 750 Ti can only render 60 fps, so you now get 60 fps on screen.
Most assume that a stronger GPU upgrade will raise fps, but that's only true if the CPU is actually sending more frames than the original GPU could render. So if the CPU was sending 500 fps and the 3060 Ti could only render 220 fps, then a 4080 should be able to render a higher amount, like 400 fps or more. But if the CPU is only sending 220 fps, the 3060 Ti is capable of 250 fps, and the 4080 is capable of 400+ fps, you're not going to see any fps change, because both cards only received 220 fps.
The difference between the cards is that at 4K, the 4080 would still be rendering closer to 220 fps, whereas the considerably weaker 3060 Ti would be struggling at 120 fps, regardless of whether the CPU sent 220 fps or 500 fps.
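Plugging in the rough numbers from this thread (the per-card caps are the ballpark figures from the examples above, not benchmarks):

```python
# Rough illustration with the numbers from this thread: the 12600K feeds
# ~220 fps at ultra, and the per-card render caps are the ballpark figures
# mentioned above. What you see on screen is min(CPU cap, GPU cap).

cpu_fps_at_ultra = 220

gpu_caps = {                     # (card, resolution) -> fps the card can render at ultra
    ("750 Ti", "1080p"): 60,
    ("3060 Ti", "1080p"): 250,
    ("4080", "1080p"): 400,
    ("3060 Ti", "4K"): 120,
    ("4080", "4K"): 220,
}

for (card, res), gpu_cap in gpu_caps.items():
    on_screen = min(cpu_fps_at_ultra, gpu_cap)
    limiter = "CPU" if gpu_cap >= cpu_fps_at_ultra else "GPU"
    print(f"{card} @ {res}: {on_screen} fps on screen ({limiter} bottleneck)")

# 750 Ti @ 1080p: 60 fps on screen (GPU bottleneck)
# 3060 Ti @ 1080p: 220 fps on screen (CPU bottleneck)
# 4080 @ 1080p: 220 fps on screen (CPU bottleneck)  <- no gain from the upgrade
# 3060 Ti @ 4K: 120 fps on screen (GPU bottleneck)
# 4080 @ 4K: 220 fps on screen (CPU bottleneck)
```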
Hope that helps a lil.