You misunderstand.
The CPU is the source of FPS. It takes all the game code instructions (vectors, object dimensions, lighting, shadows, colors, movement, AI, etc.), packages all of that into a data packet, and ships it off to the GPU. The number of times it can do that in one second is your maximum FPS.
The GPU takes that packet of data and uses the provided instructions to build a wireframe model. Then it adds all the colors, lighting, post-processing effects, shadows, etc., renders that into a picture according to the game's detail settings and resolution, and ships the picture to the monitor. The number of times the GPU can do that in one second is your on-screen FPS.
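A toy model of that two-stage pipeline, purely for illustration (the function name and all the timings here are made up, not real benchmark numbers):

```python
# Toy model: on-screen FPS is capped by the slower of the two stages.
# All timings below are invented for illustration.

def max_fps(cpu_frame_ms: float, gpu_frame_ms: float) -> float:
    """The CPU sets the ceiling; on-screen FPS can't exceed either stage."""
    cpu_fps = 1000.0 / cpu_frame_ms   # data packets the CPU can ship per second
    gpu_fps = 1000.0 / gpu_frame_ms   # pictures the GPU can render per second
    return min(cpu_fps, gpu_fps)      # the slower stage decides what you see

print(max_fps(5.0, 8.0))   # GPU-bound: 125 FPS even though the CPU could feed 200
print(max_fps(12.0, 8.0))  # CPU-bound: ~83 FPS; a faster GPU wouldn't help here
```

The second call is the scenario described above: the CPU can't ship packets fast enough, so the GPU sits partly idle no matter how powerful it is.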
The more details, actual objects, AI, etc. the CPU has to deal with, the more time each frame takes, and FPS goes down. A single screen has a narrow point of view, the fewest objects, and the highest FPS. Extending the screen wider widens the point of view and adds more objects, so the FPS fed to the GPU goes down unless the CPU is strong enough to compensate for the additional data.
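A rough sketch of why the wider view lowers the CPU-side cap, assuming (numbers and per-object cost model invented for illustration) that CPU time per frame grows with the count of visible objects:

```python
# Invented numbers: a fixed per-frame cost plus a per-object cost on the CPU.
BASE_MS = 3.0         # fixed per-frame work (input, game logic, etc.)
MS_PER_OBJECT = 0.01  # cost to prepare each visible object for the GPU

def cpu_fps(visible_objects: int) -> float:
    """Maximum FPS the CPU can feed to the GPU at this object count."""
    frame_ms = BASE_MS + MS_PER_OBJECT * visible_objects
    return 1000.0 / frame_ms

print(round(cpu_fps(300)))   # narrow single-screen view: ~167 FPS cap
print(round(cpu_fps(1200)))  # triple-wide view, 4x the objects: ~67 FPS cap
```

Same game, same settings; the only thing that changed is how much of the world is in view.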
The 7950X and 7950X3D are the same CPU except the 3D version has a lot more cache. If the game can take advantage of the additional cache, if it's coded to work that way, then FPS goes up a lot. If it's not coded like that and can't take advantage of the additional cache, then FPS is the same as a plain 7950X; there's no realistic difference between the CPUs.
Unless that particular game has been reviewed and tested with both CPUs, side by side, you won't know whether it takes advantage of the cache. Either way, either CPU will be fine; the 3D might be better, but it has just as much chance of not being. Coin toss.
Resolution is just the total pixel count of the picture (pixels per inch is density, a property of the monitor, not the image). A 1080p (1920x1080) picture shows exactly the same scene as a 4K (3840x2160) picture, same picture, just fewer pixels. A wide-angle point of view from multiple monitors or a super widescreen is not the same picture as a single monitor: there are a lot more objects and a lot more involved, regardless of the actual resolution.
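The pixel arithmetic behind that comparison (the numbers here are just the standard resolution dimensions multiplied out):

```python
# Resolution is total pixel count: 4K is exactly 4x the pixels of 1080p,
# but it frames the same scene, just sampled more finely.
w1080, h1080 = 1920, 1080
w4k, h4k = 3840, 2160

print(w1080 * h1080)                    # 2073600 pixels at 1080p
print(w4k * h4k)                        # 8294400 pixels at 4K
print((w4k * h4k) / (w1080 * h1080))    # 4.0 -- same picture, 4x the pixels

# A triple-wide 1080p setup has fewer total pixels than a single 4K screen,
# yet it shows a much wider slice of the world, which is CPU work, not pixel work.
print(3 * w1080 * h1080)                # 6220800 pixels across three screens
```

That's the distinction: 4K gives the GPU more pixels of the same scene; triple-wide gives the CPU more scene to prepare.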