It would help to clarify fps.
The CPU pre-renders a frame. It does that by running the game code, at a speed set by its IPC and clock speed. It takes a certain amount of time to place every object, give things dimensions, run the AI, handle interactivity and so on. The number of times it can complete that work in one second is the fps limit.
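A quick sketch of that relationship, with purely hypothetical numbers (the 8 ms is an assumption, not a measurement):

```python
# If the CPU needs 8 ms of game-code work per frame (hypothetical),
# it can finish at most 1 / 0.008 = 125 frames each second.
frame_time_s = 0.008              # time to simulate one frame (assumed)
cpu_fps_limit = 1.0 / frame_time_s
print(cpu_fps_limit)              # 125.0
```

Halve the per-frame work and the limit doubles; that's all the fps limit is.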
That info gets sent to the GPU, which then finishes rendering the frame into a picture, adding colour, depth, shadows, 3D and so on. How long that takes depends on detail levels, post-processing effects and resolution, and that determines the fps you actually get onscreen, in the counter.
So if the CPU can do 100 fps and the GPU is set to ultra, the GPU will either keep up or fall short. If it falls short, lowering details, changing resolution or cutting post-processing can raise onscreen fps, but you're still capped by the 100 fps limit from the CPU.
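The bottleneck logic above boils down to one line: the fps you see is whichever side is slower. A toy model, with made-up numbers:

```python
# Toy model of the CPU/GPU pipeline described above.
# All numbers are hypothetical illustrations, not benchmarks.

def onscreen_fps(cpu_fps_limit: float, gpu_fps: float) -> float:
    """The frame rate you see is capped by whichever side is slower."""
    return min(cpu_fps_limit, gpu_fps)

# CPU can prepare 100 frames/s; GPU on ultra only manages 70 -> GPU-bound.
print(onscreen_fps(100, 70))   # 70

# Lower the details so the GPU manages 130 -> now CPU-bound at 100.
print(onscreen_fps(100, 130))  # 100
```

Turning settings down only helps until the GPU number passes the CPU number; after that you're stuck at the CPU limit.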
You changed CPUs within the same family, nothing else, so the only change is slightly higher IPC and slightly higher clock speeds. How well that translates is determined by the game and its code: it might be no gain at all, or a few fps, but it won't drastically move the limit. And if your detail settings are high enough that you never reach the fps limit anyway, you'll see basically no change, because the fps output of two such similar CPUs isn't much different.
The i7 can make a difference, but mostly in multiplayer online games or single-player games with mods, where the thread count available beyond 4 allows higher fps. You won't see much difference in simpler games, as they're optimised for 4 threads or fewer.
Speeds don't usually translate 1:1. A 10% faster CPU may or may not average an extra 2-3 fps, and with game code being as dynamic as it is, plus the constant changes in view and detail, 2-3 fps gets lost in the noise. You'd only spot a change that small in a side-by-side benchmark comparison with averages and 0.1% lows.
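Here's why the scaling isn't 1:1, with hypothetical numbers: only part of each frame's time is CPU game-code work that actually gets faster, so a 10% CPU speedup moves the total by much less.

```python
# Hypothetical frame budget: 4 ms of CPU game-code work plus 6 ms of
# everything else (GPU wait, sync, etc.). Only the CPU part speeds up.
cpu_ms = 4.0
other_ms = 6.0

base_fps = 1000 / (cpu_ms + other_ms)           # 10 ms frame -> 100 fps
faster_fps = 1000 / (cpu_ms / 1.1 + other_ms)   # CPU portion 10% faster

print(round(base_fps))                  # 100
print(round(faster_fps - base_fps, 1))  # 3.8
```

So a 10% faster CPU buys roughly 4 fps in this made-up case, and less still if the game is GPU-bound to begin with.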