Yeah, 4K isn't really a practical target for high refresh-rate gaming. The graphics card needs to render four times as many pixels as it does at 1080p, and more than twice as many as at 1440p, so performance in pretty much any semi-recent, graphically demanding game will be dictated primarily by graphics card performance at that resolution, even with a very high-end card. The higher you push the resolution or settings, the less effect CPU performance has, since the CPU ends up waiting for the graphics card to finish rendering more often than not.
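If you want to sanity-check the pixel math yourself, here's a quick Python sketch (the resolutions are just the standard 16:9 render targets, nothing game-specific):

```python
# Pixel-count comparison between common 16:9 resolutions.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name}: {count:>9,} pixels "
          f"({count / pixels['1080p']:.2f}x 1080p, {count / pixels['1440p']:.2f}x 1440p)")

# 4K comes out to exactly 4.00x the pixels of 1080p and 2.25x the pixels of 1440p,
# which is why GPU load grows so much faster than CPU load as resolution goes up.
```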
In general, the graphics card handles rendering the visuals on your screen, while the CPU typically handles things you don't directly see, like physics calculations and animations, enemy AI, netcode, sound processing, general game logic and so on. Typically, increasing resolution does not increase demand on the CPU, only the GPU.
So, for example, if CPU A can perform its calculations 120 times per second in a particular game, and CPU B can perform them 150 times per second, that's not going to matter much if the graphics card can't keep up. If the graphics card were able to push 150 fps at 1080p, you would likely see that difference. At 1440p, however, the card might only be able to push 110 fps, in which case both processors would perform pretty close to one another, and if the card can only push 60 fps at 4K, either CPU would be waiting around for it to finish rendering much of the time, and there would be almost no difference between the two.
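To make the "whichever is slower sets the ceiling" logic concrete, here's a minimal sketch using the hypothetical numbers above (real frame times vary frame to frame, so this is obviously a simplification):

```python
# Toy bottleneck model: the delivered frame rate is roughly capped by whichever
# of the CPU or GPU takes longer per frame. The numbers are the hypothetical ones
# from the example above, not measurements.

def effective_fps(cpu_fps_cap, gpu_fps_cap):
    """The slower component sets the ceiling on delivered frame rate."""
    return min(cpu_fps_cap, gpu_fps_cap)

cpu_caps = {"CPU A": 120, "CPU B": 150}
gpu_caps = {"1080p": 150, "1440p": 110, "4K": 60}  # hypothetical GPU limits per resolution

for res, gpu_cap in gpu_caps.items():
    line = ", ".join(f"{cpu}: ~{effective_fps(cap, gpu_cap)} fps" for cpu, cap in cpu_caps.items())
    print(f"{res}: {line}")

# 1080p: CPU A: ~120 fps, CPU B: ~150 fps   <- the faster CPU actually shows up
# 1440p: CPU A: ~110 fps, CPU B: ~110 fps   <- GPU-limited, CPUs look identical
# 4K:    CPU A: ~60 fps,  CPU B: ~60 fps    <- fully GPU-limited
```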
Take, for example, the summary charts in this recent review at TechPowerUp for the Ryzen 3600 (they didn't do a full review of the 9400F, but it's included in the same charts). Scroll down a bit for the gaming results, and compare how the 3600 (or the 9400F) performs against the lowest-end models in terms of average frame rates when paired with a very high-end 2080 Ti...
https://www.techpowerup.com/review/amd-ryzen-5-3600/22.html
At 720p (which no one would use a 2080 Ti for, so it's more a synthetic benchmark than anything), there's a large difference in average frame rates between those CPUs and the low-end Ryzen 1200 at stock clocks, which only manages around 60% of their frame rates at that resolution. At such a low resolution, the graphics card ends up waiting for the CPU pretty much all the time. At 1080p this difference shrinks a bit, but most of the benchmarked games are still being limited by CPU performance. The gap shrinks even more at 1440p, and at 4K all the CPUs in the list see average frame rates within about 8% of one another, with most of the processors performing nearly identically. Plus, a 2070 SUPER, while decidedly high-end, is not quite up to the level of a 2080 Ti, so your 1440p performance at ultra settings would likely look rather similar to that 4K chart, with your 4K ultra results showing even less of a performance difference between CPUs.
Of course, different games can place different amounts of load on the CPU, and I suspect you would see more of a difference in games that heavily utilize more than four cores, like Battlefield V, where the additional cores of the 9400F would likely help smooth performance. And that's likely to become more common in future games.
In general though, if you want higher frame rates for high refresh-rate gaming, you're going to need to drop the resolution. Even 1080p with 150% resolution scale in Overwatch works out to over 25% more pixels being rendered than native 1440p (quick math on that below). Resolution scaling, DSR, supersampling, or whatever name it goes by, is a good-looking but very poor-performing form of AA, and is generally only worth using if you have lots of graphics performance to spare and are willing to take a big performance hit to smooth the visuals out a bit. On a 1080p 60Hz screen with a high-end card, it could make some sense. If you're targeting optimum performance on a high refresh-rate screen, though, use a less demanding form of AA instead.
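For reference, here's where that "over 25% more pixels" figure comes from, assuming the render scale applies to each axis (which is how those sliders normally work):

```python
# 150% render scale at 1080p multiplies each axis by 1.5, so the pixel count
# goes up by 1.5 * 1.5 = 2.25x relative to native 1080p.
base_w, base_h = 1920, 1080
scale = 1.5

scaled_pixels = (base_w * scale) * (base_h * scale)  # 2880 x 1620
native_1440p = 2560 * 1440

print(f"1080p @ 150% scale: {int(scaled_pixels):,} pixels")
print(f"Native 1440p:       {native_1440p:,} pixels")
print(f"Difference: {scaled_pixels / native_1440p - 1:.1%} more pixels than native 1440p")

# Prints roughly 26.6%, so the GPU is working harder than it would at native 1440p.
```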