+1 to your post. The problem with the FX is IPC (instructions per clock cycle). At the time, the FX had roughly 66% of the IPC of a 3rd-gen Intel core. Combined with everything gained since, that puts your 9590 at roughly a 400% single-thread disadvantage against a Ryzen 5600X, and that matters a lot for FPS.
The CPU takes the game code and pre-renders it into a package of frame instructions for the GPU: every object, size, detail, background, AI routine, pre- and post-lighting effect, everything the GPU needs to create the frame picture. Every single step is an instruction. The lower the CPU's instruction throughput, the longer it takes to build a frame, and the fewer frames per second it can ship to the GPU.
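A toy sketch of that relationship: frame prep time depends on how many instructions a frame needs and how fast the core can retire them (IPC × clock). All the numbers below are made up for illustration, not benchmarks of any real chip.

```python
def cpu_fps(instructions_per_frame: float, ipc: float, clock_hz: float) -> float:
    """Frames per second the CPU can prepare: fps = 1 / frame_time,
    where frame_time = instructions / (IPC * clock)."""
    instructions_per_second = ipc * clock_hz
    frame_time = instructions_per_frame / instructions_per_second
    return 1.0 / frame_time

# Same hypothetical workload on two cores at the same clock, one with
# ~66% of the other's IPC -- the slower core preps ~66% of the frames.
workload = 50e6  # made-up instruction count per frame
fast = cpu_fps(workload, ipc=4.0, clock_hz=4.0e9)         # 320.0 fps
slow = cpu_fps(workload, ipc=4.0 * 0.66, clock_hz=4.0e9)  # 211.2 fps
```

The point being that at identical clocks, frame prep rate scales directly with IPC.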
Resolution and detail levels are all on the GPU; they'll raise or lower the FPS output, but it can't exceed whatever the CPU gives it. If the CPU can only send 45fps worth of frames to the GPU, it wouldn't make any difference if you had 3090s in SLI, you'd still max out at 45fps at any resolution or detail level.
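That cap can be sketched in one line: the displayed frame rate is the minimum of what the CPU can prepare and what the GPU can render. The fps figures below are illustrative only.

```python
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Displayed frame rate can't exceed the slower of the two stages."""
    return min(cpu_fps, gpu_fps)

# CPU stuck at 45 fps: a GPU capable of 200 fps at a given resolution
# still ships 45. Raising resolution lowers the GPU's rate, but nothing
# changes on screen until the GPU becomes the slower stage.
print(delivered_fps(45, 200))  # 45  (heavily GPU-overprovisioned)
print(delivered_fps(45, 60))   # 45  (still CPU-bound)
print(delivered_fps(45, 30))   # 30  (now GPU-bound)
```

This is why pairing a slow CPU with a faster GPU only moves the line at which the GPU stops being the bottleneck.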
Modern games are highly instruction intensive, with vastly more instructions just for things like AA, photo-realism, shadows, and filtering, which didn't exist when the 9590 was new, as there wasn't a GPU that could realistically render that level of graphical intensity. You were looking at cards like the GTX 690/GTX 780 Ti at best.
Fun project, for sure, but don't expect miracles with FPS. It'd be best paired with an RTX 3080 and a 4K monitor, where FPS output is naturally lower due to resolution, or else a 1080p 60Hz display.
I assume the 45fps is based on image size per second vs. either GPU memory clock or PCIe bottlenecking?