The clockspeed penalty is what makes it a useless metric for the X3D parts when comparing them to non-X3D. By the logic you're applying here, a 13900K would have higher IPC than a 13600K (as it has 50% more L3, which can be leveraged in lower-threaded workloads), but does saying that actually mean anything useful?
Yes, that is true. I think it's also true that the R9 7900X probably has higher IPC than the R9 7950X, on some workloads, because AMD leaves all 32 MiB of L3 enabled on their CCDs, giving the 7600X and 7900X more cache per core than CPU models with fully-occupied CCDs. So, if you're measuring multi-threaded IPC at a constant clockspeed that each CPU can hit, then I'd expect it to be a little higher for the 6-core and 12-core models.
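To make the cache-per-core point concrete, here's a back-of-envelope sketch. The core counts and L3 sizes are the publicly listed Zen 4 specs; the per-core split is just my own arithmetic, not anything AMD publishes:

```python
# Cache per core across the Zen 4 desktop lineup. Partially-populated
# CCDs (6 cores) keep the full 32 MiB of L3, so 6-core-per-CCD models
# end up with more L3 per core than 8-core-per-CCD models.
parts = {
    # name: (cores per CCD, CCD count, L3 per CCD in MiB)
    "7600X": (6, 1, 32),
    "7700X": (8, 1, 32),
    "7900X": (6, 2, 32),
    "7950X": (8, 2, 32),
}

per_core = {
    name: (l3 * ccds) / (cores * ccds)
    for name, (cores, ccds, l3) in parts.items()
}

for name, mib in per_core.items():
    print(f"{name}: {mib:.2f} MiB L3 per core")
```

So the 7600X and 7900X get about 5.33 MiB of L3 per core versus 4 MiB for the 7700X and 7950X, which is the gap that could nudge per-core IPC in cache-sensitive workloads.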
It's totally fine that you think it's useful; I just don't.
I think IPC is most useful when using it to characterize a microarchitecture. Minor variations within the model lineup aren't so interesting, particularly because IPC is a rather coarse metric: not highly predictive, and not totally separable from clock speed (i.e. due to sub-linear frequency scaling).
However, I think this has been a worthwhile exercise to remind us what factors can influence it and therefore how much/little to read into it. Heck, graphs like this should be reminder enough of that:
If you're thinking "hey, Zen 5 gives me 16% more IPC!", you'll be disappointed when you fire up Far Cry 6 and get only a 10% improvement. Or, maybe you'll underestimate the CPU's efficiency on workloads like the GeekBench one that delivered a 35% IPC improvement!
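A quick sketch of why a single headline IPC figure misleads. The instruction and cycle counts below are made up for illustration; only the 10% and 35% deltas come from the numbers above:

```python
# IPC = retired instructions / core cycles. Hold the instruction count
# and clock fixed, and a per-workload IPC gain shows up as a cycle-count
# reduction -- but the gain varies wildly by workload, so no single
# "IPC uplift" number predicts all of them.
def ipc(instructions, cycles):
    return instructions / cycles

INSTRUCTIONS = 1e9  # same work on both generations (hypothetical)

old_cycles = {"Far Cry 6": 500e6, "Geekbench": 400e6}
new_cycles = {
    "Far Cry 6": 500e6 / 1.10,  # 10% fewer cycles -> +10% IPC
    "Geekbench": 400e6 / 1.35,  # 35% fewer cycles per instr -> +35% IPC
}

uplifts = {}
for wl in old_cycles:
    uplifts[wl] = ipc(INSTRUCTIONS, new_cycles[wl]) / ipc(INSTRUCTIONS, old_cycles[wl]) - 1
    print(f"{wl}: {uplifts[wl]:+.0%} IPC")
```

Average the two and you land near a headline-style uplift, yet neither individual workload actually sees that number.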
So, it's a coarse metric of relatively limited value, but still interesting and worth looking at for tracking the development & sophistication of microarchitectures.