First reaction: what's the point of running Raspberry Pis in a cluster?
Or Doom inside a PDF?
As a parent and occasional CS lecturer, I'd say a lot of it is about encouraging critical thinking and doing your own testing, instead of just adding to a giant pile of opinions based on other opinions influenced by who knows whom.
It's also about acknowledging that the bottlenecks and barriers in gaming performance are not constant, but have shifted and moved significantly. And a lot of the current discussion is about seeking new escapes from the increasingly sub-linear returns on linear improvements.
For the longest time Intel pushed top clocks as the main driver of gaming performance. And it wasn't totally wrong, especially since single-core CPUs were the norm for decades and single-threaded logic remains much simpler to implement.
I bought big Xeons years ago, because they turned incredibly cheap when hyperscalers started dropping them. My first Haswell 18-core E5-2696 v3 was still €700, but at that time I needed a server for my home-lab and there was nothing else in that price range, especially with 44 PCIe lanes and 128GB RAM capacity.
Those were the very same cores you could also find in their desktop cousins, but typically at lower turbo clocks, because they had to share their power budget with up to 21 siblings, and because it makes very little sense to run server loads at a few high clocks: you want to sit very near the top of the CMOS knee for the optimum ratio of compute to electrical power.
Of course, when you use them in a workstation with a mix of use cases, trading some of that efficiency for higher clocks makes a lot more sense, which is why I was happy to use the Chinese variants of these E5 Xeons, which give you a bit more turbo headroom.
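To make that knee argument a bit more concrete, here is a tiny back-of-the-envelope sketch. The voltages and clocks are made-up illustrative numbers, not measurements from these chips; the point is only that dynamic CMOS power scales roughly as C·V²·f, and the voltage has to rise once you chase clocks past the knee, so perf per Watt falls off quickly.

```python
# Rough sketch with assumed, illustrative numbers (not measured data):
# dynamic CMOS power scales roughly as P ~ C * V^2 * f, and V must rise
# with f beyond the knee, so high turbo clocks cost power much faster
# than they buy performance.

def dynamic_power(freq_ghz: float, volts: float, c_eff: float = 1.0) -> float:
    """Relative dynamic power, P ~ C * V^2 * f (arbitrary units)."""
    return c_eff * volts ** 2 * freq_ghz

# Hypothetical operating points for a single core:
near_knee = dynamic_power(freq_ghz=2.8, volts=0.90)   # all-core server clock
high_turbo = dynamic_power(freq_ghz=3.8, volts=1.20)  # single-core turbo

print(f"power ratio: {high_turbo / near_knee:.2f}x")   # ~2.4x the power...
print(f"clock ratio: {3.8 / 2.8:.2f}x")                # ...for ~1.36x the clock
```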
I upgraded it to a Broadwell E5-2696 v4 somewhat more recently, because that was €160 for the CPU, a bit of leftover thermal paste and one hour for the swap job. I guess it's my equivalent of putting a chrome exhaust on a V8 truck or giving your older horse a new saddle: not strictly an economic decision, but not bad husbandry, either.
It also runs Windows 11 (IoT) now, and VMs run better, because Broadwell actually brought some pretty nifty ISA extensions aimed at cloud use.
Synthetic benchmarks put both chips at very similar performance and energy use to the Ryzen 7 5800X3D that used to sit next to it (later replaced by a Ryzen 9 5950X), both systems with 128GB of ECC RAM.
Progress during the eight years between that first Xeon and the Ryzen was measured mostly in per-core performance, which improved to the point where 8 Zen cores could do the work of 18/22 Xeon cores. The top single-core turbo of a Xeon core still reached around 50% of what a Zen3 core can do, but all-core turbo on the Xeons drops much lower than on Zen, and that's where the extra cores come in: essentially they give you 18/22 E-cores in modern parlance, at 110 Watts of actual max power use (150 Watt TDP)... and that's where modern E-cores will show their progress, probably 1/4 the power consumption at iso performance.
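To sanity-check those ratios, here's a quick back-of-the-envelope calculation. The per-core factor is my own rough assumption for all-core clocks, not a benchmark result:

```python
# Assumed round numbers, not benchmark results:
# if one Xeon core at all-core turbo delivers ~40-50% of a Zen3 core,
# 18-22 such cores land in the same multi-core ballpark as 8-10 Zen3 cores.

xeon_cores = 22
rel_per_core = 0.45          # assumed Xeon-vs-Zen3 per-core throughput at all-core clocks
zen3_equivalent = xeon_cores * rel_per_core
print(f"~{zen3_equivalent:.1f} Zen3-core equivalents")     # ~9.9

# Same idea for power: 110 W actual across 22 cores is about 5 W per core,
# which is the figure modern E-cores would be competing against.
watts_per_core = 110 / xeon_cores
print(f"~{watts_per_core:.1f} W per core at full load")
```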
With the Xeon at around €5000 retail when new vs. €500 for Zen, these numbers are another nice metric for progress.
But since IT experimentation is both my job and my hobby, I recently used it to test just how much of a difference there was between those two systems when running modern games on a modern GPU: equal multi-core power, but vastly different peak per-core performance. In other words:
just how well had games adapted to an environment that, for some years now, has given them more but weaker cores on consoles (due to manufacturing price pressure)?
And while I didn't do this exhaustively, my findings were very similar across both systems: modern game titles spread the load so much better that these older CPUs with extra cores are quite capable of running games at acceptable performance. And no, it doesn't have to be a 22-core part, even if that one just happens to have 55MB of L3 cache, which makes it almost a "3D" variant. It also has 4 memory channels...
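If you want to eyeball the load-spreading part yourself, this is roughly how I'd watch it: a minimal sketch using psutil to sample per-core utilization while a game runs. The threshold, interval and function names are arbitrary choices of mine, not anything a particular tool prescribes.

```python
# Minimal sketch: log per-core CPU utilization while a game is running,
# to see how many of the 18/22 cores actually get non-trivial work.

import time
import psutil

def sample_per_core(duration_s: int = 60, interval_s: float = 1.0) -> None:
    """Print per-core utilization summaries once per interval."""
    end = time.time() + duration_s
    while time.time() < end:
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        busy = sum(1 for u in per_core if u > 25.0)   # arbitrary "doing real work" cutoff
        print(f"busy cores: {busy:2d}/{len(per_core)}  "
              f"max: {max(per_core):5.1f}%  avg: {sum(per_core) / len(per_core):5.1f}%")

if __name__ == "__main__":
    sample_per_core()
```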
And that also explained why, two to four years ago, "gamer X99" mainboards were astonishingly popular on Aliexpress: X99 and C612 are pretty much the same hardware, and like those Xeons, it's quite likely these parts were given a second life serving gamers who couldn't afford leading-edge hardware and were rather well served by what they got.
So the point: you may do better by not just jumping for the leading edge.
But because that takes the means to explore, naive buyers may not be able to exploit it and end up paying the price elsewhere.
Did I mention (enough) that Windows 11 IoT runs quite well on these chips? Still nine years of support and very low risk of being Co-Piloted.