News RTX 5090D tested with nine-year-old Xeon CPU that cost $7 — it does surprisingly well in some games, if you enable MFG

Well, this is their play.

When they need to emulate x86 on their ARM CPUs, they'll just say, "Hey, look; even emulated, it runs pretty darn well on our SoCs!"

Not bad or good, but I'm pretty sure that's how it's going to play out.

Regards.
 
Interesting way to show some of the areas where CPU bottlenecks bite, but... without input-lag testing I feel like it doesn't tell the whole story. I wonder whether the FPS was high but the input lag was way off the charts?

Also kind of makes me wonder how CPU-bottlenecked everything is right now.
 
The i3-12100's specs aren't listed. Cores/threads? Is 168 FPS "awful" for the Xeon in CS2, when that game barely uses any of its 14 cores? It's great for people to try getting as many FPS as possible, but this is way beyond what's needed for a fun time. Maybe pro gaming would make a difference, but anyone getting one of these can still game on it, at least for some games.
 
Pushing fake frames isn't "doing well", it's faking it.
FSR 3.1 has been a lifesaver for me. I thought it was nonsense for already high-end cards, but it works great for me. Upscaling and FG are great for my low-end rig. It sometimes adds graphics artifacts in STALKER, but that's normal for a STALKER game.
I used to like and laugh along with the "fake frames" comments, but no more. I call it frame gen now. I still hate ray tracing in general, though.
 
TL;DR What's the point?
First reaction: what's the point of running Raspberry Pis in a cluster?

Or Doom inside a PDF?

As a parent and occasional CS lecturer, I'd say a lot of it is about encouraging critical thinking and doing your own testing instead of just adding to a giant pile of opinions based on other opinions influenced by who knows whom.

It's also about acknowledging that the bottlenecks and barriers in gaming performance are not constant, but have shifted and moved significantly. And a lot of the current discussion is about seeking new escapes from the diminishing returns on linear improvements.

For the longest time, Intel pushed top clocks as the main driver of gaming performance. And that wasn't totally wrong, especially since single-core CPUs were the norm for decades and single-threaded logic remains much simpler to implement.

I bought big Xeons years ago, because they turned incredibly cheap when hyperscalers started dropping them. My first Haswell 18-core E5-2696 v3 was still €700, but at that time I needed a server for my home-lab and there was nothing else in that price range, especially with 44 PCIe lanes and 128GB RAM capacity.

Those were the very same cores you could also find in their desktop cousins, but with typically lower turbo clocks, because each had to share its power budget with up to 21 others, and it made very little sense to run server loads on a few highly clocked cores: you want to sit very near the top of the CMOS efficiency knee for the best compute per watt.
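To illustrate that knee with a toy calculation (purely hypothetical voltage/frequency points, not measurements from any of these chips): dynamic CMOS power scales roughly with C·V²·f, so once extra frequency demands a big voltage bump, compute per watt drops quickly.

```python
# Toy illustration of the CMOS efficiency knee: dynamic power ~ C * V^2 * f,
# so performance per watt goes roughly as 1 / V^2.
# The V/f points below are made up for illustration, not measured values.
vf_points = [
    (2.6, 0.85),   # near the knee: a typical all-core server clock
    (3.6, 1.10),   # single-core turbo territory: needs much more voltage
]
for freq_ghz, volts in vf_points:
    power = volts**2 * freq_ghz        # arbitrary units, capacitance folded in
    print(f"{freq_ghz:.1f} GHz @ {volts:.2f} V -> perf/W ~ {freq_ghz / power:.2f} (a.u.)")
```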

Of course, when you use them in a workstation with a mix of use cases, trading some of that efficiency for higher clocks makes a lot more sense, which is why I was happy to use the Chinese variants of these E5 Xeons, which gave you a bit more turbo headroom.

I upgraded it to a Broadwell E5-2696 v4 somewhat more recently, because that was €160 for the CPU, a bit of leftover thermal paste and one hour for the swap job. I guess it's my equivalent of putting a chrome exhaust on a V8 truck or giving your older horse a new saddle: not strictly an economic decision, but not bad husbandry, either.

It also runs Windows 11 (IoT), and VMs run better, because Broadwell actually added some pretty nifty ISA extensions aimed at cloud use.

Synthetic benchmarks put both chips at very similar performance and energy use to the Ryzen 7 5800X3D that used to sit next to it (later replaced by a Ryzen 9 5950X), both systems with 128GB of ECC RAM.

Progress during the eight years between that first Xeon and the Ryzen was mostly in per-core performance, improving to the point where 8 Zen cores could do the work of 18/22 Xeon cores, while the top turbo scalar performance of a Xeon core sits at around 50% of what a Zen 3 core can do. All-core turbo on the Xeons is much lower still than on Zen; that's where the extra cores come in: in modern parlance they essentially give you 18/22 E-cores, at 110 watts of actual max power use (150 watt TDP)... and that's where modern E-cores show their progress, probably 1/4 the power consumption at iso-performance.
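To put rough numbers on that (a back-of-the-envelope sketch using only the approximate ratios quoted above, not measurements):

```python
# Rough per-core ratios implied by the figures above (all approximate):
# "8 Zen 3 cores do the work of 18-22 Xeon cores" -> multi-threaded ratio,
# "Xeon scalar turbo is ~50% of Zen 3"            -> single-threaded ratio.
for xeon_cores in (18, 22):
    mt_ratio = xeon_cores / 8    # implied Zen 3 per-core advantage under all-core load
    print(f"{xeon_cores} Xeon cores ~= 8 Zen cores -> ~{mt_ratio:.2f}x per core (MT)")

st_ratio = 1 / 0.5               # ~2x per core at single-thread turbo
print(f"single-thread turbo advantage: ~{st_ratio:.1f}x per core (ST)")

# The MT gap being larger than the ST gap is consistent with the Xeon's much
# lower all-core turbo clocks compared to Zen 3.
```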

With the Xeon at around €5000 retail when new vs. €500 for Zen, these numbers are another nice metric for progress.

But since IT experimentation is both my job and my hobby, I recently used it to test just how much of a difference there was between those two systems when running modern games on a modern GPU: roughly equal multi-core power, but vastly different peak per-core performance. In other words: just how well have games adapted to an environment that, on consoles, has been giving them more but weaker cores for years now (due to manufacturing price pressure)?

And while I didn't do this exhaustively, my findings were very similar: modern game titles spread the load so much better that these older CPUs with extra cores are quite capable of running games at acceptable performance. And no, it doesn't have to be a 22-core part, even if that one happens to have 55MB of L3 cache, which makes it almost a "3D" variant. It also has 4 memory channels...
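For anyone who wants to watch that load spreading themselves, a minimal sketch along these lines will do (just an illustration, not the tooling used for the tests above; it assumes Python with the psutil package installed). Run it while a game is active and see how many cores stay busy:

```python
# Minimal per-core load monitor: prints per-core utilization once a second
# and counts how many cores are meaningfully busy (> 25%).
import psutil

SAMPLES, INTERVAL = 30, 1.0   # roughly 30 seconds of sampling

for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=INTERVAL, percpu=True)
    busy = sum(1 for load in per_core if load > 25.0)
    print(f"busy cores (>25%): {busy:2d}/{len(per_core)}  "
          + " ".join(f"{load:5.1f}" for load in per_core))
```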

And that also explains why, two to four years ago, "gamer X99" mainboards were astonishingly popular on AliExpress: X99 and C612 are essentially the same hardware, and like those Xeons, it's quite likely these parts were given a second life serving gamers who couldn't afford leading-edge hardware and were rather well served by what they got.

So the point: you may do better by not just jumping for the leading edge.
But because that takes the means to explore, naive buyers may not be able to exploit it and will pay the price elsewhere.

Did I mention (enough) that Windows 11 IoT runs quite well on these chips? Still nine years of support and very low risk of being Co-Piloted.
 
For a gaming machine, perhaps the best Xeon of this vintage is the E5-1650v4. It turbos up to 4.0 GHz, has 6 cores, and 140W TDP. By contrast, the one in this article only turbos up to 3.3 GHz and has a 120W TDP. That said, I'd take a Coffee Lake i7-8700K over either.
 
For a gaming machine, perhaps the best Xeon of this vintage is the E5-2650v4. It turbos up to 4.0 GHz, has 6 cores, and 140W TDP. By contrast, the one in this article only turbos up to 3.3 GHz and has a 120W TDP. That said, I'd take a Coffee Lake i7-8700K over either.
slight typo: it's the 1640 v4 you refer to, not the 2640 v4...

But it very much depends on the game, and those "turbo optimized" SKUs may not give you the real-world advantage you seek, especially if you don't just run a single workload. They are also extremely rare to find, while those Chinese OEM variants were not only truly the best bins ever, but also quite easy to get en masse at low prices.

(Intel has a tradition of pumping out CPUs at heavily discounted prices very late in their life cycles, when the processes have matured to their absolute optimum. A lot of these "last round" chips ran quite a bit below their official TDP rating at, or above, their official clocks.)

If you really want to max out those turbos and observe their effect, you can play with BCLK a bit, and on Haswell there are some firmware tricks which unlock the all-core turbo clocks.

I did some of that, had my 18-core Haswell clocked at up to 4 GHz, and the results were hard to measure and impossible to actually notice in gaming. I left the Broadwell at 3.7 GHz, because I value it for its stability.
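If you want to verify whether an all-core clock actually holds under load, a simple logger along these lines works (again assuming Python with psutil; per-core frequency readings are most reliable on Linux):

```python
# Log the lowest per-core frequency observed over ~60 seconds of load,
# to check whether an all-core overclock really sticks.
import time
import psutil

lowest = {}
for _ in range(60):
    for idx, freq in enumerate(psutil.cpu_freq(percpu=True)):
        lowest[idx] = min(lowest.get(idx, freq.current), freq.current)  # MHz
    time.sleep(1.0)

for idx in sorted(lowest):
    print(f"core {idx:2d}: lowest observed {lowest[idx]:.0f} MHz")
```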

Overclocking is a game of its own, perhaps even more interesting than most computer games these days, but it's not mine.
 
slight typo: it's the 1640 v4 you refer to, not the 2640 v4...
Yes, I put the wrong first digit, but I did mean 1650v4, not 1640v4.

If you really want to max out those turbos and observe their effect, you can play with BCLK a bit, and on Haswell there are some firmware tricks which unlock the all-core turbo clocks.

I did some of that, had my 18-core Haswell clocked at up to 4 GHz, and the results were hard to measure and impossible to actually notice in gaming. I left the Broadwell at 3.7 GHz, because I value it for its stability.

Overclocking is a game of its own, perhaps even more interesting than most computer games these days, but it's not mine.
The best feat of Xeon overclocking I've run across was probably this Sapphire Rapids Xeon-W overclock:

[Image: Xeon w7-3465X voltage/frequency curve slide (Slide93) from Skatterbencher]

https://skatterbencher.com/2023/09/...00-mhz/#Xeon_w7-3465X_Voltage-Frequency_Curve

It involved "individually adjusting each P-core’s maximum allowed Turbo Ratio and core voltage". One big difference from what you're talking about is that select Xeon-W's do officially allow overclocking. I presume Intel sold some of these models unlocked, in lieu of having a separate line of HEDT CPUs, like they used to.

Personally, I'd never overclock anything other than a gaming machine. I value stability too much.
 
So the point: you may do better by not just jumping for the leading edge.
But because that takes the means to explore, naive buyers may not be able to exploit it and will pay the price elsewhere.
Mixing that up with MFG to get 1000 fps instead of 300 is what I was asking about.