No, it doesn't. It shows DDR4-3600 is the "winner." I put that in quotes because it beat low-end DDR4-2666 by 2.8%.
Try not to be so obtuse. It makes you look like a troll.
Anyway, I'll break it down for you. Different benchmarks stress different things: some are hardly affected by memory performance, others are primarily bandwidth-sensitive, and still others are more latency-sensitive. If you look at the whole page, you'll see quite a bit of variation across the different tasks.
In the physics simulation, DDR4-4000 is the winner, beating DDR4-2400 by 7.4%. In the Game Development test, it pulls out a 2.4% win over the next-best entry.
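To make the bandwidth-vs-latency point concrete, here's a rough C sketch (not from that review - the buffer size, seed, and timer are my own arbitrary choices): a sequential streaming sum that's mostly bandwidth-bound next to a dependent pointer chase that's mostly latency-bound. Faster DRAM helps the first kind of loop a lot more than the second.

```c
/* Two toy loops over the same amount of data: the streaming sum is limited
 * mostly by memory bandwidth, the dependent pointer chase mostly by memory
 * latency. Sizes and the RNG seed are arbitrary; compile with -O2 on a
 * POSIX system (clock_gettime). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)32 * 1024 * 1024)   /* 32M entries - far bigger than L3 */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    long long *data = malloc(N * sizeof *data);
    size_t    *next = malloc(N * sizeof *next);
    if (!data || !next)
        return 1;

    /* Bandwidth-bound: sequential pass, the prefetcher keeps the bus busy. */
    for (size_t i = 0; i < N; i++)
        data[i] = (long long)i;
    double t0 = now_sec();
    long long sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += data[i];
    double stream_s = now_sec() - t0;

    /* Latency-bound: build one big random cycle (Sattolo's algorithm), then
     * chase it - every load depends on the previous one, so raw bandwidth
     * barely matters and latency dominates. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j < i keeps it one cycle */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    t0 = now_sec();
    size_t p = 0;
    for (size_t i = 0; i < N; i++)
        p = next[p];
    double chase_s = now_sec() - t0;

    printf("streaming sum: %.3f s (sum=%lld)\n", stream_s, sum);
    printf("pointer chase: %.3f s (p=%zu)\n", chase_s, p);
    free(data);
    free(next);
    return 0;
}
```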
If that's your counterargument to why we should care about DDR5, it's not convincing.
It's a 12-core CPU @ 3.8 GHz base clock speed. What do you think happens when you crank up the core count, clock speed, and IPC even higher?
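Just as a rough sanity check of that scaling argument - every number below is an illustrative assumption, not a measurement, and real caches absorb most of this traffic - here's what the raw demand side looks like as the core count climbs against a dual-channel DDR4-3200 setup:

```c
/* Back-of-envelope only: the assumed sustained IPC and bytes of DRAM traffic
 * per instruction are illustrative, and caches absorb much of this demand in
 * practice. The point is the trend, not the absolute numbers. */
#include <stdio.h>

int main(void)
{
    const double clock_ghz       = 3.8;   /* base clock cited above            */
    const double ipc             = 2.0;   /* assumed sustained IPC             */
    const double bytes_per_instr = 0.5;   /* assumed DRAM traffic per instr.   */
    const double channel_gbps    = 25.6;  /* one 64-bit DDR4-3200 channel      */
    const int    channels        = 2;     /* typical desktop: dual channel     */

    const double supply = channels * channel_gbps;
    for (int cores = 8; cores <= 64; cores *= 2) {
        double demand = cores * clock_ghz * ipc * bytes_per_instr;  /* GB/s */
        printf("%2d cores: ~%6.1f GB/s demanded vs %5.1f GB/s supplied\n",
               cores, demand, supply);
    }
    return 0;
}
```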
That's a 1900X - the lowest-end, first-gen Threadripper. It has only 8 cores, big L3 caches keeping a lot of traffic off the DRAM, and Infinity Fabric getting in the way. Try that with a 3990X.
And, again, different workloads have different sensitivity to memory bandwidth. Just because those rendering tests don't show much impact doesn't mean other workloads won't.
DDR5 for the home user isn't going to mean squat from a performance standpoint.
Nonsense. You didn't look at any iGPU benchmarks. Gaming-wise, that's where you're going to see the biggest impact of memory performance.
Also, you're ignoring everything besides bandwidth. The voltage drop will make it more power-efficient (meaning longer battery life in laptops and phones), and the density improvements will mean higher memory capacities with fewer, lower-rank DIMMs, which is a cost savings.
And if ECC makes it into even the consumer version, it could improve reliability and perhaps further reduce costs by improving yields.
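On the voltage point specifically, here's a quick ballpark, treating the DIMM like any CMOS part where dynamic power goes roughly as V²·f and ignoring static power, termination, and the controller:

```c
/* Ballpark only: dynamic power scales roughly with V^2 * f, so at the same
 * data rate the drop from DDR4's 1.2 V nominal to DDR5's 1.1 V nominal is
 * worth roughly (1.1/1.2)^2. Static power, I/O termination, and controller
 * power are ignored here. */
#include <stdio.h>

int main(void)
{
    const double v_ddr4 = 1.2;  /* JEDEC nominal VDD, DDR4 */
    const double v_ddr5 = 1.1;  /* JEDEC nominal VDD, DDR5 */

    double ratio = (v_ddr5 * v_ddr5) / (v_ddr4 * v_ddr4);
    printf("dynamic power ratio: %.2f (about %.0f%% lower at the same clock)\n",
           ratio, (1.0 - ratio) * 100.0);
    return 0;
}
```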