High power usage at idle, really? One could draw some conclusions based on the graphs provided, but definitely not that.
Why not? Are you missing these charts?
https://cdn.mos.cms.futurecdn.net/Wpec6VRKFNm8Yz2XX46WN3.png
At idle and in light workloads, AMD's CPUs sit about 14–17 watts higher than the Core Ultra 9 285K. Is that a terrible thing that should prevent people from buying them? No, but it
is a weakness of AMD's design right now, and something AMD should look into addressing with future designs. Even though the gap is only 15W or so (roughly two 60W-equivalent LED bulbs), it should be possible to get that power use down.
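To put that 15W in perspective, here's a quick back-of-the-envelope calculation; the hours per day and electricity price are assumptions I've picked for illustration, not figures from the review.

```python
# Rough annual cost of a 15W idle-power gap (all inputs are assumptions).
IDLE_DELTA_W = 15      # extra idle draw vs. the Core Ultra 9 285K
HOURS_PER_DAY = 8      # assumed time the PC sits idle or lightly loaded
PRICE_PER_KWH = 0.15   # assumed electricity price in USD

extra_kwh_per_year = IDLE_DELTA_W / 1000 * HOURS_PER_DAY * 365
extra_cost_per_year = extra_kwh_per_year * PRICE_PER_KWH

print(f"Extra energy: {extra_kwh_per_year:.1f} kWh/year")   # ~44 kWh
print(f"Extra cost:   ${extra_cost_per_year:.2f}/year")      # ~$7
```

In dollar terms it's small, which is the point: it's a design weakness worth fixing, not a reason to skip the chip.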
Imagine a phone or other mobile device that used almost 3X more power at idle than the competition; that would be a major concern. And if it used twice as much power in light workloads like YouTube playback, that would also be bad.
Some people seem to think the pros and cons are us screaming "THESE ARE AMAZING ASPECTS" and "THESE ARE THE WORST ASPECTS," but they're really just a high-level summary of some key points: things that are going well, things that could be improved (without the shouting).
From reading and watching several reviews (Gamers Nexus among them), I'm surprised that the 9950X3D and 9900X3D did not demolish the lesser, older 9800X3D. In fact, Steve Burke from GN said the 9950X3D was on par with the 9900X3D and 9800X3D. I decided to go with the 9800X3D for my gaming build, not because of price but because it was the right choice for a gaming rig. I recently snagged a new 9800X3D on sale from Amazon for $443.
I'm not sure why anyone would be surprised that, in gaming, the 8-core X3D part (9800X3D) and the 8-core X3D plus 8-core standard CCD part (9950X3D) perform better than the 6-core X3D plus 6-core standard configuration (9900X3D). The whole point of the dual-CCD X3D chips is to shift cache-sensitive workloads onto the CCD with the extra cache, away from the CCD that lacks it.
It's a whitelist software solution (meaning AMD explicitly lists which games should effectively disable the non-X3D CCD), and not every game benefits. But as we've seen plenty of times in the past, dual-CCD (and quad-/hex-/octal-CCD Threadripper) solutions add latency to memory and cache transactions, and that reduces gaming performance.
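For anyone curious what "effectively disable the non-X3D CCD" amounts to in practice, here's a rough sketch of doing it by hand with a CPU affinity mask via psutil. This is not AMD's actual driver/Game Bar mechanism, and which logical CPUs map to the V-Cache CCD varies by OS and chip, so the core list below is purely an assumption.

```python
# Minimal sketch: restrict a game process to the V-Cache CCD by hand.
# NOT AMD's actual whitelist/driver logic -- just an affinity-mask demo.
import psutil

# Assumption: logical CPUs 0-15 are the 3D V-Cache CCD (8 cores + SMT siblings).
# Real numbering depends on the OS and the specific chip; verify before using.
X3D_CCD_CPUS = list(range(0, 16))

def pin_to_x3d_ccd(pid: int) -> None:
    """Limit a process to the V-Cache CCD, mimicking the whitelist's effect."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(X3D_CCD_CPUS)
    print(f"{proc.name()} (pid {pid}) restricted to CPUs {X3D_CCD_CPUS}")

# Usage (hypothetical pid of a running game):
# pin_to_x3d_ccd(12345)
```

Pinning everything this way would hurt heavily threaded workloads, which is exactly why AMD only applies it to games on its whitelist.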
In fact, that's precisely the problem Intel has with Arrow Lake: higher memory latency means lower gaming performance.