He was talking about native memory frequency; the 6400 he cited in the post is the official JEDEC number, not an XMP/EXPO profile.
Yes, I know Intel now supports JEDEC DDR5-6400 with the corresponding timings, but my point was about higher transfer rates and, in particular, that CUDIMMs make zero sense for JEDEC 6400 MT/s: I'd argue virtually 100% of new kits out there can do it (segmentation be damned), they just aren't allowed to. Remember that CUDIMMs have their own clock driver, so they don't depend on the CPU's clock signal (IMC) and trace integrity in the same way (caveats apply).
AMD, for instance, can somewhat reliably go up to 7200 MT/s, and it's well documented. Performance doesn't vary wildly because the IF clocks are still the bottleneck for the clusters (CCDs), so for Zen CPUs the IF clock and the effective ratio matter way more than pure RAM transfer rates (UCLK, MCLK, and FCLK at 3:3:2 being the "golden" ratio, with some exceptions). This is more than likely why CUDIMM has no official support on AMD: it makes little to no sense with the current IMC (or I/O-die process). They've said "you can try" without committing to anything, and I get it: official support or bust. No idea if AMD would encourage motherboard vendors to add CUDIMMs to their QVLs; who knows, perhaps some already have?
Buildzoid has great videos on the topic, and they're quite comprehensive for "fine tuning" almost any Zen CPU. AM4 has UCLK and MCLK coupled; AM5 with Zen 4 decoupled them, but you still want to run them 1:1 as much as possible. The Intel equivalent is Gear 1 running MCLK and UCLK 1:1, with Gear 2 running the memory controller at half the memory clock. No idea how impactful it is across their uArchs over time, but I think the idea is to always try for Gear 1. Which reminds me: XMP on B-series chipsets comes with the caveat that you can't change the Gear setting, as that's a CPU OC feature, so that caveat exists too.
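If it helps to see the arithmetic, here's a rough sketch (Python, purely illustrative; the helper name is mine, the 2/3 FCLK factor is just the commonly cited AM5 sweet spot rather than anything official, and the Gear mapping is the usual simplification):

```python
# Rough back-of-the-envelope DDR5 clock math (hypothetical helper, not a tuning tool).
# Assumes: MCLK = MT/s / 2; AM5 sweet spot keeps UCLK 1:1 with MCLK and FCLK at ~2/3 of it;
# Intel Gear 1 runs the memory controller 1:1 with MCLK, Gear 2 at half of it.
def ddr5_clocks(transfer_rate_mts: int) -> dict:
    mclk = transfer_rate_mts / 2                     # DDR: two transfers per clock
    return {
        "MCLK (MHz)": mclk,
        "AM5 UCLK 1:1 (MHz)": mclk,                  # decoupled on Zen 4/5, but 1:1 is the target
        "AM5 FCLK ~2/3 (MHz)": round(mclk * 2 / 3),  # e.g. 3200 -> ~2133
        "Intel Gear 1 IMC (MHz)": mclk,
        "Intel Gear 2 IMC (MHz)": mclk / 2,
    }

print(ddr5_clocks(6400))
# {'MCLK (MHz)': 3200.0, 'AM5 UCLK 1:1 (MHz)': 3200.0, 'AM5 FCLK ~2/3 (MHz)': 2133, ...}
```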
And what about RDIMMs for this class of CPU, which you could argue qualifies as a "work CPU"? What do you think about that in particular?
Thank you for the considered and detailed feedback, much appreciated! Keeping it short...
1. I hear you; a good and valid point. My only response is that the article on this issue made no reference to overclocking; it simply stated "...the new LGA1851 socket exclusively with Z890 chipset...", although I see they have now updated it to include references to the B860 and H810 options.
3. Interesting point, and I'd only challenge you on your use of the word "force" regarding Intel and CUDIMM. I may be wrong, but I think it's actually the JEDEC standard that requires DDR5 DIMMs running at 6400 MT/s or higher to include a CKD, not Intel. This is an industry standard AMD chooses not to adopt, even though they are otherwise a very active contributor to JEDEC standards. Prices for new technology are always higher to begin with; I'm quite sure we'll all be using CUDIMMs in 3-5 years' time, bought at reasonable prices by then.
My last point, to declare my own bias... I'm writing this to you on a machine with an Ultra 7 265K CPU and a Radeon 9070 XT 16GB GPU, and this combo is a little rocket for my own work (not much gaming). I just wish AMD would hurry up and release ROCm for the 9000-series graphics cards on Windows and Linux; they are well behind NVIDIA/CUDA for inference stacks.
No worries. It's always good to have scrutiny, both to read it and to have a proper discussion about it. So, that being said:
1.- Overclocking is mentioned and even has its own evaluation, where Intel wins. With the new information provided, do you think that is correct? Just having "headroom" nowadays is a rather poor criterion for determining the "best overclocker", but I agree with the verdict, because more chipsets supporting the same level of OC as premium boards is a value proposition rather than a technically superior one, though the two are intertwined, I'd say.

Having "better RAM OC" is a rather moot point when all that extra bandwidth isn't utilised by most apps, in a similar vein to V-Cache's caveat. For Intel it does give that extra oomph it needs, but at a great extra cost, which AMD doesn't really need as long as you know how to set up the RAM (see my reply to Terry above). My 9950X3D is running 6400 MT/s CL32 with that 3:3:2 ratio (IF at 2133 MHz, which is stupidly hard to get stable, but mine is!), and my overall platform performance is several points above any "stock" and even "PBO'd" 9950X or X3D. I'm sure most Intel enthusiasts who tweak can say the same, but you need to compare baselines.

Again: overclocking is never guaranteed, so you need to compare baselines. Both AMD and Intel are super scummy for """"suggesting"""" (sending fast memory) for their CPU reviews. I'm glad Tom's is still holding the line and doing proper "baseline performance" analysis, which is what most people getting OEM PCs would see (it's much more representative).
3.- I used "force" in that context to explain that if you use JEDEC, at best you'll get close to, but not consistently surpass, the 9950X. Paul has already provided the links to the original articles, and you can check their original review of the 285K to see how it stacks up with JEDEC clocks vs AMD:
https://web.archive.org/web/2025072...nents/cpus/intel-core-ultra-9-285k-cpu-review. TL;DR: the 9950X still wins in productivity by a good margin. But the eternal caveat with productivity apps is: make sure your important workloads perform best on the target CPU. I use software that actually leverages AVX512, so AMD just destroys Intel here (word aptly and intentionally used, yes), because Intel shot themselves in the foot there, rather spectacularly. Cutting one of the ISA extensions they'd been pushing for so many generations was really dumb to me; they just handed AMD that niche. (If you want to check whether your own box even exposes it, there's a quick snippet below.) And as to why AMD isn't fussed about CUDIMM, well, I gave that answer to Terry as well (kind of). TL;DR: their IMC can't really make good use of super high clocks and see better performance than lower-clocked kits, because the ratios go to the crapper.
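Purely as an aside, here's a quick way to check whether a box exposes AVX-512 at all (Python, Linux-only since it just reads /proc/cpuinfo; the function name is mine, not from any tool mentioned above):

```python
# Quick AVX-512 sanity check on Linux: scans /proc/cpuinfo for the avx512f flag.
# Illustrative only; a hypothetical helper, not tied to any specific app or article.
def has_avx512f() -> bool:
    try:
        with open("/proc/cpuinfo") as cpuinfo:
            return any("avx512f" in line for line in cpuinfo if line.startswith("flags"))
    except OSError:
        return False  # not Linux, or /proc unavailable

if __name__ == "__main__":
    print("AVX-512F exposed:", has_avx512f())
```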
Regards.