> I am curious to see what next-gen Non-Volatile Memory (NVM) options that connect through CXL will appear on the market in a couple of years.
>
> And also, what will the latency be? It will be using the PCIe 5 or PCIe 6 bus, whereas Intel Optane DC PMEM was pluggable on the DRAM bus.

Last things first: CXL doesn't run over PCIe; it runs instead of it. They share the same PHY specification but use different protocols. Two big differences are CXL's support for cache coherency and its lower latency.
With that out of the way, let's dig into the latency question. The first piece of this is how much latency Optane PMem modules have. According to this helpful article I found, the answer seems to be about 350 ns.
Just How Bad Is CXL Memory Latency? — www.nextplatform.com
The same article also speaks to how much latency CXL is likely to add. It doesn't give a single direct answer, but it quotes a figure of 170 to 250 ns, which it says is on par with accessing DRAM attached to a different CPU in a NUMA system (presumably 2-socket). So anyone using Optane PMem as a "cheap" or more-scalable alternative to DRAM should find the transition to CXL.mem fairly painless. With a good memory-tiering implementation, you could even see a noticeable performance improvement.
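To see why tiering could come out ahead of flat PMem, here's a back-of-envelope sketch using the latency figures above. The 90% DRAM hit rate and the ~100 ns local-DRAM number are assumptions for illustration, not measurements:

```python
# Back-of-envelope: effective latency under memory tiering.
# Latency figures come from the numbers quoted above; the 90% DRAM
# hit rate is an assumption for illustration, not a measurement.
DRAM_LOCAL_NS = 100    # typical local DDR access (assumed)
CXL_MEM_NS = 250       # upper end of the quoted CXL.mem range
OPTANE_PMEM_NS = 350   # Optane PMem figure from the article

def effective_latency(hit_rate: float, fast_ns: float, slow_ns: float) -> float:
    """Average access latency when hit_rate of accesses land in the fast tier."""
    return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

tiered = effective_latency(0.9, DRAM_LOCAL_NS, CXL_MEM_NS)
print(f"tiered DRAM+CXL: {tiered:.0f} ns vs flat PMem: {OPTANE_PMEM_NS} ns")
# tiered comes out at 115 ns, well under the 350 ns flat-PMem figure
```

Even if the hot set only hits DRAM 70% of the time, the blend still beats flat PMem by a wide margin.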
> Any guess? Nantero carbon-nanotube NRAM? Everspin / Avalanche magnetic MRAM (STT-MRAM, SOT-MRAM, VG-SOT-MRAM, …)?

For anyone who truly needs persistence, I think NAND-backed DRAM will be hard to beat, unless some of those technologies can really blow past the GB/$ of DDR5.
> It is a very slow process, and some government [investment] and/or regulations (e.g. a law mandating that by 2035 all IT systems use bistable Non-Volatile Memory (NVM) instead of DRAM…) would tremendously help accelerate the transition from volatile memory to NVM.
>
> I believe that NVM is the holy grail of any IT system, as it would be truly disruptive to the way IT systems are architected (no more boot-up time), and I really, really want it to happen as soon as possible.

No, that's insane. First, software is buggy, and it's generally good to be able to start from a clean slate every so often. Hardware is also buggy; the same conclusion applies. And you want startup and shutdown to be well-tested, which would be less the case if they were meant to be even rarer events.
Secondly, datacenters essentially have uninterrupted power, giving you as much uptime as you want. And with VMs that you can suspend and resume at will, you don't need persistent memory just to avoid rebooting when you don't want to.
> I had more in mind all IoT devices, laptop computers, tablets,

IoT devices either need to stay on 24/7, or they are probably slow enough that they could even use existing NV memory, if they need the power savings badly enough.
Laptops and tablets do fine with sleeping. In neither case would you want to take the performance hit of using NV memory instead of RAM. However, if it were fast enough, I guess it'd be cool to have a pseudo-hibernation where you page out all of DRAM to NV storage.
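That page-out step is cheaper than it sounds. A back-of-envelope sketch, where both the RAM size and the write bandwidth are assumed figures (16 GiB, NVMe-class 3 GB/s sequential writes):

```python
# How long would dumping all of DRAM to NV storage take?
# Both figures are assumptions: 16 GiB of RAM, and a 3 GB/s
# sequential write path (NVMe-class) to the NV device.
ram_bytes = 16 * 2**30
write_bw_bytes_per_s = 3e9
seconds = ram_bytes / write_bw_bytes_per_s
print(f"~{seconds:.1f} s to page out 16 GiB at 3 GB/s")
# roughly 5.7 seconds for a full dump
```

In practice you'd only write dirty pages, so the real cost would usually be lower still.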
> desktop computers… that are also "at risk" in case of power failure (you lose everything you were working on).

Autosave has finally been incorporated into most software. And gosh, if you really did suddenly lose power, maybe some of the program's working state would have been corrupted in the process. Did you ever consider that?
> Also, for IoT devices: when there is no activity, even for weeks or months, the memory in standby would not consume power (add some power-harvesting technology, and you might have a remote IoT sensor device that could run for months / years).

I'm pretty sure they already manage this with existing technologies.
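For what it's worth, the months-to-years claim already pencils out with today's duty-cycled low-power parts. A back-of-envelope with assumed figures (2 µA sleep current, 15 mA active, 2 s awake per hour, 1000 mAh battery):

```python
# Back-of-envelope for a duty-cycled IoT sensor (all figures assumed):
SLEEP_UA = 2.0         # standby current in microamps
ACTIVE_MA = 15.0       # current while sampling/transmitting, milliamps
ACTIVE_S_PER_HOUR = 2  # seconds awake each hour
BATTERY_MAH = 1000     # roughly two AA-class cells

# Time-weighted average draw over one hour, in mA:
avg_ma = (ACTIVE_MA * ACTIVE_S_PER_HOUR
          + (SLEEP_UA / 1000) * (3600 - ACTIVE_S_PER_HOUR)) / 3600
years = BATTERY_MAH / avg_ma / 24 / 365
print(f"average draw {avg_ma * 1000:.1f} µA -> about {years:.1f} years")
# works out to roughly 10 µA average, on the order of a decade
```

So the standby power of the memory is only one term in the budget; the radio and sensing duty cycle usually dominates.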