[SOLVED] NVMe as RAM?

iiSlashr

Reputable
Mar 10, 2019
380
41
4,840
This is probably just a stupid thought, but is there a reason why we don't use NVMe/PCIe SSDs as short-term memory instead of RAM? Couldn't we use a 1TB PCIe drive for significantly cheaper access to large amounts of memory? Speeds for these drives seem to be climbing monthly; even relatively cheap $200-ish drives are reaching 3.5 GB/s transfer speeds.

Again, I'm sure there's a simple explanation, I'm just curious.
 
Solution
There are a few problems with using flash memory as RAM:
  • Latency is the biggest one. The fastest SSDs are still orders of magnitude slower than RAM: DRAM access takes a few dozen nanoseconds in the best case and stays on the nanosecond scale even in the worst case, while flash memory is still on the microsecond scale in its best case.
  • Flash memory must be erased before its space can be reused. DRAM doesn't have this problem, or at least, if it does, it has no real impact on performance.
  • Flash memory also erases at the block level for performance reasons, and the erase blocks on faster drives can be measured in megabytes. Making erasure more fine-grained slows the drive down.
  • Flash memory has a wear limit. RAM-style write traffic would probably chew through even MLC in short order, and it would definitely kill QLC quickly.
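To put that wear limit in perspective, here's a back-of-the-envelope sketch. The 600 TBW rating and the 1 GB/s sustained write rate are assumed round numbers for illustration, not figures from any particular drive:

```python
# Rough endurance estimate for a flash drive used as RAM.
# Assumed numbers: a 1 TB drive rated for 600 TB written (TBW),
# absorbing RAM-like sustained write traffic of 1 GB/s.
tbw_bytes = 600e12       # rated endurance, in bytes written
write_rate = 1e9         # sustained writes, bytes per second

lifetime_seconds = tbw_bytes / write_rate
lifetime_days = lifetime_seconds / 86_400
print(f"Worn out in about {lifetime_days:.1f} days")  # about 6.9 days
```

Even with generous endurance assumptions, the drive is dead in roughly a week, where DRAM would shrug off the same traffic indefinitely.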
3D XPoint doesn't seem to have most of these issues, though its latency is still substantial compared to DRAM's.

Remember: most requests for memory are tiny. CPUs don't ask for dozens of megabytes at a time, especially given that their caches still haven't reached the gigabyte scale.
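Those tiny requests are why latency, not bandwidth, dominates. A quick sketch with assumed ballpark latencies (roughly 70 ns for DRAM, 10 µs for flash) shows how little effective throughput a serial stream of 64-byte cache-line reads actually gets:

```python
# Effective throughput when each 64-byte request waits out the full
# access latency before the next one is issued (assumed latencies).
request_bytes = 64       # one cache line
dram_latency = 70e-9     # seconds, assumed typical DRAM access
flash_latency = 10e-6    # seconds, assumed best-case flash access

dram_mb_s = request_bytes / dram_latency / 1e6
flash_mb_s = request_bytes / flash_latency / 1e6
print(f"DRAM:  ~{dram_mb_s:.0f} MB/s")   # ~914 MB/s
print(f"Flash: ~{flash_mb_s:.1f} MB/s")  # ~6.4 MB/s
```

Under these assumptions, the flash device's multi-GB/s headline bandwidth is almost entirely wasted; it delivers a few MB/s because every small request pays the latency in full.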
 

Barty1884

Retired Moderator
IIRC, latency is the biggest argument.
You'd also have reliability concerns. SSDs can only take a finite number of writes, and RAM is constantly being written to.

Not to mention that RAM is still orders of magnitude 'faster' than any SSD.
I think any reasonable consumer SSD tops out around ~5 GB/s sequential read/write (theoretically, at least).

DDR4, depending on the speed, will be anywhere from 20-30 GB/s.
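For reference, DDR4's theoretical peak falls straight out of the transfer rate times the 64-bit (8-byte) bus width; DDR4-3200 is an assumed example speed here:

```python
# Theoretical peak bandwidth of one DDR4 channel:
# transfers per second x 8 bytes per transfer (64-bit data bus).
transfers_per_second = 3200e6   # DDR4-3200, assumed example speed
bus_bytes = 8                   # 64-bit bus = 8 bytes per transfer

peak_gb_s = transfers_per_second * bus_bytes / 1e9
print(f"DDR4-3200 peak: {peak_gb_s:.1f} GB/s per channel")  # 25.6 GB/s
```

That 25.6 GB/s per channel lands right in the 20-30 GB/s range quoted above, and a dual-channel setup doubles it.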
 
In addition to all the above about latency and raw speed, there's the matter of redesigning the motherboard to do it.
You could add it on over the PCIe bus, much like old IBM PCs could expand their RAM via ISA cards, but the CPU would need to be made aware of it, since the memory controller is only wired to the DIMM slots.

Or you could do something stupid like an actual NVMe-to-DIMM converter, not that weird "we're gonna make an M.2 slot out of a DIMM" thing that ASUS did.
 

iiSlashr

Reputable
Mar 10, 2019
380
41
4,840
Thanks for all the responses, that's super interesting to me. I guess maybe in the future we'll see something like this with PCIe Gen 5 or beyond; definitely neat to think about. Another thought: once our SSDs get fast enough, is there a point in having RAM intermediate between storage and usage? For example, say that by PCIe Gen 6, R/W speeds are past what we can achieve with (presumably) DDR5; what's the point of a go-between like RAM? Like someone said before, it'd require a new way for the CPU to communicate with storage with that part cut out, but wouldn't it decrease latency if you could remove a step?
 
Another thought: once our SSDs get fast enough, is there a point in having RAM intermediate between storage and usage? For example, say that by PCIe Gen 6, R/W speeds are past what we can achieve with (presumably) DDR5; what's the point of a go-between like RAM?
I'm sure hardware engineers would like to flatten the memory hierarchy as much as possible. However, the sticking point I want to bring out is that most data requests are small, less than a few megabytes. Raw bandwidth isn't the deciding factor here, because with requests that small, most of the device's bandwidth potential sits unused while each request waits out the access latency. So until storage latency is as low as RAM's, storage is never going to replace RAM.

Like someone said before, it'd require a new way for the CPU to communicate with storage with that part cut out, but wouldn't it decrease latency if you could remove a step?
The latency issue is inherent to the storage technology itself. Assuming a CPU running at 4 GHz, RAM's latency is measured in hundreds of clock cycles; even the best SSD still needs thousands.
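Those cycle counts come straight from multiplying latency by clock speed. The 75 ns and 1 µs figures below are assumed ballpark values, not measurements of any specific part:

```python
# Convert memory latency into CPU clock cycles at 4 GHz (assumed values).
cpu_hz = 4e9            # assumed 4 GHz CPU clock
dram_latency = 75e-9    # assumed typical DRAM access time
ssd_latency = 1e-6      # assumed best-case SSD access time

dram_cycles = dram_latency * cpu_hz
ssd_cycles = ssd_latency * cpu_hz
print(f"DRAM: {dram_cycles:.0f} cycles, SSD: {ssd_cycles:.0f} cycles")
# DRAM: 300 cycles, SSD: 4000 cycles
```

So even a best-case SSD access stalls the CPU for thousands of cycles, an order of magnitude or more beyond DRAM, no matter how wide the pipe to the drive gets.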