They're talking about energy cost per megabyte of memory, which is orders of magnitude better for flash than DRAM. A high-end server will come with 192 to 256 GB of DRAM; you could add another 1 to 2 TB of "flash memory" to that for significantly cheaper than what another 1 to 2 TB of DRAM would cost (if the system could even handle that many chips on its memory bus). The tier-II memory wouldn't be nearly as fast as DRAM, but it would be much faster than SSD swap space (I/O and protocol limits) or a regular HDD.
See, nothing can be run out of swap space; swap is only used for non-active processes. If a process needs to read something that's been swapped out, the OS must first page it from disk into RAM, paging something else from RAM to disk to make room. This is talking about allowing programs to read directly from the flash storage as though it were memory, without going through the OS's swap system at all. The memory management unit would have to realize that there is a difference between DRAM and flash RAM (FRAM?) and allocate them accordingly by priority.
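The closest thing you can do today is memory-mapping: map a file that happens to live on the flash device into your address space, and reads/writes look like ordinary loads and stores instead of explicit I/O. A rough sketch in Python (the filename here is made up, and a scratch file stands in for the flash tier; real tiered memory would need the OS/MMU support described above):

```python
import mmap
import os
import tempfile

def flash_as_memory_demo(size=4096):
    """Map a scratch file (a stand-in for a flash-backed region) into the
    process address space and access it with memory-style indexing."""
    # Hypothetical path; imagine this file sits on the flash device.
    path = os.path.join(tempfile.gettempdir(), "flash_tier_demo.bin")
    with open(path, "wb") as f:
        f.write(b"\x00" * size)  # pre-size the backing file (one page)
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), size) as mem:
            mem[0:5] = b"hello"   # looks like a RAM write, lands on "flash"
            data = bytes(mem[0:5])  # looks like a RAM read
    os.remove(path)
    return data

print(flash_as_memory_demo())  # b'hello'
```

The point of the sketch is just the access pattern: the program never issues a read() or write(), yet the data lives on the storage device, which is the "treat flash as memory" idea minus the hardware tiering.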
This is useless tech for consumers but very interesting for hosting virtual machines, where your limit quickly becomes memory and I/O, not processor cycles.