News World's fastest Flash memory developed: writes in just 400 picoseconds

The article said:
But what exactly is picosecond-level memory? It refers to memory that can read and write data within one-thousandth of a nanosecond or one-trillionth of a second.

The newly developed chip, named “PoX” (Phase-change Oxide), is capable of switching at 400-picoseconds, substantially surpassing the previous world record of 2 million operations per second.
Ugh. You say 0.001 nanoseconds, only to then state that it's actually 0.4 nanoseconds. By calling it "picosecond memory", they probably just mean that it's sub-nanosecond scale.

Also, why did you switch units? 2 MHz corresponds to a period of 0.5 microseconds or 500 nanoseconds. Written this way, it shows that it's a roughly 3 orders-of-magnitude improvement.
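To put numbers on it (quick Python, using only the figures quoted from the article):

```python
# Back-of-envelope: compare the old record (2 MHz cycling) to the new 400 ps switch time.
old_rate_hz = 2e6                 # previous record: 2 million operations per second
old_period_s = 1 / old_rate_hz    # = 5e-7 s = 500 ns
new_period_s = 400e-12            # claimed 400 picoseconds

print(f"old period: {old_period_s * 1e9:.0f} ns")         # 500 ns
print(f"new period: {new_period_s * 1e9:.1f} ns")         # 0.4 ns
print(f"speedup:    {old_period_s / new_period_s:.0f}x")  # 1250x, i.e. roughly 3 orders of magnitude
```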

The article said:
Traditional SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory) can write data in times ranging from 1 to 10 nanoseconds.
Hmm... from what I've read, I think CPU registers are SRAM. That means it's not the SRAM itself that takes on the order of a nanosecond, but rather the switching & muxing needed to read and write it from an on-die array. I somehow doubt this new PoX memory is solving that problem.
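A quick illustration of the point, with made-up but typical CPU numbers (these are my assumptions, not from the article):

```python
# Rough illustration (assumed, typical values): registers and L1 cache are both SRAM,
# yet an L1 access still takes a few cycles because of address decode, wordline/bitline
# drive, sense amps, and muxing -- not because the SRAM cell itself is slow.
clock_ghz = 4.0                    # assumed core clock
cycle_ns = 1 / clock_ghz           # 0.25 ns per cycle
l1_latency_cycles = 4              # typical L1 load-to-use latency (assumption)

print(f"L1 access ~ {l1_latency_cycles * cycle_ns:.1f} ns")  # ~1 ns, i.e. the quoted 1-10 ns range
```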

The article said:
flash memory like what's used in SSDs and USB drives is non-volatile, so it retains data even without power. The downside is that it’s much slower, usually taking microseconds to milliseconds.
Milliseconds is like... hard drives. IIRC, a fast SSD has a read latency of about 40 microseconds. The underlying NAND will be faster than that, but still in the realm of tens of microseconds.
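Side by side, using the 400 ps claim and my rough recollections for everything else (ballpark assumptions only):

```python
# Orders-of-magnitude comparison; only the 400 ps figure comes from the article.
latencies_s = {
    "PoX switch (claimed)": 400e-12,
    "NAND read (typical)":  25e-6,   # tens of microseconds (assumption)
    "fast SSD read":        40e-6,   # rough recollection, not from the article
    "HDD access":           5e-3,    # milliseconds territory
}

base = latencies_s["PoX switch (claimed)"]
for name, t in latencies_s.items():
    print(f"{name:22s} {t:8.2e} s  ({t / base:,.0f}x the PoX figure)")
```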

The article said:
This speed limitation makes flash memory unsuitable for modern AI (Artificial Intelligence) systems, which often need to move and update large amounts of data almost instantly during real-time processing.
SanDisk would like a word with you.

I think the real win here is to get SRAM-level speeds with (hopefully) DRAM-level density and NAND-level power consumption. Equipping a machine with lots of RAM burns a lot of power. So, just reducing that power has immediate benefits.

As mentioned in the article, it also means that low-power devices like phones could stream in model weights directly from storage, without the model having to be resident in memory. This could save power and (if the density of their storage is similar to NAND), facilitate utilizing larger models than would otherwise be feasible.
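To make the "stream weights from storage" idea concrete, here's a minimal sketch of what that looks like on the software side today, via memory-mapping; the file name and shapes are made up, and fast non-volatile storage is exactly what would make this pattern cheap:

```python
import numpy as np

# Create a stand-in weight file so the sketch is runnable; on a real device this
# would be the model file already sitting in (fast, non-volatile) storage.
shape = (4096, 4096)
np.random.rand(*shape).astype(np.float16).tofile("model_weights.bin")

# mmap keeps the weights backed by storage; pages are pulled in on demand,
# so the model never has to be fully resident in RAM.
weights = np.memmap("model_weights.bin", dtype=np.float16, mode="r", shape=shape)

activations = np.random.rand(1, shape[0]).astype(np.float16)
out = activations @ weights          # only the touched pages need to be resident
print(out.shape)                     # (1, 4096)
```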
 
Someday I hope one of these supposedly amazing technological advancements we've been getting every few years (for over a decade now) ACTUALLY becomes a thing :|

Theory is fine, but pointless if it never reaches the real-world market.
 
I'm somewhat excited by them having a functional chip and working with manufacturers to speed up the process. So it's not JUST a lab experiment.

As for specs, well, it sounds as if it could replace all types of memory at the same time. So no need for several levels of cache, DRAM, and storage, could just as well be all the same memory.

Now, there are two obvious caveats. One is obviously manufacturing cost, but if they're already mentioning smartphones and not just AI farms, they're either very optimistic or it might actually be feasible at relatively low cost.

The other is the controller... Will it be able to push data at the speeds these latencies suggest? Will it be multi-channel to enable high bandwidth? Will the combination of bandwidth and latency make the controller too big or too hot? So much depends on this part of the tech... Or is it doable to embed the controller into the CPU, the way the north bridge with its memory controller was absorbed into the CPU? If they can make it past all of that, and get high bandwidth and low latency while embedded into the CPU, we can say bye to RAM and SSD at the same time, making devices simpler and faster.
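Rough numbers on the controller question (everything here is an assumption for illustration, not a spec from the article):

```python
# Back-of-envelope: what the controller/interface does to a 400 ps cell.
access_time_s = 400e-12          # claimed cell switch time
line_bytes = 64                  # one CPU cache line
per_access_busy = 2e-9           # assume ~2 ns per transfer once controller/bus overhead is added

ideal_per_channel = line_bytes / access_time_s / 1e9     # GB/s if the cell were the only limit
real_per_channel = line_bytes / per_access_busy / 1e9    # GB/s with assumed overhead
target_bw = 100                                          # GB/s, a DRAM-class target (assumption)

print(f"cell-limited:  ~{ideal_per_channel:.0f} GB/s per channel")   # ~160 GB/s
print(f"with overhead: ~{real_per_channel:.0f} GB/s per channel")    # ~32 GB/s
print(f"channels for {target_bw} GB/s: ~{target_bw / real_per_channel:.0f}")
# i.e. the controller/interface, not the cell, quickly becomes the limiting factor.
```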
 
The article doesn't mention the capacity of this memory. Other websites say the creators are hoping for capacities of tens of megabytes around 2030... If that information is right, it sounds like something from 2005 that they'll probably realize 25 years later.
 
As for specs, well, it sounds as if it could replace all types of memory at the same time. So no need for several levels of cache, DRAM, and storage, could just as well be all the same memory.
I think you're getting ahead of the info we have. They didn't tell us about the amount of energy involved in reading or writing it, the density, or the endurance. Optane was originally envisioned to be used like DRAM, but it turned out to have insufficient endurance. Compared to NAND, it took quite a lot of energy to write and wasn't nearly as dense.
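The endurance concern is easy to put in rough numbers; none of these figures are from the article, they're just illustrative assumptions:

```python
# Why endurance matters if you try to use a non-volatile part as DRAM.
write_bw = 10e9            # assume 10 GB/s of sustained write traffic to one device
capacity = 64e9            # assume a 64 GB part
endurance_cycles = 1e6     # assume 1M write cycles per cell (generous vs NAND)

full_writes_per_day = write_bw * 86400 / capacity        # full-device writes per day
lifetime_days = endurance_cycles / full_writes_per_day

print(f"{full_writes_per_day:.0f} full-device writes/day -> ~{lifetime_days:.0f} days of life")
# Even a generous 1e6-cycle endurance lasts only a couple of months under DRAM-like
# write traffic; NAND-class endurance (1e3-1e4 cycles) would be gone in hours.
```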

The other is controller... Will it be able to push data at speeds expected by these latencies?
If it needs anything more sophisticated than a DRAM controller, then it won't be a very practical substitute for DRAM. Basically, it can't require something as sophisticated as an SSD controller, or else it would perform no better than the Optane SSDs we've seen (which were very good, but not nearly as good as DRAM).

we can say bye to RAM and SSD at the same time, making devices simpler and faster.
Somehow, I doubt we'll get rid of DRAM that easily, but we'll see. In the worst case, it might not be as dense as NAND, in which case it'd just be another memory tier, instead of replacing an existing one.


P.S. I just want to point out that this is one kind of research CHIPS helps support. You don't get innovations like this, simply by imposing tariffs.
 
P.S. I just want to point out that this is one kind of research CHIPS helps support. You don't get innovations like this, simply by imposing tariffs.
Actually, considering it's a Chinese innovation, it could very well be US tariffs that spurred the research.

You won't let anyone sell us stuff anymore? Ok, we'll just make it ourselves only better.