News Kioxia works with Nvidia to prep XL-Flash SSD that's 3x faster than any SSD available — 10 million IOPS drive has peer-to-peer GPU connectivity fo...

To add to this, I’ve benchmarked the Intel Optane P5800X (3.2 TB) with FIO, and it’s capable of hitting 5 million IOPS with 512-byte random reads. When NAND SSDs hit 10 million IOPS with “read latencies in the range of 3 to 5 microseconds,” that will finally lay to rest any doubt that Optane has become ancient technology. Granted, that would be technology released in 2026 versus technology released in 2022, but four years will have been a remarkably small gap given how far ahead Optane was in 2017, when it was first available.
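As a quick back-of-the-envelope check (my own arithmetic, not from the article), Little's Law ties sustained IOPS, completion latency, and the number of I/Os in flight together, so you can see roughly what queue depth the claimed figures imply:

```python
# Little's Law: in-flight I/Os = IOPS * average completion latency.
# Rough sanity check of the claimed figures; my own arithmetic, not Kioxia's.

def inflight_ios(iops: float, latency_s: float) -> float:
    """Average number of outstanding I/Os needed to sustain `iops` at `latency_s`."""
    return iops * latency_s

# Claimed: 10 million IOPS with read latencies of 3 to 5 microseconds.
for latency_us in (3, 5):
    qd = inflight_ios(10_000_000, latency_us * 1e-6)
    print(f"10M IOPS at {latency_us} us -> ~{qd:.0f} I/Os in flight")
# Prints ~30 and ~50, i.e. a fairly modest aggregate queue depth if the latency claim holds.
```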

I’m assuming using SLC will be a big bottleneck on density unless this uses larger stacks than we’ve ever seen…
Given today’s 128 TB-class QLC SSDs, vendors could already build 32 TB SLC SSDs, but they’ve chosen not to (the largest currently available is 3.2 TB). 32 TB would dwarf the amount of DRAM you could stuff into a server.
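The arithmetic behind that is just bits-per-cell scaling (a sketch that ignores overprovisioning, ECC overhead, and any cell-design differences):

```python
# Capacity rescaled purely by bits per cell, starting from a 128 TB-class QLC drive.
# Ignores overprovisioning, ECC overhead, and cell-design differences.

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def rescale_capacity_tb(capacity_tb: float, from_cell: str, to_cell: str) -> float:
    return capacity_tb * BITS_PER_CELL[to_cell] / BITS_PER_CELL[from_cell]

for cell in ("TLC", "MLC", "SLC"):
    tb = rescale_capacity_tb(128, "QLC", cell)
    print(f"{cell}: ~{tb:.0f} TB from the same cells")
# TLC: ~96 TB, MLC: ~64 TB, SLC: ~32 TB
```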

It’s also an Iron Triangle problem here. Pick any two: speed, latency, or density. Even Optane never had more than one bit per cell.
 
To add to this, I’ve benchmarked the Intel Optane P5800X (3.2 TB) with FIO, and it’s capable of hitting 5 million IOPS with 512-byte random reads.
In the latest of Jens Axboe's exploits that I found, he managed to squeeze 13M IOPS out of a pair of P5800X drives, and that was on just a single core of an Alder Lake CPU.
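To put the single-core part in perspective, here's the per-I/O CPU budget that implies (the clock speed is my assumption for illustration, not a figure from Axboe's write-up):

```python
# Per-I/O CPU budget at 13M IOPS on a single core.
# The clock speed is an assumption for illustration, not a figure from the post.

assumed_core_clock_hz = 5.0e9   # plausible Alder Lake P-core boost clock
reported_iops = 13_000_000      # Axboe's figure across two P5800X drives

cycles_per_io = assumed_core_clock_hz / reported_iops
print(f"~{cycles_per_io:.0f} CPU cycles per completed I/O")  # roughly 385
```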

that would be technology released in 2026 versus technology released in 2022, but four years will have been a remarkably small gap given how far ahead Optane was in 2017, when it was first available.
The P5800X started shipping in early 2021.

Given today’s 128 TB-class QLC SSDs, vendors could already build 32 TB SLC SSDs, but they’ve chosen not to (the largest currently available is 3.2 TB). 32 TB would dwarf the amount of DRAM you could stuff into a server.
XL-Flash is optimized for access speed, not density. I don't know how much overhead that adds, but it can't be trivial, or else you'd expect a lot more NAND to be structured the same way.

It’s also an Iron Triangle problem here. Pick any two: speed, latency, or density. Even Optane never had more than one bit per cell.
Optane's plan for density was to scale in the 3rd dimension. Except NAND got there first and turned out to be a lot more scalable in 3D than Optane was.
 
Is that confirmed? That it will just run as pseudo-SLC, like every cache on NVMe drives already does? Is Samsung still manufacturing the 970 EVO? It’s the last MLC drive I remember.
XL-Flash is purpose-built to be low-latency and high-endurance. So, it's not just using standard NAND chips and running them in pSLC or pMLC mode. I think it's natively MLC.

There's not a lot of info about their new version, but here's a slide from their original 2018 presentation, explaining how it differs:

[Slide from Toshiba's 2018 XL-FLASH presentation]


Source: https://www.tomshardware.com/news/toshiba-3d-xl_flash-optane,37564.html

You can find a little more about it here:

I'm not sure if that uses newer generation chips or not, but it's definitely fewer IOPS than whatever this article is talking about. The P5800X, Optane's swan song, was good for up to 6.5M IOPS, although that's a fair bit more than Intel claimed.