News: Intel schedules the end of its 200-series Optane memory DIMMs, with shipments winding down in late 2025

Optane/3DXPoint is such a cool tech; it really is depressing that it dead-ended with a company that won't even use it.

Optane cache drives were great - insane endurance, latency and random R/W speeds, exactly what you need in a cache drive. I wish they were still making them, or even that the ones they did make were available anywhere at a remotely reasonable price.

My home server is too antique to use either of the DIMM types available, and only supports up to 512GB of RAM, which it has, so I wouldn't want to replace any in it.

But newer systems can run higher-capacity DIMMs, so I could run the same amount of RAM or more and still use some high-cap Optane DIMMs in App Direct mode as a makeshift cache drive.

My current build is still holding up pretty well for my purposes, but it might slowly be getting to be time to upgrade to something just a wee bit newer (this decade), migrate the newer hardware (GPU, NIC, TPUs) over to it, and use the old one as a seriously overpowered direct-connected NAS - all 24 drive bays are full already anyway, with only about 40% free space left. Then I could give the new box some Optane love...

Really just thinking out loud here though (or, well, out quiet since I'm typing). Started doing that way too often since the Long Covid Brain Fog hit me...
 
https://www.techpowerup.com/245256/...g-intels-optane-being-marketed-as-dram-memory

Anyone here used a consumer system that came with 4-8 GB DRAM and 16 GB of Optane?
Haven't heard good things about systems using Optane as a RAM alternative, though I haven't used one personally - just word of mouth. Especially in the slot-limited laptops they were sold in, somewhat surreptitiously, for a while, with configurations like you mentioned.

The better use, from my perspective, is in a system where you can put as much RAM as you need in a single channel, then load up the other channel with high-capacity Optane DIMMs and run them in App Direct mode, either as a secondary non-volatile memory pool or as a low-latency cache drive or similar. That's assuming your system lets you do that. You lose a memory channel, which isn't exactly perfect, but depending on your needs, it can be a worthwhile trade-off.
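For the App Direct case, here's roughly what using it from software looks like - a minimal sketch, assuming a namespace already provisioned in fsdax mode (something like ipmctl create -goal PersistentMemoryType=AppDirect, then ndctl create-namespace --mode=fsdax) and a filesystem mounted with -o dax at /mnt/pmem (path and file name are made up), using PMDK's libpmem:

```c
/* Minimal App Direct sketch using PMDK's libpmem. Build: cc demo.c -lpmem
 * Assumes an fsdax namespace mounted with -o dax at /mnt/pmem (made up). */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a 4 MiB pmem-backed file and map it into memory. */
    char *addr = pmem_map_file("/mnt/pmem/demo", 4 << 20,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char msg[] = "survives a power cycle";

    if (is_pmem) {
        /* Real pmem: store + cache-line flush, no syscall on the way. */
        pmem_memcpy_persist(addr, msg, sizeof(msg));
    } else {
        /* Fallback (e.g. testing on a normal disk): msync-based flush. */
        memcpy(addr, msg, sizeof(msg));
        pmem_msync(addr, sizeof(msg));
    }

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The appealing part is that the persist path is just ordinary stores plus cache-line flushes from userspace - no block layer, no per-write syscall.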
 
Optane is fascinating in that it actually did ship, even if it failed to be competitive against HP's memristor, which was going to be better than DRAM in every which way...

I remember listening, completely stunned, to Martin Fink's presentation 10 years ago, which included a session with a full round of industry leaders whom he told that everything from tape to DRAM was going to be replaced by memristor modules at SRAM speeds and below-Flash power requirements.

None of those guys raised a single doubt, although they had little to say...

The memristor didn't have any of the disadvantages of Optane (slower speed, still-limited endurance, higher energy consumption, Intel-only support, etc.), so I never really considered Optane, since it was obviously only second best.

And Intel refusing to admit that it was phase-change technology, because of patent issues lurking in the background, didn't help.

I'm still sad the memristor turned out to be a phony, but I'm glad I never went down the Optane rabbit hole (or Apache Pass), which I otherwise surely would have.

I can't think of another time when the public has been so cheated about the potential of an IT technology...

...except AI, perhaps?
 
My home server is too antique to use either of the DIMM types available, and only supports up to 512GB of RAM, which it has,
OMG, why??

My current build is still holding up pretty well for my purposes,
Which are?

The better use, from my perspective, is in a system where you can put as much RAM as you need in a single channel, then load up the other channel with high-capacity Optane DIMMs,
Well, if you're not actually going to use it as RAM, then I think you might as well go with the NVMe interface, due to all the syscall overhead that file I/O adds anyway. That would also leave your RAM slots free for actual DRAM.
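To put a rough number on that overhead, here's a deliberately crude micro-benchmark sketch - the path and sizes are made up, and the comparison is imperfect (page-fault costs land in the mmap loop, for instance) - contrasting a syscall per access with plain copies out of an mmap'd file:

```c
/* Crude sketch: syscall-per-read (pread) vs. memcpy from an mmap'd
 * file. Path and sizes are arbitrary. Build: cc -O2 io.c -o io */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE ((off_t)256 << 20)  /* 256 MiB test file, pre-created */
#define BLOCK     4096
#define ITERS     100000

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int fd = open("/mnt/cache/testfile", O_RDONLY);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BLOCK);
    long nblocks = FILE_SIZE / BLOCK;
    unsigned seed = 12345;

    /* 1) One syscall per 4 KiB read. */
    double t0 = now_sec();
    for (int i = 0; i < ITERS; i++) {
        off_t off = (off_t)(rand_r(&seed) % nblocks) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
    }
    double t_pread = now_sec() - t0;

    /* 2) Map once, then copy with no per-access syscall. */
    char *map = mmap(NULL, FILE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    seed = 12345;
    t0 = now_sec();
    for (int i = 0; i < ITERS; i++) {
        off_t off = (off_t)(rand_r(&seed) % nblocks) * BLOCK;
        memcpy(buf, map + off, BLOCK);
    }
    double t_mmap = now_sec() - t0;

    printf("pread: %.3fs  mmap: %.3fs\n", t_pread, t_mmap);
    munmap(map, FILE_SIZE);
    free(buf);
    close(fd);
    return 0;
}
```

Against a device with ~10 µs media latency, the per-read syscall is a meaningful fraction of each access; at PMem's few hundred nanoseconds it would dwarf the media itself, which is the whole argument for memory-mapped access.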
 
expect 300-series Optane Persistent Memory modules to be available for at least two more years, provided that Intel has enough 3DXPoint memory to build these DIMMs.
lol what?

The 300-series was killed in its crib. I tried looking for them while entertaining the idea of a Sapphire Rapids server. They died the very same month they were launched. Nobody is selling them, from what I can tell, so the last platform for PMem is effectively Ice Lake on the 200-series.
 
https://www.techpowerup.com/245256/...g-intels-optane-being-marketed-as-dram-memory

Anyone here used a consumer system that came with 4-8 GB DRAM and 16 GB of Optane?
Yes, I specifically bought a motherboard for that purpose. It worked well enough.

The biggest issue was not performance, but the fact that when it stopped working, I would need to connect my drive to another computer to get it working again. And I'm a techie - if even I have problems with it, it's a no-go for regular users.

What they needed to do was actually make it work like memory, because even the Optane Memory modules had low enough latency to use as "slow memory". So it should have been an extra slot that the system uses as a pagefile when it runs out of memory, plus the caching portion. Then they could have marketed it to many more people.

Version 2.0 would be one where you could put it in your DIMM slots for true "Slow DIMMs". Without the support of the mass market, it would never reach the dreams of being cheap enough.

The thing died partially because of bad strategy on Intel's part. They have the ability and creativity to bring new technology into the market but not enough to actually make it successful.
Well, if you're not actually going to use it as RAM, then I think you might as well go with the NVMe interface, due to all the syscall overhead that file I/O adds anyway. That would also leave your RAM slots free for actual DRAM.
Then actually use it like RAM. The Optane PMem DIMM modules were only about 2x DRAM's read latency: 200 ns, which is 500x faster than NVMe SSDs and 50x faster than Optane Memory.
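(Working those ratios backward - my arithmetic, not from the post above: 200 ns × 500 = 100 µs, which is about right for a NAND NVMe random read, and 200 ns × 50 = 10 µs, which is in the ballpark of Optane NVMe drives like the 905P. So the figures hang together.)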
Optane/3DXPoint is such a cool tech; it really is depressing that it dead-ended with a company that won't even use it.
Sorry, but reports were that even the $1000 905P drive didn't make much money for Intel. Expecting it to be much cheaper is unrealistic without Intel subsidizing it for years, and even then only coupled with brilliant strategies.

Anyway, as soon as Gelsinger became CEO, it was bound to die. Gelsinger was under the tutelage of Andy Grove, who steered the company away from memory. Gelsinger said, "I never want to be in memory", because only the #1 company, Samsung, makes any sort of money on it, and for the others it's a very difficult, low-margin business.

Yea it's good that they got out.
 
Soon to be unOptanium
The writing’s on the wall for the NVMe SSDs too. A whole bunch of SKUs got delisted from my usual haunts a couple of months back. We’re left with the 905P/P1600X, courtesy of Newegg, a few of the lower-capacity P5800X*, and the wild west that is eBay. It’s not entirely unobtainable, but you’re going to be rolling the dice with unfamiliar online vendors.

* The 3.2 TB P5800X specifically, which I’ve kept tabs on for a while, is mostly relegated to lesser-known vendors.

Well, if you're not actually going to use it as RAM, then I think you might as well go with the NVMe interface, due to all the syscall overhead that file I/O adds anyway. That would also leave your RAM slots free for actual DRAM.
lol this ^

When the file system becomes the bottleneck… that’s… :SMH:
 
What they needed to do was actually make it work like memory, because even the Optane Memory modules had low enough latency to use as "slow memory". So it should have been an extra slot that the system uses as a pagefile when it runs out of memory, plus the caching portion. Then they could have marketed it to many more people.

Version 2.0 would be one where you could put it in your DIMM slots for true "Slow DIMMs". Without the support of the mass market, it would never reach the dreams of being cheap enough.

The thing died partially because of bad strategy on Intel's part. They have the ability and creativity to bring new technology into the market but not enough to actually make it successful.
Intel spent some effort on providing userspace access to it for specialized applications, but I'm not sure if they ever got to the point of making it a proper memory tier. Back then, memory tiering was in its infancy. We're going to see that come into the mainstream, as CXL.mem becomes more common. Your CPU's on-package memory will be the fast tier and the CXL.mem devices will comprise the slow tier. Direct-connected DDR DIMMs (if supported) would be a middle tier.
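For what it's worth, Linux eventually did grow this: the kmem-dax driver can expose a PMem region as a CPU-less NUMA node, and CXL.mem devices show up the same way, so tier placement becomes ordinary NUMA policy. A minimal sketch with libnuma - node 1 standing in for the slow tier is my assumption; check numactl -H for the real topology:

```c
/* Sketch: pinning an allocation to a slow tier exposed as a NUMA node
 * (PMem via kmem-dax, or a CXL.mem device). Build: cc tier.c -lnuma
 * Node 1 is an assumption - check `numactl -H` for your topology.  */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available\n");
        return 1;
    }

    const int slow_node = 1;        /* hypothetical far/slow memory node */
    const size_t len = 64 << 20;    /* 64 MiB */

    /* Pages for this buffer come from the slow node; everything else in
     * the process keeps landing on the default (fast) node(s). */
    void *buf = numa_alloc_onnode(len, slow_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);            /* touch pages so they get placed */
    printf("placed %zu MiB on node %d\n", len >> 20, slow_node);

    numa_free(buf, len);
    return 0;
}
```

Newer kernels can also demote cold pages to such a node and promote hot ones back automatically, which is basically the tiering Optane DIMMs needed a decade earlier.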

Sorry, but reports were that even the $1000 905P drive didn't make much money for Intel. Expecting it to be much cheaper is unrealistic without Intel subsidizing it for years, and even then only coupled with brilliant strategies.
What I think killed it is that V-NAND (i.e. 3D NAND) scaled much better and faster than 3D X-Point. I think Gen 1 Optane had only 2 layers, and Gen 2 had only 4. They needed to increase the layer count way faster if they were ever going to keep up with NAND. I don't know why they didn't - maybe they ran into technical hurdles. In that case, it's hard to fault them for killing it.

Anyway, as soon as Gelsinger became CEO, it was bound to die. Gelsinger was under the tutelage of Andy Grove, who steered the company away from memory. Gelsinger said, "I never want to be in memory", because only the #1 company, Samsung, makes any sort of money on it, and for the others it's a very difficult, low-margin business.
Last I checked, there are at least 3 companies doing quite well in the business: besides Samsung, SK Hynix and Micron. Yes, the PC downturn hit them hard, but now NAND is on a rebound and HBM demand is so fierce it's cutting into their DDR5 production capacity.
 
OMG, why??


Which are?


Well, if you're not actually going to use it as RAM, then I think you might as well go with the NVMe interface, due to all the syscall overhead that file I/O adds anyway. That would also leave your RAM slots free for actual DRAM.
Why? Because the RAM was cheap, and I would much rather have too much than too little, especially when split across VMs.

Which are? Mixed - it's a Plex media server/HA server/Frigate NVR for 14 cameras/General home storage server/VM box for various random projects.

As for an NVMe version, last time I checked, they were ridiculously expensive. But Newegg just had a 1.5TB 905P on sale, so I guess I have a new cache drive. Now I'm completely out of PCIe slots, though...
 