3D XPoint's DIMM Prospects Brighten, Memory Sticks Shipping


Evil_Overlord

Distinguished
Nov 30, 2007
15
2
18,515
"Intel's speedy Optane storage devices promise to offer up to 4x the endurance and 10x lower latency than NAND-based SSDs, along with a 3x increase in endurance."

How much higher is the endurance rating compared to NAND-based SSDs?
 

InvalidError

Titan
Moderator
The industry does not really need that much of an overhaul to accommodate slower non-volatile RAM: simply let the OS use it as a dedicated swapfile; that will be good enough for most cases. For performance-critical software, you'll still want to have enough SRAM or RAM to fit the most performance-critical code and data.
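For the curious, here's roughly what that looks like on Linux, where the swapon(2) syscall takes a priority so the kernel drains to the fastest swap device first. A minimal sketch, assuming the XPoint DIMM is exposed as a block device (the /dev/pmem0 path is hypothetical, and you'd run mkswap on it first):

```c
/* Minimal sketch: enable a hypothetical XPoint block device as
 * high-priority swap, so the kernel prefers it over disk-backed swap.
 * Assumes the device was prepared with mkswap; requires root. */
#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
    int prio  = 32767;  /* highest allowed swap priority */
    int flags = SWAP_FLAG_PREFER | (prio << SWAP_FLAG_PRIO_SHIFT);

    if (swapon("/dev/pmem0", flags) != 0) {  /* hypothetical device path */
        perror("swapon");
        return 1;
    }
    puts("high-priority swap enabled on /dev/pmem0");
    return 0;
}
```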
 

takeshi7

Honorable
Nov 15, 2013
105
3
10,685
"Most new bleeding-edge technology spends its infancy in the data center and then trickles down to the consumer market"

I don't think that's always true. There are plenty of technologies that start out in the consumer market and are only accepted into the server market later, after they've been more thoroughly validated and proven reliable.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
I'm struggling to see the need for this.

In the server world, you don't turn the servers off; you want them going 24/7, so why not just use fast DRAM? I know: cost. But DRAM cost isn't that bad. The only benefit I can see is lower power for the RAM, but that should already be a small fraction of the total cost. Either you need a lot of fast RAM and this will make no difference, or you don't, and again it makes no difference. In the "don't" case, you just swap to SSD until you task switch back. If they want to just make it a faster, cheaper SSD, then go for it, but why the extra complexity of trying to slot it in between DRAM and SSDs?

In the consumer world, OK: cost. But a layer of complexity will need to be added, so is the cost vs. complexity trade-off worth it?

I mean... if it were faster than DRAM, and non-volatile, and cheaper, well then sure, bring it on. But we are talking about slower than DRAM, so there is a drawback.

I can see some glints of edge cases where it might make sense. But mainstream, I just don't see it.

And I can see some mobile possibilities, but I don't give a crap about mobile.
 

TJ Hooker

Titan
Ambassador

1) I believe XPoint DIMMs will have larger capacities than DDR4, allowing for more memory
2) Nonvolatile, so you don't have to worry about getting more expensive NVDIMMs or risking data loss in the event of power loss
3) Cost per GB
 

problematiq

Reputable
Dec 8, 2015
443
0
4,810


Though the power consumption of RAM is low, it is a big deal for data centers. Every watt of power used equates to not just the cost of the power but also the cost to remove the heat. Also, ECC RAM is not cheap. It also lowers the wear and tear on the drives, due to not having to refresh the data as often. Your point makes a lot of sense if you are dealing with a few servers, but when you have a data center with tens of thousands of servers it changes things.
 

InvalidError

Titan
Moderator

I imagine the biggest application will be memory-resident databases: with normal RAM, you need massive multi-processor servers to cram several TBs of RAM in the box. With XPoint, we'll likely see DIMMs with several TBs per slot, allowing the same or even larger memory-resident databases to fit in single-socket servers, resulting in massive space, power and hardware cost savings for that particular application.
 
With the increase in DRAM and NAND prices, if you're Intel it's not a bad thing to allow market prices to climb higher before releasing your new memory and storage products. You can release them at a higher price, still be competitive, and make more revenue. I think only the Kaby Lake motherboards are ready for them, so it gives more time for that platform to grow and get more users who can use the new XPoint products. I wonder which older platforms (if any) will be able to use the new XPoint memory and storage products.
 
Guest

Guest
3D XPoint was supposed to be 1000x faster than NAND, have 1000x the endurance, and be 10x denser. Now it's 10x "faster" and has 4x the endurance. And with 3D NAND, who knows if it's even denser. With the first sticks being 16GB and 32GB, I doubt it.

HUGE FAILURE.
 

InvalidError

Titan
Moderator

The longer-term plan is for TB-scale DIMMs.

Density-wise, SD cards are 100x smaller than DIMMs and can hold over 128GB, so there is no reason to doubt the potential density of future XPoint. Add parallel concurrency on the module itself, and it isn't hard to imagine that scaling to full DDR4 channel bandwidth. Right now, though, the stuff is basically at the proof-of-concept stage: get working hardware into hardware/software developers' hands to get things started without breaking the bank by a ridiculous margin.
 

Vartiovuori

Reputable
Jun 30, 2015
4
0
4,510
"up to 4x the endurance than NAND-based SSDs"

So not suitable for DRAM replacement in the slightest, then, regardless of speed.
 

Elrabin

Distinguished
May 1, 2013
26
2
18,540
"Im struggling to see the need for this."

Massive in-memory databases.

3D XPoint DIMMs are going to be MUCH less expensive per GB than DRAM.

You'll be able to get 6+ TB of XPoint for the cost of 3TB of DRAM.

Also, you won't need to load the database from disk to DRAM every time you load it up, only the differentials, as it's non-volatile.

Any type of job that maintains most of the same data will benefit hugely from this.
 

bit_user

Polypheme
Ambassador
Yeah, I was going to point out that Intel has been launching their new CPU architectures into the mainstream. Similarly, (until Pascal) GPUs have also been launched into the mainstream, and then moved up into the cloud as reliability improves and mainstream demand is satiated.

I wonder if Pascal is a fluke, due to pricing issues with HBM2, or if it represents a fundamental shift in this trend.
 

bit_user

Polypheme
Ambassador
This is not news. If you want to read a more thorough treatment of the subject, check out Paul's earlier article, documenting the climb-down:

http://www.tomshardware.com/reviews/3d-xpoint-guide,4747.html

Also, just because a product or technology is initially over-hyped doesn't mean it's necessarily a FAILURE, let alone a HUGE one.
 

bit_user

Polypheme
Ambassador
For the large, in-memory database use cases, swapping could make this non-competitive. Remember, a page miss involves a pair of context switches + some kernel overhead, which adds up to at least several microseconds. At that point, you might as well be using a NVMe SSD.
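Here's a back-of-the-envelope sketch of that argument: the fixed fault cost compresses XPoint's raw media advantage over NVMe. All the figures below are illustrative assumptions, not measurements:

```c
/* Illustrative only: how a fixed page-fault overhead (context switches
 * + kernel work) erodes XPoint's latency advantage over an NVMe SSD. */
#include <stdio.h>

int main(void)
{
    const double fault_us  = 5.0;   /* assumed page-fault overhead       */
    const double xpoint_us = 1.0;   /* assumed XPoint DIMM media latency */
    const double nvme_us   = 10.0;  /* assumed low-latency NVMe read     */

    printf("raw media advantage:     %.1fx\n", nvme_us / xpoint_us);
    printf("advantage when swapping: %.1fx\n",
           (fault_us + nvme_us) / (fault_us + xpoint_us));
    return 0;
}
```

With those assumed numbers, a 10x media advantage shrinks to 2.5x once the fault overhead is added.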

However, it's certainly true that some users might see a benefit from using this as a faster SSD.
 

bit_user

Polypheme
Ambassador
The power RAM takes is non-trivial, especially when you load up servers with large amounts of it.

For data centers, they're looking to optimize performance per $, but with price measured in terms of total ownership cost (purchase + operating costs). And operating costs include not just power dissipated, but then the corresponding cooling. They're very sensitive to this equation, and improvements of just a few % add up to big $$$.
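To put rough numbers on that (every figure below is an assumption, purely for illustration):

```c
/* Illustrative sketch: annual savings from shaving a few watts per
 * server across a large fleet, counting cooling overhead via PUE.
 * All values here are assumptions, not real datacenter data. */
#include <stdio.h>

int main(void)
{
    const double watts_saved_per_server = 10.0;
    const double servers                = 20000.0;
    const double pue                    = 1.5;   /* power + cooling multiplier */
    const double dollars_per_kwh        = 0.08;
    const double hours_per_year         = 24.0 * 365.0;

    double kwh_saved = watts_saved_per_server * servers * pue
                       * hours_per_year / 1000.0;
    printf("annual savings: $%.0f\n", kwh_saved * dollars_per_kwh);
    return 0;
}
```

Even a modest 10W saved per server works out to over $200K per year at that scale.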

This depends a lot on the relative power-efficiency. For SSDs, the controller & PCIe bus should add significant overhead vs. a directly-connected DIMM.
 

bit_user

Polypheme
Ambassador
I'm skeptical that existing hardware is truly ready to use these as a non-volatile dynamic memory solution. It's one thing if you're using it as an SSD; in that case, you've got a kernel & filesystem layer to help you out.

However, direct access means the hardware and software need to serialize writes in such a way that the contents won't be corrupted if power is lost part-way through. CPU memory controllers aren't traditionally designed with this consideration in mind. This is also going to be a learning curve for application developers, as it's normally just the kernel & filesystem folks who have to worry about such details.
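To make the ordering problem concrete, here's a minimal sketch, assuming a Linux system where the persistent region is exposed as a DAX-mapped file (the /mnt/pmem/record.dat path is hypothetical) and the platform guarantees that flushed cache lines actually reach persistent media:

```c
/* Minimal sketch of ordered, durable writes to a persistent mapping.
 * A plain store can sit in the CPU cache indefinitely and vanish on
 * power loss; each write must be flushed and fenced before the data
 * it guards is considered committed. */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHELINE 64

static void persist(const void *addr, size_t len)
{
    const char *p = (const char *)addr;
    for (size_t off = 0; off < len; off += CACHELINE)
        _mm_clflush(p + off);   /* write each dirty line back to media */
    _mm_sfence();               /* flushes must complete before we continue */
}

int main(void)
{
    int fd = open("/mnt/pmem/record.dat", O_RDWR);  /* hypothetical path */
    if (fd < 0) return 1;

    char *base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) return 1;

    /* Write the record first and persist it, THEN flip a commit flag and
     * persist that. If power dies between the two steps, the old flag
     * still points at consistent data. */
    memcpy(base + CACHELINE, "payload", 8);
    persist(base + CACHELINE, 8);

    base[0] = 1;                 /* commit flag */
    persist(base, 1);

    munmap(base, 4096);
    close(fd);
    return 0;
}
```

This is exactly the kind of discipline the kernel and filesystem handle for you today, and that application developers would have to learn.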
 

bit_user

Polypheme
Ambassador
Wait, wut? Who said anything about ECC RAM? I'm not even sure what your point is here. Did you mean to say "wear and tear on the RAM"? ...because I haven't heard of anyone aging out their disk cache as a substitute for using ECC memory. That could partially address the bit-fade failure case, but there are others.

FTR, I doubt datacenters are using any non-ECC RAM. Please present evidence to the contrary, if you have it.
 

problematiq

Reputable
Dec 8, 2015
443
0
4,810


I'm not sure why you are confused, but hopefully I can explain. ECC RAM is more expensive than regular DRAM, and XPoint is supposed to be a cheaper alternative. When you use DRAM, over time it loses information. To re-write that information to the RAM you have to pull it from a non-volatile source, e.g. HDD, SAS, SSD, or cache. The information you want is most likely NOT in the drive's 64-256MB cache, and if it's an SSD then you are not working with a cache at all. Every time you read/write to a drive it puts wear on it. Even with ECC RAM you STILL have to re-write the data to the RAM. ECC stands for "error-correcting code"; its job is to ensure the reliability of the data, not the availability.
 

TJ Hooker

Titan
Ambassador

...That's not how it works. DRAM (that is working properly, anyway) does not lose information over time. It would, due to leakage in the cells, but it is continuously being refreshed, and that refresh does not involve reading from some other, non-volatile source. There isn't a copy of everything in memory being stored on your OS drive or something.

Also, reads have little to no effect on SSD endurance.
 

problematiq

Reputable
Dec 8, 2015
443
0
4,810


You are correct, RAM just refreshes the voltage in the capacitors in DRAM. Most servers that use databases (that's most servers nowadays) use IMDBs, though. You do have a copy of the DB on disk, and due to data corruption/changes some parts have to be re-written to the memory and the disk. I suspect with the switch to NVRAM we will see more if not all DBs switch to IMDBs. YAY!

Edit: added that it's not just data corruption but also any database changes that require writing/reading to the disk.
 