Intel Introduces Revolutionary 3D XPoint Technology: Memory And Storage Combined

They have not fixed their website yet; why would they fix it now?
Perhaps when this beautiful tech is out, prices will go down and Tom's will finally be able to afford the luxury of storing higher-resolution images...
 

Micron may have started the HMC train but Intel is an active development member of the HMC consortium. I think they are very much aware of HMC's existence and you can consider Intel's share of any licensing fees already paid in full.
http://www.hybridmemorycube.org/about.html
(Edit: Intel's name is no longer on the list, but they are still there as Altera. In any case, Micron and Intel are working pretty closely together on HMC, since one of the first major chips to use HMC will be the next Xeon Phi: http://www.micron.com/products/hybrid-memory-cube/high-performance-on-package-memory)

HMC is about using DRAM in a new way (expose individual internal banks to an external logic chip for higher bandwidth and access concurrency) while 3D Xpoint is a completely different memory cell structure.
 


Agreed, it is important to remember that HMC stores bits as electrons, whereas 3D XPoint does not, which is a fundamental difference between the two memories.
 
Hopefully when this thing is out Toms will not have problems with image resolution. I can't see anything of what's written on that slide.
Been reading Tom's for over 10 years (good lord... has it been over 15 years?!?). They have not fixed their website yet, why would they fix it now?

Thank you for your 15 years of patience 🙂 We are trying to get things like this fixed, but our dev resources are being applied to a few bigger (company-wide) projects that will eventually help us tackle issues and roll out features uniformly. Meanwhile, this was just an issue with our image uploader. Paul (the author) worked with it, and I think the diagram in question is much clearer now (in other words, you can even read that little text).

- Fritz Nelson, Editor-in-chief
 
They probably won't license the production capabilities, but I expect the "end products" they produce will be memory chips for cell phones, etc., and not the cell phones themselves, unless Intel plans on getting into that market. Their patent is only good for 20 years and HP is still working on the memristor thing, so this money-making venture is still somewhat time-sensitive.
 
Did they have to add the 3D to the name? 3D Xpoint sounds more like a no-benefit marketing name rather than a new memory technology. I can see a car maker talking about 3D Xpoint handling in the new 5400 XCL coupe or a pellet gun with amazing 3D Xpoint sights.
 
That sounds really cool. However, I'd like to see some hard numbers rather than "1000X the speed of ..." stuff, since that gives it a huge range of anywhere from 200 GB/s to 1-3 TB/s (PCIe SSDs).
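Just to show how wide that range really is, here's a quick back-of-envelope (the baseline throughput figures below are my own rough assumptions for 2015-era drives, not Intel's numbers):

```python
# "1000x the speed of NAND" depends entirely on which NAND baseline you pick.
# Baseline figures are rough assumptions, not from Intel's announcement.
baselines_gb_per_s = {
    "SATA SSD (~0.2 GB/s sustained)": 0.2,
    "PCIe/NVMe SSD (~2 GB/s)": 2.0,
}
for name, gb_per_s in baselines_gb_per_s.items():
    print(f"1000x {name}: ~{gb_per_s * 1000:,.0f} GB/s")
# -> ~200 GB/s vs ~2,000 GB/s, an order of magnitude apart
```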
 

When Intel says that NVMe is lagging 10,000X behind CPUs, Intel is comparing bandwidth or latency against L1 cache. Comparing the aggregate bandwidth and latency of internal CPU structures against a board-to-board interface like NVMe is not exactly fair.
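For rough context, plugging in typical order-of-magnitude latencies (my own assumed figures for illustration, not numbers from Intel's slides) shows how lopsided that comparison is:

```python
# Rough, order-of-magnitude access latencies; these are assumptions for
# illustration, not figures from Intel's 3D XPoint announcement.
l1_cache_ns = 1           # ~1 ns for an L1 cache hit
dram_ns = 100             # ~100 ns for a DRAM access
nvme_read_ns = 10_000     # ~10 us for a fast NVMe NAND read at low queue depth

print(f"NVMe NAND vs L1 cache: ~{nvme_read_ns / l1_cache_ns:,.0f}x slower")
print(f"NVMe NAND vs DRAM:     ~{nvme_read_ns / dram_ns:,.0f}x slower")
# -> the 10,000x figure only appears when you compare against L1 cache
```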
 
My question is this... if we're talking 1000x NAND performance and latency measured in "mere" nanoseconds, aren't we into the territory of RAM here? And if so, why are we talking about connecting this over the (vastly slower and higher-latency) PCIe interface? How will a tech that sits "between NAND and DRAM in use cases" - but doesn't supplant either - actually revolutionise things?

Now if we are into RAM territory, and these things have 128 Gb dies (= 16 GB per die) as the article states... then 8 dies on a stick would get you a 128 GB stick of non-volatile RAM, connected via much faster and lower-latency memory controllers. Now that would be a game changer!
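For what it's worth, the capacity arithmetic there checks out; a quick sketch (the 128 Gb per die figure is from the article, the 8-die module is just my assumption):

```python
# Back-of-envelope capacity for a hypothetical 3D XPoint memory module
# built from the 128 Gb dies the article mentions.
die_gigabits = 128
die_gigabytes = die_gigabits / 8      # 8 bits per byte -> 16 GB per die
dies_per_module = 8                   # assumed, DIMM-style module

module_gigabytes = die_gigabytes * dies_per_module
print(f"{die_gigabytes:.0f} GB per die, {module_gigabytes:.0f} GB per 8-die module")
# -> 16 GB per die, 128 GB per 8-die module
```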

But ultra fast solid-state storage over NVMe which is (according to article) priced between DRAM and NAND... that's not getting me quite so excited. We have crazy-fast drives like the Intel 750 series and they don't benefit "normal" computer users over an SSD in any measurable way. I realise this tech enables vastly faster drives again... but they're not the bottleneck anyway.

Is this not a solution in search of a problem? Maybe I'm lacking imagination, but I'm struggling to see the revolutionary effect of ultra-fast storage (like hyper-speed SSDs). I sincerely hope I'm wrong - and I'd love to hear other perspectives on this!

I look at this as a cache drive that doesn't wipe on reset. Newer games are going for streaming textures more and more, and on slower media, even SSDs, you notice the pop-in unless you have those SSDs in RAID.

Personally, my thought is this: I could go with a lower-end motherboard that has 2 PCIe slots but only dual-channel memory, get 8 GB of RAM, and then drop in 64-128 GB of this thing.

Cheaper motherboard that is still good,
cheaper RAM due to the lower amount,
but when you use more than 8 GB, you don't run into issues, because this would still be more than fast enough for normal computer use not to see much lag, if any (compared to a page file on an SSD or, worse, an HDD).

Depending on how this performs or is implemented, you may not need system RAM at all. Most people never need that speed, so 32 GB+ of this would probably be more than they would need or use anyway, giving a somewhat cheap, high amount of "RAM", and in a laptop-like environment, the ability to 100% shut down and restart probably as fast as coming out of hibernate.
 


Thanks for the thoughts. Interesting idea about treating this as something like a second tier of memory, much like we have multiple layers of cache on the CPU with increasing capacity at decreasing speeds. I still question what the impact of the relatively slow and (more significantly) high-latency PCIe interface will be. I've done a bit of Googling and can't seem to find simple latency figures for the PCIe interface, but I'm guessing they're order(s) of magnitude greater than DRAM, which is going to be an issue as soon as anything tries to treat this as "RAM". That means we'll be into the territory of needing software that's aware of and well optimised to utilise this tech, which is always a massive issue. Still, it'll be interesting to see where this goes.
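One simple way software could already treat a device like this as a second, slower tier of memory is plain memory mapping; here's a minimal sketch, assuming the device just shows up as a block device with a filesystem on it (the mount path and sizes are hypothetical):

```python
import mmap
import os

# Minimal sketch: memory-map a file on a (hypothetically) XPoint-backed
# filesystem and treat it as a large, slower tier of byte-addressable memory.
# Every page fault still crosses the PCIe/NVMe stack, which is exactly the
# latency concern raised above.
PATH = "/mnt/xpoint/tier2.bin"   # hypothetical mount point for the device
SIZE = 1 * 1024**3               # 1 GiB for the sketch; a real tier would be far larger

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

buf[0:8] = b"hot data"           # ordinary load/store-style access through the mapping
print(bytes(buf[0:8]))

buf.flush()                      # persistence still needs explicit flushes
buf.close()
os.close(fd)
```

Even then, something has to decide which data lives in DRAM and which lives in the slower tier, which is exactly the "software that's aware of and well optimised" problem.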
 
I think for the average person just using an OS, you will get some boot-up time improvements. I hadn't realized that Intel had already added some instructions for NVM. I think it will have a big impact on robotics, AI and factory automation, possibly to a disruptive level. At the least, it will further alter the employment situation, with both winners and losers.
 
We've seen many technologies that could replace DRAM in some way. They just haven't been used for anything other than hype. Z-RAM, T-RAM, MRAM, P-RAM, the list goes on. In most cases, there are multiple concepts needed to make them work commercially, and they are owned by different companies who refuse to work together.

Point is, I'll believe something will come of it when I see it in products, because this is FAR from the first time we've been teased with another memory technology that is orders of magnitude better than currently used technologies.
 
There is a factory. They have produced wafers. I presume that at least some of the chips cut from the wafers fully work. I think they would be in serious trouble with the stock market regulators if they could not back up their claims about the product. It is not really a big deal for ordinary PC users (though extreme gamers might benefit). I think it is a big deal for high-performance computing and possibly for recall-based artificial intelligence and robotics. It is not an indispensable technology, but I think actual availability makes exploration in certain research areas more likely.
 


Many of the technologies I mentioned in my previous post are in fact made, and used, in limited situations. PRAM/PCM and MRAM are used at small densities (made on old fab processes like 90nm). They just never went mainstream, and really, the only one in decent usage is probably MRAM: it is used as a replacement for SRAM chips, especially in electronics that need non-volatile memory that is resistant to tough environments.

MRAM could be used in computers today if the several companies using it worked together. Thyristor memory and ZRAM are also viable replacements for DRAM (TRAM is much faster at similar density and ZRAM is much denser at similar speed); however, they never caught on for one reason or another. AMD supposedly even experimented with them (they bought a license), but nothing has come of it yet.

Then there's IBM and others, who every year or two come up with an amazing new memory technology they say they've got ready for use, and we never see it again.
 
They were talking about a latency of tens of nanoseconds. Less than 20 ns is a green light to go for many high-performance computing applications, and might be a paradigm changer. 20 to 50 ns is an amber light. Well, we don't live in an ideal world, so I guess it is an amber light.
 
Thyristor memory sounds like it would be very robust. I love thyristors. If you want to feel negative about the new product, then one thing they did not talk about was whether it can sustain infinite reads without getting bit flips. The situation would still be manageable if that were a possibility, but it sure would make things messy if you had to do a refresh cycle on the memory every 1000 reads or whatever.
 