Intel Demonstrates 3D XPoint Optane SSD At Computex, Kaby Lake Details Emerge

So, Intel claims 1000 times the performance, then attempts to impress us with a demonstration that displays 3.7 times the performance? That seems to be orders of magnitude below the claimed figure.
 


It is just a demo, of course, of one particular aspect. We have to consider that this will eventually serve as both storage and RAM; today, the best PCIe or NVMe SSDs barely push 2GB/s, while the best DDR4 setups reach about 54GB/s.

Of course, we also have to consider that the quoted speeds of current NAND are measured at a specific queue depth and for sequential transfers.
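To put those two figures side by side, here's a quick back-of-the-envelope calculation using the numbers quoted above (which are themselves rough):

```python
# Rough gap between today's fastest storage and RAM, using the figures above.
ssd_gbps = 2.0     # ~best NVMe SSD sequential throughput
ddr4_gbps = 54.0   # ~best DDR4 setup

print(ddr4_gbps / ssd_gbps)  # ~27x: the bus-level gap XPoint would need to close
```

So even a perfect storage-class memory could only show about a 27x end-to-end bandwidth win against today's best SSDs, which is part of why the "1000x" device-level claim can't show up as 1000x in a system demo.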

Of course I don't expect a full 1000x, but I am excited: I think it will be awesome to have storage that acts like RAM, allowing the entire OS to literally sit in memory and thus load almost instantly.
 
That is the dream, innit? I've toyed around with VMs loaded onto ramdisks in the past. Nothing like booting an OS in under 3 seconds.
 
Let us hope that Intel adheres to the industry standard NVDIMM spec. I'd hate to see yet another proprietary pseudo-standard, like XMP.

I'll buy into this tech when it's standard and well supported by OSes and software. 'Til then, I think good ol' DRAM + NVMe SSDs will suit me just fine.
 
PCIe lanes?

DDR4 in dual channel has ~34GB/s of bandwidth... that's the equivalent of 34 PCIe 3.0 lanes...

...and in quad channel, 68GB/s, which is beyond the CPU's 40-lane spec...

So why burn PCIe lanes on this? It's stupid... make a dedicated link.
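For what it's worth, the lane math above roughly checks out. A quick sketch, assuming DDR4-2133 for the channel figures and ~985 MB/s of usable bandwidth per PCIe 3.0 lane per direction:

```python
# Back-of-the-envelope: DDR4 channel bandwidth expressed in PCIe 3.0 lanes.
PCIE3_LANE_GBPS = 0.985              # ~usable GB/s per PCIe 3.0 lane, per direction

def ddr4_gbps(channels, mts=2133):
    return channels * 8 * mts / 1000  # channels x 8 bytes wide x mega-transfers/s

for ch in (2, 4):
    bw = ddr4_gbps(ch)
    print(f"{ch}-channel: {bw:.0f} GB/s ~= {bw / PCIE3_LANE_GBPS:.0f} PCIe 3.0 lanes")
# 2-channel: 34 GB/s ~= 35 lanes; 4-channel: 68 GB/s ~= 69 lanes
```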
 
First, tests involving real-world use cases are never as dramatic as benchmarks crafted specifically to highlight a particular performance advantage.

Secondly, with a little bit of thought, you can perhaps see why it might be difficult to showcase quite what an improvement their new technology offers. Basically, existing systems are architected around the performance characteristics of recent & legacy technology. When disks were slow and mechanical, they were connected via slow, indirect links. As SSDs got faster, they needed to be pulled in closer and closer to the CPU, and connected with ever faster links, in order to provide the full benefits of which they were capable. The system architecture was continually playing catch-up. And as the interconnects improved, the storage devices could co-evolve to expose more of the performance potential of the underlying hardware.

But there's another relevant aspect of system architecture and performance, which is concerning various optimizations and application design principles & practices. Because disks are traditionally many orders of magnitude slower than RAM, operating systems gained increasingly sophisticated disk caching and buffering strategies, employing comparatively fast system memory to mask the long latency and bandwidth limitations of disks. Even at the application layer, programs and APIs have been forced to cater to the performance bottlenecks and limitations of legacy storage. Basically, software developers would typically spend a lot of time and energy optimizing their programs' I/O access patterns to perform reasonably well with existing storage technologies & operating systems.
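As a toy illustration of that last point, here is the kind of application-level pattern being described: wrap slow block reads in a RAM cache so repeated accesses never touch the disk. This is a minimal sketch, not anyone's actual code; the 4 KiB block size and cache size are arbitrary:

```python
# Minimal sketch of application-level caching over slow block storage.
from functools import lru_cache

BLOCK = 4096  # assumed block size

@lru_cache(maxsize=8192)          # keep up to ~32 MiB of hot blocks in RAM
def read_block(path: str, index: int) -> bytes:
    with open(path, "rb") as f:
        f.seek(index * BLOCK)
        return f.read(BLOCK)      # first access pays disk latency; repeats don't
```

With storage nearly as fast as RAM, layers like this become pure overhead, which is exactly why legacy software can't show off the new hardware.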

So, it's really not surprising that, if someone manages to dramatically reduce or eliminate the performance bottlenecks around some part of the system, the relative improvement won't be readily seen in conventional/legacy workloads. This doesn't mean the improvement isn't worthwhile, it just means that applications need to catch up and evolve, in order to fully exploit the performance potential of the new technology.

Maybe because:

    • Optane SSDs are a new product that's shipping now (or imminently), which they want to sell.
    • Kaby Lake & the associated form of 3D XPoint isn't presently demo-ready.

Either way, as Kaby Lake nears launch, I'm sure you'll see plenty of demos involving their new 3D XPoint modules.
 
I don't understand this part: "3D XPoint can be used as a normal block-addressable storage device with DIMM, PCIe, SAS or SATA devices, but users can only address it as memory (bit-addressable) if it rides the memory bus. 3D XPoint can be used as a block storage device when it is on the memory slot as well [...]"
It seems to say that XPoint can only be bit-addressable when in a DIMM, but then claims it can be used as a block device there too (unless I missed some 'memory bus over SATA' or 'memory bus over PCIe' tech somewhere in between).

How is it, then?
 
An NVDIMM, which in this case is utilizing 3D XPoint, can operate as either storage or memory. Some NVDIMMs allow you to do both simultaneously: you simply partition separate areas of the device for each task.
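To make the two access models concrete, here is a hedged sketch of what they look like on Linux. The paths are hypothetical, and the byte-addressable view assumes a DAX-capable filesystem mounted on the persistent-memory region:

```python
import mmap
import os

# Block-device view: an NVDIMM region exposed as /dev/pmem0 (hypothetical path)
# is read in whole blocks, just like any other disk.
with open("/dev/pmem0", "rb") as dev:
    block = dev.read(4096)

# Memory view: a file on a DAX-mounted filesystem (hypothetical path) can be
# mapped and accessed with byte granularity; stores go straight to the media.
fd = os.open("/mnt/pmem/data.bin", os.O_RDWR)
buf = mmap.mmap(fd, 4096)
buf[0:8] = b"\x00" * 8   # an 8-byte store, no block I/O round trip
buf.close()
os.close(fd)
```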
 


Actually I'd prefer a CPU with built-in HBM2 memory over using NVDIMMs.

It has crazy bandwidth and would reduce motherboard size dramatically... it would let us fit 2 CPUs in an ITX form factor and 4 sockets in micro-ATX... and maybe we'd even get a smaller form factor that's just the size of the CPU + slot (with the chipset in the CPU as well).

Imagine each CPU coming with 32GB of RAM, no need for 4 DIMMs at all! Tiny PC!

Even in servers and workstations... 4 CPUs without 16 (4x4) DIMMs! Very compact!
 
So, Intel claims 1000 times the performance, then attempts to impress us with a demonstration that displays 3.7 times the performance? That seems to be orders of magnitude below the claimed figure.

I/O was not the only thing being exercised in the demo. It was a fluid dynamics simulation, so a lot of CPU time was required. That the run got 3.7 times faster from the I/O improvement alone is really impressive!
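That reading fits Amdahl's law: only the I/O share of the runtime benefits from faster storage, so the CPU-bound remainder caps the overall speedup. A quick sketch, where the ~73% I/O fraction is an assumed figure chosen to reproduce the demo's number:

```python
# Amdahl's law: only the I/O fraction of the run benefits from faster storage.
def overall_speedup(io_fraction, io_speedup):
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

print(overall_speedup(0.73, 1000))   # ~3.7x, matching the demo
print(overall_speedup(0.73, 1e9))    # still ~3.7x even with 'infinite' I/O speedup
```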
 
Biggest benefit: no more texture pop-in in games. All textures will already be in memory, so there's no need to copy everything to RAM first. Don't look at transfer speeds; they become pretty irrelevant as soon as this becomes system RAM.
The 3.7x improvement was because the CPU still has to do the actual rendering, so it's really a CPU benchmark/bottleneck. It just shows that your CPU will be able to work at much higher efficiency, because it won't have to go through the whole file-system routine of copying chunks of files to RAM before working on them.
 
I would like to see the simulation benchmark compared with the same machine running it on a ramdisk rather than an SSD. This tech is not going to be about how much faster it is than flash, but about how much slower it is than DDR4 RAM. It will replace a significant amount of standard volatile RAM as a new layer that reduces some of the caching bottleneck, especially at the OS level. The only bus on a standard PC today designed for the low latency and high throughput they are promising is the RAM subsystem. Proprietary or not, until we see this stuff running in a DIMM slot we are not going to get real performance metrics. I can understand why Intel and Micron are keeping the demos pretty basic: if they just slap this stuff into a new SSD on the PCIe bus, or worse yet SATA, the performance is going to be only marginally better. You can tack some RAM onto a USB stick, but I wouldn't expect much better performance than flash there, even though the RAM itself really is well over 1500x faster.
I see this stuff as a third memory type in the computer. As the technology matures, it could eventually merge RAM and SSD into one. But slower non-volatile drive tech will remain cheaper for the foreseeable future, and volatile RAM will stay faster with better write endurance, so I don't expect it to happen all at once.

My big question with NV memory though is how do you reboot the stuff when your system locks up?
 
So is the intent to have XPoint DIMM slots in addition to the existing DIMM slots, or instead of them (either/or voltage supply, or whatnot)?
Imagine a compute workstation with 8 slots that can be populated with up to, say, 256GB of RAM, running something really intensive like a finite element simulation. I read that XPoint comes in much higher densities, so the same slots might fit say 2TB, but replacing all of it with slower addressable memory might not be an advantage... you really want to keep your 256GB of addressable DDR4 but also have a huge RAMDISK alongside it, using XPoint to store the simulation results at each stage, versus schlepping it all from DDR to disk (even a PCIe-based SSD). HBM2 alongside the processor as additional (level 4?) cache is also very intriguing for queuing up HT instruction paths faster.
 

Can you name a PC interface capable of 1000x 2GB/s?

With NAND, you need over a hundred NAND dies (dozens of 2/4/8-die packages) spread across multiple controller channels to reach 2GB/s. With 3D XPoint, a single die may be able to saturate PCIe 3.0 x16.
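Rough numbers behind that die-count claim; the per-die throughput here is an assumed 2016-era figure, for illustration only:

```python
# Sanity check: how many NAND dies does 2 GB/s imply?
TARGET_MBPS = 2000
MBPS_PER_DIE = 20        # assumed sustained throughput per NAND die

print(TARGET_MBPS / MBPS_PER_DIE)   # ~100 dies, i.e. dozens of multi-die packages
```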
 

If you run a simulation with a 1TB working data set, having 256GB of RAM may not do you much good, since the CPU will end up waiting for data over PCIe/SATA/whatever most of the time anyway. Putting in enough 3D XPoint to fit the whole input and output data sets eliminates this altogether, and whatever RAM is available can be set aside for working variables, cache, code, the OS, etc.
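A toy average-access-cost model shows why. All latency figures below are illustrative, and uniformly random access across the data set is assumed:

```python
# 1 TB working set, 256 GB of RAM: most accesses miss RAM and pay SSD latency.
RAM_NS, SSD_NS = 100, 100_000      # ~100 ns DRAM vs ~100 us NVMe read (assumed)

hit_rate = 256 / 1024               # fraction of the data set resident in RAM
avg_ns = hit_rate * RAM_NS + (1 - hit_rate) * SSD_NS
print(avg_ns)                       # ~75,000 ns: dominated by storage misses
```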
 

The memory controller is inside the CPU. Unless Intel arbitrarily locks X-Point compatibility to specific chipsets, as it does for overclocking and multi-GPU, X-Point support should be only a BIOS update away.
 


Not even close to the same idea, though. HBM is a memory replacement; more specifically, it is designed as a VRAM replacement.

NVDIMMs are meant to replace storage and system RAM with a much faster option.

If anything, CPUs will probably go with HMC, since it is more geared toward replacing system RAM.
 
Hey, Intel, are you reading this? You just laid off thousands of employees, and now you want to lock up a proprietary architecture, hoping future users will be impressed enough, and have money enough, to switch to entirely new motherboards (again)? Aren't you the guys who invented PCI Express, with its modularity and expandability? Why you haven't already started manufacturing SATA-, SAS- and NVMe-compatible 2.5" SSDs with Optane is TOTALLY BEYOND ME! It's no wonder that journalists, e.g. at Forbes, blame the layoffs on poor management.

By now, you could have had millions of happy Optane users, hungry for more of the same in a variety of hardware settings. Instead, we get ridiculous re-runs of unrealistic projections of A THOUSAND TIMES FASTER. Heck, I had 4.0 GB/s of upstream bandwidth FIVE YEARS ago with a PCIe 2.0 chipset and an x8 HighPoint RocketRAID 2720SGL, the same as the current upstream bandwidth of Intel's DMI 3.0 link. But I guess you will refuse to read this, refuse to listen to what we want, and stubbornly persist in telling us what we should be buying, to keep your stockholders happy.
 

Simple: PCIe is already orders of magnitude too slow in both latency and bandwidth. There wouldn't be much performance gain in addressing X-Point memory over SATA/SAS/NVMe/etc. due to the interfaces already being bottlenecks for NAND.
 
Spot on MRFS!

So when are these Optane SSDs going to be available, and at what outrageous price? There are only a few NVMe SSDs and M.2 drives available now, and NONE of them reach the promised "32Gbps data-transfer speeds".

We need NVMe 2.0

Intel has been failing on its promises:

March 19, 2014: Intel to renew commitment to desktop PCs with a slew of new CPUs
http://techreport.com/review/26189/intel-to-renew-commitment-to-desktop-pcs-with-a-slew-of-new-cpus

"I spoke briefly with [Intel's new GM and VP of its Desktop Client Platforms Group, Lisa Graff] at CES about her plans, and she observed that high-end desktop processor sales had been fairly flat in recent years—but when she looked at the performance numbers, the reason was clear. Intel hasn't given enthusiasts much of a reason to upgrade since Sandy Bridge."

And whatever happened to the promised PCIe 4.0 on Skylake from 3 years ago? It doesn't look like Kaby Lake will have it either. Why does Intel consistently LIE about new stuff that, in reality, it had no intention of ever shipping?

July 3, 2013: Report: Intel Skylake to Have PCIe 4.0, DDR4, SATA Express
http://www.tomshardware.com/news/Skylake-Intel-DDR4-PCIe-SATAe,23349.html

200-Series Union Point Motherboards
http://www.tomshardware.com/forum/id-2983311/200-series-union-point-motherboards.html
 

Re-read that slide: it only says Xeon. There is no PCIe 4.0 spec yet; the PCI-SIG is still working on it, and it is not expected to be final until early 2017, which may mean no PCIe 4.0 for mainstream Cannonlake either.

The small lettering at the bottom also says: "Potential future options, subject to change without notice." Since Intel decided to launch Broadwell-E/EP this year, Skylake-E/EP next year may yet have PCIe 4.0, along with that rumored beastly LGA3647 socket.
 
> There wouldn't be much performance gain in addressing X-Point memory over SATA/SAS/NVMe/etc.

You're begging the question: all by itself, non-volatility is extremely important; plus, when you add the capacity increases, what's not to like about huge RAID arrays of Optane? Furthermore, PCIe 4.0 should have a 16GT/s clock. As advertised, Optane has multiple applications, not just for DIMM slots.

 