Intel Demonstrates 3D XPoint Optane SSD At Computex, Kaby Lake Details Emerge


Brian_R170

Honorable
Jun 24, 2014
288
2
10,785
Hey, Intel, are you reading this? You just laid off thousands of employees, and now you want to lock up a proprietary architecture, hoping future users will be impressed enough, and have money enough, to switch to entirely new motherboards (again)? Aren't you the guys who invented PCI Express, with its modularity and expandability? Why you haven't already started manufacturing SATA-, SAS- and NVMe-compatible 2.5" SSDs with Optane is TOTALLY BEYOND ME! It's no wonder that journalists (e.g. Forbes/Business) blame the layoffs on poor management. By now, you could have had millions of happy Optane users, hungry for more of the same in a variety of different hardware settings. But, instead, we have to see ridiculous re-runs of unrealistic projections of a THOUSAND TIMES FASTER. Heck, I had upstream bandwidth of 4.0 GB/s 5 YEARS ago with a PCIe 2.0 chipset and an x8 Highpoint RocketRAID 2720SGL -- the SAME AS the current upstream bandwidth of the Intel DMI 3.0 link. But, I guess you will refuse to read this, refuse to listen to WHAT WE WANT, and stubbornly persist in telling us what we should be buying -- to keep your stockholders happy.

Using 3D XPoint as a replacement for NAND in SSDs is a waste of 3D XPoint's potential. They'd just end up with another me-too NVMe product that is limited by the interface and the protocol, with endurance as the only advantage. If they're smart, they'll use 3D NAND for SSDs and put 3D XPoint on some interface that can actually take advantage of it. But they also need to get driver and operating-system software ready to exploit it, and that is what takes the most time and effort.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
And, speaking of "leadership" (AHEM), how hard would it have been for a gigantic manufacturer like Intel to promote variable clock speeds for SATA and SAS controllers, and also to support an option for the 128b/130b-style "jumbo frame" (which the USB 3.1 spec adopted, as 128b/132b, in a heartbeat)?

We suggested both in a paper published at the Storage Developer Conference in 2012 (4 YEARS AGO):
http://supremelaw.org/patents/SDC/SATA-IV.Presentation.pdf
At least somebody at USB must have noticed.

Intel also surely notices that all major NAND flash SATA SSDs have hit the glass ceiling imposed by the 6G clock rate and the 8b/10b legacy frame (read: MAX HEADROOM of 600 MB/second). And neither of those two factors has seen any change for SEVERAL YEARS now. Can you say S-T-A-G-N-A-N-T?

Yes, we want the option to OVERCLOCK storage, just like the overclocking of CPUs and DRAM that drove progress with both many years ago:
http://supremelaw.org/patents/SDC/Overclocking.Storage.Subsystems.Version.3.pdf (6 YEARS AGO)

I'm very confident when I predict that marketeers and MSRP deciders will have a field day with variable channel bandwidths.
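For anyone who wants to sanity-check those headroom figures, here is a minimal Python sketch that converts a raw serial clock plus its frame encoding into usable bandwidth; the line rates and encodings are just the ones discussed above, nothing vendor-specific:

```python
def usable_bandwidth_gb_per_s(line_rate_gbps, payload_bits, total_bits):
    """Convert a raw serial line rate into usable payload bandwidth (GB/s)."""
    return line_rate_gbps * payload_bits / total_bits / 8

# SATA 6G with the legacy 8b/10b frame: the ~600 MB/s "glass ceiling"
print(usable_bandwidth_gb_per_s(6, 8, 10))       # 0.6 GB/s
# Hypothetical 8G SATA with a 128b/130b jumbo frame (the proposal above)
print(usable_bandwidth_gb_per_s(8, 128, 130))    # ~0.985 GB/s
# USB 3.1 Gen 2: 10G clock with 128b/132b
print(usable_bandwidth_gb_per_s(10, 128, 132))   # ~1.21 GB/s
```

The 6G/8b10b line reproduces the ~600 MB/s ceiling, and the 8G + 128b/130b line shows roughly the headroom an "8G SATA" synchronized with PCIe 3.0 would have had.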


 

bit_user

Polypheme
Ambassador
I'm sure we'll soon have HBM2/HMC + NVDIMMs. The former is incredibly fast, but expensive. So, the latter will be needed both to replace conventional SSDs as storage, and for swap space when the HBM2/HMC is exhausted. Disks might still be necessary for large-capacity data storage, perhaps due to cost per GB and because conventional storage architectures do scale up a lot better than DIMM slots.

See above. That's where we're headed. But it's a good question: if you just do a straight swap for DDR4, in a conventional PC w/ current software & OS, what's the performance impact of using this stuff as RAM?

That's easily resolved with some sort of hard-reset button. You can then trigger a non-maskable interrupt to wipe the volatile portion of RAM and re-load the OS from the boot sector.

And when software has more time to adapt to using NVRAM, the same approach can scale down to a more granular level, by telling the OS and individual apps to reset themselves to a known state.
 

bit_user

Polypheme
Ambassador
Isn't SATA officially finished? As in no more revs. I thought I read they decided to stop extending it, years ago, in favor of PCIe. At this point, SATA is only for HDDs in lower-end systems (high-end uses SAS, which has been extended to 12 Gbps).
 

bit_user

Polypheme
Ambassador
Since you down-voted me, let me clarify that the article doesn't state whether Intel's DIMMs will conform to the NVDIMM standard. We can hope.
 


Again, PCIe 4.0 has not been standardized. Sure, that is probably what it should be, but it needs to be standardized with more than just the speed.

And Intel is just showing off one aspect of Optane, who says this is the only use they have planned for it?



Because Intel doesn't have the final say? Because SATA is pretty much going the way of the dodo, like PATA did, due to superior interfaces like M.2 and now NVDIMMs?

And did you miss SATAe? What about M.2?

Again, SATA is not going to be around for high-performance drives much longer.
 

jasonf2

Distinguished
One real application for this stuff will be in the mobile space. Everything there is dynamic enough that building a totally homogeneous memory structure into the phone/tablet SoC and OS could pretty quickly yield major performance gains and power reductions. With the low memory density normally found in phones in the first place, this stuff could take off pretty quickly. Even if it is a little slower, the complete removal of RAM and its power draw would allow the device to idle in a near-off state.
 

InvalidError

Titan
Moderator

X-Point won't eliminate the need for RAM: the IGP and CPU will rewrite some areas of memory thousands of times per second while running applications and games, which could wear the cells out rather quickly. You would still want to have enough high-speed, low-latency RAM to hold all the immediate working data set to avoid this sort of greatly accelerated wear.

It could become a hybrid kind of deal like what is found in some phones and tablets though: 16GB of RAM combined with 64GB of X-Point on a single DIMM.
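To see why hot, frequently rewritten regions are the worry, here is a rough back-of-the-envelope sketch; the rewrite rate comes from the post above, and the endurance figures are assumptions for illustration only, since Intel has not published per-cell cycle counts for Optane:

```python
# Back-of-the-envelope: how long a single hot cell/line survives if it is
# rewritten continuously and nothing spreads the writes around.
rewrites_per_second = 5_000   # "thousands of times per second", per the post above

# Endurance figures below are assumptions for illustration, not quoted specs.
endurance_assumptions = {
    "typical TLC NAND (~3K cycles)":        3_000,
    "hypothetical 3D XPoint (10^7 cycles)": 10_000_000,
    "hypothetical 3D XPoint (10^9 cycles)": 1_000_000_000,
}

for label, cycles in endurance_assumptions.items():
    seconds = cycles / rewrites_per_second
    print(f"{label}: ~{seconds:,.0f} s (~{seconds / 3600:.1f} h) until worn out")
```

Even with optimistic endurance assumptions, a constantly rewritten hot spot would be exhausted within hours to days absent wear leveling, which is why keeping a DRAM tier (or the hybrid DIMM described above) in front still makes sense.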
 

jasonf2

Distinguished
Agreed on the write wear. All I see there, though, is a need for on-die memory with enough capacity to handle the necessary dynamic-variable workload. If this stuff is truly non-volatile, most of the movement from storage-based memory into RAM for caching would be eliminated. I had to rethink memory somewhat when non-volatility came in, because so much of the data movement now is from drive to RAM to CPU and back down again to keep from bottlenecking the CPU. If it is a direct read from homogeneous memory, we only need to write what has changed, not copy and move everything around for performance reasons.

 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
> officially finished

Who are those "officials", please?

> SATA is only for HDDs

There sure are a lot of SATA SSDs still being sold: I believe I saw a recent article reporting that "exabytes" of SSDs have now been sold worldwide.

One of my points is that the best of that lot have hit a glass ceiling, which should have been raised years ago, e.g. to 8G + jumbo frames -- to synchronize with PCIe 3.0 chipsets.

> high-end uses SAS, which has been extended to 12 Gbps

Yes: you just made my point.

Similarly, USB 3.1 increased the transmission clock to 10G and also implemented 128b/132b "jumbo frames".



 

InvalidError

Titan
Moderator

While you may ideally want to eliminate data copies, the reality is that the closer you get to the CPU core, the more performance-critical high bandwidth and low latency become, and the smaller the memories that can meet both criteria become. That's why we have registers, L1, L2, L3, L4/LL cache, RAM, NVDIMMs, SSDs and HDDs in the memory hierarchy, with each serving its own range of roles.
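As a rough illustration of that hierarchy, the figures below are generic, order-of-magnitude ballpark numbers rather than measurements of any particular system; the NVDIMM entry in particular is an assumption for 3D XPoint-class media:

```python
# Order-of-magnitude latency/capacity for each tier of the hierarchy described
# above. Ballpark figures for illustration only, not measured values.
memory_hierarchy = [
    # (tier,              typical latency,     typical capacity)
    ("registers",          "<1 ns",             "bytes"),
    ("L1 cache",           "~1 ns",             "tens of KB"),
    ("L2 cache",           "~4 ns",             "hundreds of KB"),
    ("L3/L4 cache",        "~10-40 ns",         "MB to tens of MB"),
    ("DRAM",               "~100 ns",           "GB"),
    ("NVDIMM (assumed)",   "~1 us (assumed)",   "tens to hundreds of GB"),
    ("NVMe SSD",           "~100 us",           "hundreds of GB to TB"),
    ("HDD",                "~10 ms",            "TB"),
]

for tier, latency, capacity in memory_hierarchy:
    print(f"{tier:<18} {latency:<18} {capacity}")
```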
 


Yes, and when SATA first came out, plenty of PATA drives were still being sold. Hell, you can still buy PATA drives today.

That doesn't mean that they should try to update a standard that will be on its way out.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
Check out USB 3.1 benchmarks, e.g.:

http://www.pcworld.com/article/3046549/storage/sandisk-extreme-900-review-10gbps-usb-31-performance-at-last.html

"For our first test, we trotted out AS SSD, which rated the Extreme 900 at 796MBps when reading sequentially. Because of Extreme 900’s TLC NAND, AS SSD write performance varied from 781MBps when the TLC-as-SLC (Single Level Cell/1-bit) cache was in play, to 598MBps when the cache was exceeded."

781 MBps is a significant improvement over ~550 MBps (e.g. the best 6G SSDs currently available).

SATA should be allowed to compete with USB 3.1, particularly when you realize that PCIe 3.0 chipsets already use an 8G clock and 128b/130b jumbo frames, and PCIe 4.0 ups the clock to 16G.


I disagree with these anonymous SATA officials, to be perfectly blunt.


EVEN IF I am creating a drive image of an OS resident in DRAM or NVDRAM (fast READs), I would like to know that such a drive image can be written reliably to a relatively fast mass storage device, and not "snail" along at 550 MB/second when much faster alternatives are available.

So, I'll stay with a Highpoint RocketRAID 2720SGL for the time being, in part because a single M.2 NVMe SSD has the EXACT SAME upstream bandwidth as an Intel DMI 3.0 link, and that 2720SGL had the same upstream bandwidth 5 YEARS AGO, even though it works with PCIe 2.0 chipsets!

Moreover, we run into the same limit imposed by the DMI 3.0 link even if we try to configure multiple M.2 NVMe SSDs in a RAID array. Techs at ASRock and ASUS have already proven that, empirically.

The solution that I prefer is a robust NVMe RAID controller with an x16 edge connector and PCIe 3.0's 8G clock and 128b/130b jumbo frames: the raw headroom of such an add-on controller will satisfy my needs for many years to come, particularly if it is enhanced to work with a 16G clock when PCIe 4.0 becomes widespread.
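To put rough numbers on the DMI-bottleneck argument, here is a small Python sketch using nominal per-lane PCIe rates (after encoding overhead); treat the figures as theoretical headroom per direction, not measured throughput:

```python
def pcie_link_gb_per_s(lanes, gen):
    """Nominal usable bandwidth of a PCIe link, per direction, in GB/s."""
    per_lane = {2: 0.5, 3: 0.985, 4: 1.969}  # GB/s per lane after encoding
    return lanes * per_lane[gen]

print(pcie_link_gb_per_s(4, 3))    # DMI 3.0 (an x4 PCIe 3.0-class link): ~3.9 GB/s
print(pcie_link_gb_per_s(8, 2))    # RocketRAID 2720SGL, x8 PCIe 2.0: ~4.0 GB/s
print(pcie_link_gb_per_s(16, 3))   # x16 PCIe 3.0 slot wired to the CPU: ~15.8 GB/s
print(pcie_link_gb_per_s(16, 4))   # x16 PCIe 4.0, once widespread: ~31.5 GB/s
```

On those nominal figures, an x16 PCIe 3.0 controller wired directly to the CPU has roughly four times the headroom of anything sitting behind the DMI 3.0 link, which is the crux of the want ad below.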

In case you might be interested, see our WANT AD here:
http://supremelaw.org/systems/nvme/want.ad.htm

I'm also advocating four U.2 ports integrated onto future motherboards: there is plenty of room if all SATA Express ports are removed completely:
http://supremelaw.org/systems/nvme/4xU.2.and.SATA-E.jpg

And, as a system administrator, I consider it very unwise to place so much faith in a single point of failure, when RAID logic is already so mature and reliable; and I honestly do not see very many reports of failure rates for any of these NVMe SSDs being advertised in so many places on the Internet.

Just my 2 cents: your situations are no doubt different enough
to warrant different solutions.

The observable lack of failure rate reporting is a serious concern of mine.

 
https://www.sata-io.org/

Then go ahead and let them know that you don't approve of them leaving SATA in the dust for superior connection methods.

SATA Express already beats USB 3.1; it can go up to 2 GB/s.

Then we have M.2 using PCIe, which can hit 32 Gbps, or roughly 3.2 GB/s.
 

InvalidError

Titan
Moderator

There is no point in doing so since SATA is now considered a legacy interface, relegated to optical disc drives, low performance SSDs and HDDs. High performance SSDs will all be going NVMe to eliminate unnecessary SATA overhead in the OS and hardware stack.
 

Samer1970

Admirable
BANNED


Actually, we should abandon SATA altogether and use SAS, which extends to as many drives as we wish and supports 4 drives per cable, reducing the cables used by 75% and taking up less space on motherboards: 1 plug instead of 4.

I remember back when we had IDE and SCSI: we all used SCSI, even for DVD/CD-ROM drives, and abandoned IDE (in high-end systems).

There is no point in using SATA when we can use SAS just as easily.

I wish U.2 (4 lanes or more) would replace SATA altogether as well.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
> Then we have M.2 using PCIe which can hit 32Gbps or roughly 3.2GB/s.

Your math is slightly off, because you're dividing by 10 (i.e. assuming the 8b/10b legacy frame).

Divide by 8.125 if the 128b/130b jumbo frame is being used, e.g. 32G / 8.125 bits per byte = 3.94 GB/s.

Divide by 8.25 if the 128b/132b jumbo frame is being used, e.g. USB 3.1.
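In Python form, the same correction looks like this (32 Gbps is simply the figure quoted above):

```python
raw_gbps = 32  # the PCIe 3.0 x4 figure quoted above

print(raw_gbps / 10)     # 3.2   -- dividing by 10 assumes the legacy 8b/10b frame
print(raw_gbps / 8.125)  # ~3.94 -- 128b/130b (PCIe 3.0): 130 bits carry 128 payload bits
print(raw_gbps / 8.25)   # ~3.88 -- 128b/132b (USB 3.1)
```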

Yes, I believe there's a thread at www.servethehome.com:

https://forums.servethehome.com/index.php?threads/cheap-nvme-performance-from-hp-9-000mb-s-for-900-or-so.7436/

... which has photos and discussion of 3 such PCIe devices:

one from HP http://www.storagereview.com/hp_reveals_new_z_turbo_drive_quad_pro
one from Dell http://www.servethehome.com/wp-content/uploads/2015/11/Dell-4x-m2-NVMe-Drive-PCIe-Card.jpg
one from Kingston http://www.tomsitpro.com/articles/kingston-e1000-ssd-nvme-liqid,1-3098.html

But, if you dig into the details, these are not robust RAID controllers allowing all 4 M.2 SSDs to be assembled in a RAID-0 array.

They're very close to our WANT AD, e.g. they all have x16 edge connectors. So, each has one element of the NVMe RAID controller described in our WANT AD.

What is also needed is a capable PLX-type switch chip, because PCIe sockets normally support only one device. The other change is that we want U.2 ports, either on the add-in card or integrated onto workstation-class motherboards.

 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
http://www.infostor.com/disk-arrays/ssd-data-recovery-on-the-rise.html?utm_medium=email&utm_campaign=ESF_NL_SD_20160602_STR1L2&dni=330830887&rni=295215811

Quoting:

"In addition to performance gains, a significant number of users are also discovering that SSDs are also susceptible to drive failure. Thirty-eight percent of respondents told the company that they experienced an SSD failure. Of those that suffered a mishap, 23 percent reported that they lost data."


Thirty-eight percent of respondents told the company that they experienced an SSD failure.
================================================================
 

bit_user

Polypheme
Ambassador
I don't know, but what I meant is that it's superseded by NVMe and SATA Express.

As SATAe uses PCIe signalling, your point becomes moot.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
No. Not "moot". I'm trying sincerely to apply the proven advantages of RAID technology to NVMe devices,
but instead what I have to read at all these websites is OTHER reasons why I should buy a Z170 chipset
and/or a 10-core Intel CPU with hyperthreading.

After reading dozens of reasons, I am STILL not convinced.

And, with a little probing, I come to find out that only one ASUS motherboard integrates 2 U.2 ports,
and both are downstream of the DMI 3.0 link (4.0GB/s max headroom).

Well, maybe I don't want to upgrade motherboards every 3 years. Maybe I'd rather exploit
the original architectural justification for expansion slots -- by installing a well designed
NVMe RAID controller in an x16 slot wired directly to the CPU. And, if I can't have that now,
I'll just cable more 6G SSDs to our tried and true 2720SGL -- BECAUSE IT WORKS!!

And, believe it or not, Highpoint now offers a retail box that comes with a pair of 2720SGL controllers.
VOILA! two x8 edge connectors ~= one x16 edge connector (effectively) --> 8GB/s upstream bandwidth
withOUT buying yet another Intel chipset. And, even more unbelievable, Highpoint customers have
added comments at Newegg.com to confirm that one can do the same by purchasing two separate
2720SGL retail boxes, and those two work exactly the same as the bundled pair.

Or, Highpoint also offers a more expensive RAID controller with a full x16 edge connector:
http://www.newegg.com/Product/Product.aspx?Item=9SIA24G3FG6779&Tpk=9SIA24G3FG6779

Gee whiz: I realize this approach results in prolonging the useful life of our ASUS PCIe 2.0 workstation
motherboards far beyond the 3-year limit urged on us by Intel.

And, for that, I would rather not suffer abuses for expressing an honest opinion
about WHAT I WANT, particularly when that's NOT WHAT INTEL WANTS me to buy.

I hope this clarifies my position here.

WHAT I WANT is a non-volatile mass storage subsystem that operates
at the same speed and latency as our 12GB ramdisk hosted by 16GB of Corsair XMS DDR2 SDRAM.
 

Samer1970

Admirable
BANNED


You don't need all this... HP already has a 16x PCIe 3.0 quad-M.2 RAID card! And you can use a U.2 adapter on that card in case you want to use a U.2 Intel SSD (the 750 or better). It's NVMe, and all you need is one x16 slot. That's why I never buy the stupid 16-lane-only CPUs: always go with the 40-lane CPU.

Here, have fun (up to 16 GB/s bandwidth!):

http://www8.hp.com/us/en/workstations/z-turbo-drive-g3.html

[Image: HP Z Turbo Drive Quad NVMe M.2 SSD modules]

You can change it into 4x U.2 for Intel SSDs.
 


SAS is too expensive for consumers, much like SCSI was more expensive than PATA at the time. SAS was specifically designed with servers and network storage in mind and has advantages for that.

With M.2 and NVDIMMs we won't need either in the near future.



I said "roughly" for a reason. I understand how to convert bits to bytes and vice versa I was just giving an example of why SATA is no longer useful.



You are going so far off topic that it is getting pretty insane.

The topic was that Intel is showing off the 3D XPoint Optane technology for ONE specific use, but has not said they will not apply it to other uses. It's currently being developed as an NVDIMM solution because that is a very logical replacement for current storage, since the memory bus is vastly faster than anything else not directly connected to the CPU itself.

Even just dual-channel DDR3 is over 20 GB/s, whereas it would take something like 32 lanes of PCIe, or a move to PCIe 4.0, to reach that kind of bandwidth. By the time this becomes mainstream enough to be affordable, I would bet they will be well past 20 GB/s for normal users' memory.
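For context on that comparison, here is a quick sketch with nominal figures; dual-channel DDR3-1600 is an assumed configuration for the "over 20 GB/s" claim, and the PCIe numbers are per direction, after encoding overhead:

```python
# Dual-channel DDR3-1600: 2 channels x 8 bytes wide x 1600 MT/s (assumed config)
ddr3_dual_channel = 2 * 8 * 1600 / 1000   # 25.6 GB/s

# Nominal PCIe bandwidth per lane, per direction, after encoding overhead
pcie3_per_lane = 0.985   # GB/s
pcie4_per_lane = 1.969   # GB/s

print(ddr3_dual_channel)        # 25.6 GB/s
print(16 * pcie3_per_lane)      # ~15.8 GB/s for a full x16 PCIe 3.0 slot
print(32 * pcie3_per_lane)      # ~31.5 GB/s for 32 lanes of PCIe 3.0
print(16 * pcie4_per_lane)      # ~31.5 GB/s for x16 PCIe 4.0
```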
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
One of the contributors to the thread at servethehome.com found that the HP card has a BIOS lock that limits it to certain HP workstations; also, all four M.2 slots cannot be assigned to a single RAID-0 array.

Read the fine print (if you can find it :)

What I can remember about this (because it was several months ago) is that only one of the M.2 slots is bootable. The remaining M.2 slots can be configured as a software RAID which is NOT bootable:

http://www.tomsitpro.com/articles/hp-reveals-turboz-quad-pro,1-3022.html

"The Quad Pro can be configured to use one of its M.2 SSD modules as a boot drive, while still being able to configure the remaining three drives in a RAID setup."

"UPDATE: See below. Chuntzu found that the card is BIOS locked to specific HP workstations."

UPDATE: It looks like they did lock it:

"Thanks to a BIOS lock, the device is supported only on the HP Z440, Z640, and Z840 Workstations, and cannot be used in any other OEM workstation solution."

 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
The topic is Optane: if Intel can pretend to have special knowledge of the future, we can talk about potential applications that Intel may not be willing to discuss at this time. If Intel listened to their users a little more, maybe some of us might actually get what we want from Intel, instead of being required to find solutions elsewhere and/or being required to purchase features that we do not need or want.

It's really sad that they had to lay off so many employees recently: this layoff could mean that some of us will NEVER get what we want from Intel at any time in the foreseeable future. I wrote to AMD recently, and I'm going to wait for the release of Zen chipsets before I purchase any more workstation hardware.

A little more competition in the CPU sector is a good thing, imho.

 

bit_user

Polypheme
Ambassador
Uh... I was talking about your earlier point about SATA signalling. I mean PCIe 3.0 has already adopted 128b/130b, so I assume SATAe will have it. And there you go.

If you're going to rant and rail about something, it would help to be informed about it. SATAe is almost 3 years old, now.
 