Intel Optane SSD 900P Review: 3D XPoint Unleashed

Page 2 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.
I was most impressed by the performance of the Optane Memory + HDD combo.

Do these results mean I will "feel" more performance with this combo than I would with a non-Optane SSD?

With $80 for the Optane Memory module and $110 for a 4 TB HDD, it seems like a steal.
 


Highly doubtful.
Look to a motherboard available in 2018 or 2019.

Just like NVMe drives can't be used as boot drives on Z97 or earlier.
 
It actually does work with Z170. It also works as a boot drive on Z97, but things get sketchy with Z87 and older. You need NVMe support in the UEFI/BIOS, and not every manufacturer released one. There is a guy who modifies older board BIOSes to boot from NVMe.

RAID, RAID, RAID... You will have to talk to Intel or my boss about that one. I don't mind putting in the work.

Optane Memory (cache) - YES! In the review I said it's very good. I still feel that way. A large number of people tested it and came away unimpressed. Those people get free 1TB NVMe SSDs once a month. For what it costs, Optane Memory is legit.

I haven't seen any Thug Life logos but there may be a Brazzers floating around.
 
Hi Chris,
With cost being a huge deciding factor, do you think there's a possibility of a cheaper hybrid part-Optane, part-NAND drive in the future?
Also, do you see the lack of support for end-to-end data protection as a problem?
 


It's still so early with this technology. We don't know where it's headed. That's why we chose the image on the last page that we did. When the early USB 2.0-connected SSDs came to market, who would have expected they would lead to M.2 drives pushing 3,500 MB/s? The PCMCIA card just looked cooler than the internal USB drive.

I don't see Intel lowering the price of Optane Memory-type devices anytime soon. The next generation may increase capacities to 64GB and 32GB at similar price points. As people move to newer chipsets it may become more popular. Personally I think it should be, but I understand the total cost of ownership with a new CPU, motherboard, and Optane Memory module.

Intel didn't give us many details on the protection but the enterprise model supports end-to-end protection. Security is great for those that require it. I'm pretty relaxed on this topic. My PCs are always secure so I tend not to think about it too much.





 
As your site says, Intel's 900P is an expensive proposition, even for its 280GB variant...

My question, therefore, is this: how does the 900p compare with Corsair's NX500 series (in particular the 800GB edition that costs the same price) in terms of performance?
 
This is awesome. But what excites me most is using XPoint to replace RAM.

I wonder, can we get an approximate simulation of what that world could possibly be like, by making a system with a deliberately minuscule amount of RAM, installing an Optane module (whether 32 GB, 900P, or DC P4800X), and setting the Windows page file to be on that Optane module? In other words, deliberately install so little RAM that you're guaranteed to need the page file, but put the page file on Optane. I'd be interested to see some benchmarks.
 
> If solid state drive developers built PCI-e 4.0 drives today, would they scale to 2x the performance of modern 3.0 drives

Not likely, because total transmission time of a single 4,096-byte IOP
is only a small fraction of total time, when latency is also included.

Just run the numbers yourself, to confirm the above:

8 GT/s ÷ 8.125 bits per byte × 4 PCIe 3.0 lanes = 3,938.4 MB/s

4,096 bytes ÷ 3,938.4 MB/s ≈ 1.04 microseconds

Halve that transfer time when PCIe 4.0 becomes available.

Compare the 900P's published latency of ~10 microseconds.
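The arithmetic above can be checked with a quick back-of-the-envelope script (a rough sketch; the ~10 µs figure is the 900P's published latency mentioned in this thread):

```python
# Rough sanity check of the numbers above.
LANES = 4
RAW_RATE = 8e9            # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130      # 128b/130b encoding -> 8.125 raw bits per payload byte
bandwidth = RAW_RATE * ENCODING / 8 * LANES    # payload bytes per second
xfer_us = 4096 / bandwidth * 1e6               # time to move one 4 KiB IOP
latency_us = 10.0                              # 900P's published latency

print(f"x4 link bandwidth:   {bandwidth / 1e6:,.1f} MB/s")
print(f"4 KiB transfer time: {xfer_us:.2f} us")
print(f"Transfer share of total time: {xfer_us / (xfer_us + latency_us):.1%}")
```

The 4 KiB transfer is only about a tenth of the total per-IOP time, so even if PCIe 4.0 instantly halved the link time, the total would shrink by under 5%.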

 


There are available measurements of the ASUS Hyper M.2 X16 card
with 4 x M.2 SSDs in RAID-0, on Intel and AMD Threadripper systems.

Allyn Malventano compared 4 x M.2 Optane drives with 4 x Samsung 960 Pros
installed in that AIC.

But the VROC "dongle" feature is meeting with a lot of
justifiable criticism from many reviewers.

Also be aware of the Highpoint SSD7101, SSD7110 and SSD7120:
multiples of some or all of those AICs can be installed in the
same motherboard.

AMD's free NVMe RAID is available for X399 chipsets,
and they report excellent scaling up to 6 x NVMe SSDs in RAID-0.

The focus of RAID-0 performance presently is the ASUS DIMM.2 socket
and AICs that plug into an x16 PCIe 3.0 slot wired directly to the CPU.

Intel's DMI 3.0 link is a known bottleneck because it has the
same upstream bandwidth as a single M.2 NVMe SSD:
(8 GT/s ÷ 8.125) × 4 lanes = 3,938.4 MB/s.
 

I think "end-to-end data protection" is referring to error detection & correction.

It's normal for them to differentiate their higher-end SSDs with this feature. You can even search on it, in their product database:

https://ark.intel.com/Search/FeatureFilter?productType=solidstatedrives&EndToEndDataProtection=true

BTW, the fact that the specs don't mention it tells me the 900P is probably lacking this feature:

https://ark.intel.com/products/123628/Intel-Optane-SSD-900P-Series-280GB-12-Height-PCIe-x4-20nm-3D-Xpoint
 

Easy: just look at the charts!
 

It's quite a bit slower than DRAM and has worse endurance. Endurance is probably the main reason we won't see these products replacing RAM anytime soon.

The scenario I see is pairing this with HBM2. For things like mobile, that should be a game changer, since both are power-saving technologies and they're quite complementary. A single 4 GB stack of HBM2 should be enough, especially if the 3D XPoint is connected via a fast interface like DDR4.
 

First, the size is limited by the fact that these 1st generation chips are relatively low-density:

https://www.anandtech.com/show/11454/techinsights-publishes-preliminary-analysis-of-3d-xpoint-memory

In fact, I think they're saying the storage cells in this generation are actually planar (contrary to the name including the term "3D").

Second, you'll find the power dissipation specs here:

https://ark.intel.com/products/123628/Intel-Optane-SSD-900P-Series-280GB-12-Height-PCIe-x4-20nm-3D-Xpoint

It's comparable to some PCIe NVMe SSDs I've seen, but I don't know offhand how it compares with different M.2 NVMe drives.

Their long-term game plan is to put it in DIMMs. So, I'd be surprised if we didn't also see it in M.2 form factors, somewhere along the way. Though, not this generation of silicon, as you point out (aside from the caching products they've already introduced).
 
I don't get it, the OS being the bottleneck? The main thing with an OS is that it accesses numerous small files at low QD. But that I/O is highly random in nature rather than sequential, unless you pack everything into a single file that can be read sequentially.

Also, the OS has a well-developed cache structure to minimise HDD reads. By reading from DRAM instead of HDD, everything is much faster. This is still true today, even with Optane; Optane is still much slower than DRAM. Some have resorted to creating a DRAM drive with their main memory to get lightning-fast access. Of course, that's rather costly, and DRAM is volatile.
 
Many of the retailers we contacted quoted us a two-to-three month wait time. That obviously causes some concern with Optane SSD 900P availability.
Between Coffee Lake and Optane, this must be the year of the Intel paper launch. : P

Also, I like how when you get to the real-world software performance charts, you find performance is nearly identical to any other SSD, despite the drive costing three to four times as much. Sure, the drive might be faster, but all these drives are ultimately limited by the rest of the hardware and how software is optimized for other forms of storage, so you're not likely to see much of a difference under typical usage scenarios.

Installing an Optane SSD in a modern high-end system is a bit like installing a 1080 Ti in a 10 year old computer with a 720p monitor. Sure, the card might technically be fast, but everything else will limit its performance under most usage scenarios to the point where you would see little difference going with a 1050 Ti instead, at a fraction of the cost. And by the time hardware and software catches up, the price of drives in this performance segment will likely be much lower, and the capacities much higher.

Even for an enthusiast system where someone might be willing to spend that kind of money on an SSD, unless their storage requirements are extremely low or they have specific needs that could actually make more use of that performance, they would probably be better off putting the money toward other SSDs with three to four times the capacity instead.
 

NVMe is a standard. It should absolutely work with other chipsets & CPUs. The main thing that requires any support from the motherboard/BIOS is whether it can boot from NVMe devices, but that's a generic capability and not specific to this model.
 

Clearly, you didn't read the article.

Most parts of most charts look similar, but some key parts around low QD and steady-state writes differ. The author states this results in a qualitative difference in experience.

What they probably need to do is find some sort of benchmark that captures the "instantaneous" feeling they describe.
 

Yes and no. First, even if you put everything in one file, there will still be random access. But SSDs don't mind random access nearly as much as HDDs did. Especially if it's concurrent with other I/O.


Now, you're on to something. Operating systems are designed around very slow storage, relatively speaking. The APIs they provide to applications reflect this, as does the majority of application software out there. So when storage becomes nearly as fast as DRAM, the overhead of archaic APIs, filesystems, etc. prevents the true speed of the storage from being harnessed. NVMe was a big step in the right direction, but even that is too slow for 3D XPoint.


Not really. 3D XPoint is within about an order of magnitude of DRAM speeds. If you put it in NV DIMMs, then you might find that disk caching overhead is actually slower than reading directly from persistent memory, not least because it's byte-addressable while disk caches are probably sector or block-based.

But that requires a whole new set of APIs and software ecosystem. It'll take a while to get there.
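To give a flavor of what byte-addressable persistent memory looks like to software, here is a rough sketch using Python's mmap. An ordinary temp file stands in for a DAX-mapped pmem region (an assumption for illustration); on real pmem, the flush would map to cache-line writebacks rather than block I/O:

```python
import mmap, os

# An ordinary file stands in for a DAX-mapped persistent-memory region.
path = "pmem_demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

fd = os.open(path, os.O_RDWR)
mm = mmap.mmap(fd, 4096)
mm[100:105] = b"hello"        # a 5-byte store, no sector-sized read-modify-write
mm.flush()                    # on real pmem this becomes cache-line flushes
readback = bytes(mm[100:105])
mm.close()
os.close(fd)
os.remove(path)
print(readback)               # b'hello'
```

The point is the granularity: a disk cache has to move whole sectors or blocks, while a mapped pmem region lets the application touch exactly the bytes it needs.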


IMO, this is kind of pointless when you can load up a machine with as much as 1.5 TB of DRAM (depending on the CPU). For most purposes, I'd say just get a fast SSD and maybe install some extra RAM in your mobo and you'll be happy. If need be, you can still configure some of that RAM as a software ramdisk, which should be even faster than if it were sitting out on PCIe or over SATA.

The kinds of hardware RAM disk devices you describe made more sense, back before SSDs came of age.
 

I read it. Their impression seemed to be that they think it feels a little faster, even if they weren't able to back that up with any real-world benchmarks. That doesn't exactly seem like a very convincing argument for paying multiple times as much compared to other performance SSDs though. If they felt it made the experience noticeably better, they should have found some way to benchmark that difference in actual applications. The only real-world benchmarks they performed don't reflect that at all. So either their testing methodology is flawed, or it really doesn't make that much of a difference. And even the reviewer seemed to conclude that most people would be better off waiting a few years.

And again, unless someone has a completely unlimited budget for their system, the money used for that barely perceptible improvement in performance would probably be better put toward something else. Like maybe more SSD storage capacity, so they don't have to rely as much on the far-slower performance of hard disks to cover the rest of their storage needs. This product might be more flexible if it were offered in lower "boot drive" capacities though. If it's just to improve the performance of the OS and applications, you shouldn't need anywhere near 280GB. A lower capacity product in the $100-$150 range might be more practical for such a purpose.

Overall, I think it's a cool technology, but today's software just isn't designed with this kind of storage in mind. Software is optimized to avoid doing things that will result in poor performance on a platter-based drive, so even regular flash-based SSDs often aren't being used to their full potential. And developers aren't going to drop everything to rewrite their code for this newer hardware, especially when most people are still running their software from hard drives, so it will likely be years before this kind of storage sees significant performance benefits on the desktop. And by then, similar products with lower prices and higher capacities will be available.
 
I sort of agree with you, except I think the random IOPS sort of hint at what Optane has to offer. And based on my, admittedly, half understanding of this part of computer tech, isn't that the whole point? 99.9% of users can't get enough things going on (queue depths) to ever push drives to where their sustained read/write speeds really come into play, whereas random IOPS is a very clear indicator of the performance those 99.9% will in fact see and feel.
 

I hope they can address that in future, but let's be realistic about what they can accomplish under deadline. All the raw data you could want is in there, at least, along with a good amount of meaty commentary.


Can't you use their 32 GB M.2 product in that way? Admittedly, 32 GB would be quite small for a Windows install, but comfortably fits a modest Linux desktop install.

To me, 280 GB feels like a generous size for an OS drive, but not outrageous.
 