Flash Industry Trends Could Lead Users Back to Spinning Disks


none12345

Distinguished
Apr 27, 2013
431
2
18,785
"How about the idea of emulating a MLC using a TLC chip wherein only the first 2 layers are used? In doing so they save money by not having to have another assembly line. Maybe this is why they don't plan on making MLC much going forward???"

You can't emulate MLC (nor SLC, for that matter). It's not the number of layers that makes it MLC or TLC. SLC stores a 0 or a 1 in a single memory cell, 1 bit of data in a floating-gate transistor. It does this by using 2 different voltage states: a low voltage level is a 0, a high voltage level is a 1 (or vice versa, it doesn't matter). MLC stores a 0/1/2/3 in a single memory cell, 2 bits of data, using 4 different voltage levels. TLC stores 3 bits of data: 8 values, 8 voltage levels.

The memory cells and all the read/write/etc. logic are fundamentally different.
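
Quick way to see how the level count scales with bits per cell (a throwaway Python sketch; the spacing figure is purely illustrative, not a real voltage spec):

# bits per cell -> number of distinct voltage levels the cell has to hold
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3}

for name, bits_per_cell in CELL_TYPES.items():
    levels = 2 ** bits_per_cell          # SLC: 2, MLC: 4, TLC: 8
    # more levels in the same voltage window = narrower margins between them,
    # which is why the program/read logic has to be designed differently
    spacing = 1.0 / (levels - 1)
    print(f"{name}: {bits_per_cell} bit(s)/cell, {levels} levels, relative spacing {spacing:.2f}")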
 

DrakeFS

Reputable
Aug 11, 2014
95
0
4,640
My question to the author: are any of the TLC SSDs being touted as enthusiast products? I doubt companies like Samsung will leave a gap in the market that could be filled by other companies. That being said, my next PC build should happen around the point Intel/Micron start releasing Optane-based drives large enough to handle a Windows install, so I see this as a moot point. I am already fine with installing most software to a 7200 RPM HDD on my laptop, so cheaper, bigger, slower SSDs would still be an upgrade for me.
 

bit_user

Polypheme
Ambassador

I realize @john_cr (welcome, BTW) was confusing multi-bit per cell and multi-layer, but the point remains valid.

Here's a prototype SSD that uses MLC NAND to emulate SLC. This is what I was referring to, at least. The mixed workload benchmarks are where it really shines.

http://www.tomshardware.com/reviews/phison-ps5007-e7-slc-double-ddr-ssd,4913.html

I expect this sort of thing to become more common. Enterprise customers and enthusiasts will demand drives that continue to deliver the sort of performance and endurance to which we've become accustomed, even at elevated prices. Unless/until the industry switches completely over to 3D XPoint, I think this represents a far better alternative than reverting to HDDs.

My use of HDDs is only going to decrease, going forward.
 

drwho1

Distinguished
Jan 10, 2010
1,272
0
19,310
Just a thought: I'd like to see an SSD the size of a 3.5" hard disk.
Imagine how much they could fit in there! An SSD of 20 TB or more could be possible... and possibly at a lower cost.
 

bit_user

Polypheme
Ambassador

Well, you'd have to cool the thing. Heat buildup has been an issue (mostly minor) in both 2.5" and M.2 form factors.


No. The NAND in such a capacious drive is what's going to drive your costs. If it needed multiple controllers to address all of the memory chips, it could work out to be even more expensive.

I don't love M.2. It was clearly designed as a mobile-first form factor. It has heat issues, and its size does limit capacity. But I also regard 3.5" as going in the wrong direction.

At current NAND pricing, I think 2.5" is big enough. The thing limiting capacity of 2.5" drives is probably the number of channels on modern controllers. That and cost.

Update: check out this 2 TB 2.5" SSD review. You can see the board doesn't even fill the entire case!

http://www.storagereview.com/crucial_mx300_ssd_review_2050gb
 

GTrahald

Prominent
May 20, 2017
2
0
510
I had the funny feeling this read like an advert too. I had to go back and check whether it was a sponsored link.

This doesn't make sense. The only times we have gone backwards in performance were to concentrate on a new attribute, such as price or portability, and start the climb over again. As long as people have the money and space, there will always be room for the highest performance tier. It isn't going anywhere. And as long as our data centers continue to grow, there will be so much volume produced that prices for high-performing components will continue to be reasonable.
 

kawmic

Distinguished
Jan 5, 2013
10
0
18,510
They were supposed to lower the prices. As far as I can see, they have stayed the same for the last year and a half. They have even risen about 10-15%.
 

Christopher1

Distinguished
Aug 29, 2006
666
3
19,015
Jaber2, things that do not move eventually break as well. Personally, I have never had a hard drive 'break' before I got a minimum of 6 years out of it. Even the drives in my older desktops were moved to the new computer because they had not 'worn out' or broken.
 

Valantar

Honorable
Nov 21, 2014
118
1
10,695
This is an ... odd article. The angle is at the same time conservative, sensationalist and elitist, and it seems to be making a huge issue out of the flash industry adjusting to the wants and needs of the average consumer.

To recap, the article says:
-SSDs are moving to a TLC majority
-Controllers are losing cores
-MLC and more cores are becoming premium features, with premium pricing
-TLC+dual core SSDs offer better performance than SATA in most cases, and far better in read-heavy, average end-user like workloads, but worse in write-heavy power user workloads.

Now, the part the article fails to actually explain, even if it harps on it endlessly: how is this a problem? The average PC user - including heavy gamers and hobby enthusiasts who might play around with Photoshop or other semi-demanding software - has a very, very read-heavy drive usage, with plenty of idle time to make use of for garbage collection and the like. That drives with fast sustained performance, especially writes, are moving to "pro"-level products is honestly only natural, as it's only in pro-level use cases where that would actually make a noticeable difference. I get that the reviewers here are high-end power users. They still review products for the average user as well as the pro ones, and if there are ways to make products for the average user cheaper and with higher capacity without performance losses noticeable in those use cases, I say that's a win for everyone in that user group. This argument is like Formula 1 drivers complaining that Nissan is making the Micra as a cheap, slow car not suited for racing.

Claiming that this trend will push users back to HDDs is beyond ridiculous. Fast sustained speeds are not really the thing that has given SSDs their sterling reputation - access times and random IOPS are the reason for that. For launching applications or accessing files spread throughout the OS and drive in general, those numbers matter far more - and HDDs are still terrible at this, regardless of whether they can approach 250 MB/s sequential transfer speeds. As such, the Optane+HDD solution does have some merit, but the size of the cache drive is a serious limitation for anyone who uses more than a web browser and Office. 16 or 32 GB is very unlikely to be enough for, say, gaming, unless the caching algorithm is truly amazing.
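
To put rough numbers on the random-access point (ballpark latencies I'm assuming here, not measurements):

# Ballpark, assumed latencies: effective throughput of 4 KiB random reads
# at queue depth 1, which is roughly what launching apps looks like.
BLOCK_KIB = 4

def random_read_mib_s(avg_access_ms):
    iops = 1000.0 / avg_access_ms        # one outstanding request at a time
    return iops * BLOCK_KIB / 1024

print(f"HDD, ~12 ms seek + rotation: {random_read_mib_s(12):.2f} MiB/s")
print(f"SSD, ~0.1 ms access:         {random_read_mib_s(0.1):.0f} MiB/s")
# roughly 0.3 MiB/s vs 40 MiB/s - a 250 MB/s sequential figure tells you nothing here.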

Now, Optane looks fantastic due to its performance at low queue depths - which is where it's truly needed for consumer usage; how often does the average gamer or home PC user ever go beyond that? Honestly, I'd like a 32+ GB Optane-like cache drive plus a cheap-as-possible ~1 TB TLC SSD. That would be pretty much ideal. If Intel launches an SSD series with a built-in Optane cache (it could then even be DRAM-less!), I would buy one. Or more. That would be the ideal combination of responsiveness and capacity.

That high-performance SSDs are moving into a higher pricing tier is simply a sign of a maturing market. Previously, there were no ways to make cheaper SSDs, so all SSDs were high performance. Now, there are alternatives, and the cheaper ones are plenty good enough for the average user. Whining over having to pay premium prices for premium performance is just silly. Of course, there are limits to how far things can go in the direction of high-density TLC with low core-count controllers - a single 256GB die with a single-core controller would most likely be utterly awful for any usage - but chances are that won't happen, at least not for mainstream products. Of course, reviewers and users still have to call out manufacturers who ship bad SLC caching and garbage collection algorithms and similar performance-killing junk, but is that really worth writing a massive article over? Isn't that obvious?

Tl;dr: this article boils down to whining over pro-level users having to pay pro-level prices for pro-level performance, and there actually being a difference between "good enough for the average user" drives and pro-level drives.
 

AgentLozen

Distinguished
May 2, 2011
527
12
19,015
Nice write-up, Valantar. I read the entire thing, BTW.

I visit Tom's Hardware frequently, and I really respect the work that the writers and editors (who else is involved in the process? /shrug) do here. With that said, you can't post SSD-apocalypse articles and get everyone frenzied without really good justification. I don't think you have it here.
 

bmguyii

Reputable
May 31, 2014
4
0
4,510
From the test results in most reviews, "real world" performance deltas are minuscule compared to the micro-specific tests. Take the Samsung 960 Pro SSD vs. the Intel 600p: cost is 1.6x for a ~1.5% performance difference, even when the charts show huge deltas. You can argue better for better's sake, but in reality it doesn't move the needle.
 

bit_user

Polypheme
Ambassador

Why do you say that? Just because it's the usual trend?


Demand is increasing, while 3D NAND has taken longer to ramp up than expected. Paul described it in far more detail here:

http://www.tomshardware.com/news/ssd-hdd-shortage-nand-market,33112.html

His article, written 6 months ago, ends with this prediction:
Many analysts are predicting SSD prices to increase 20-25% over the next few months, so if you plan to buy an SSD or HDD, the time is now.
 

crankbird

Prominent
Jun 15, 2017
1
0
510
Disclosure: I work for a large storage array vendor.

Even though the single-drive personal-use market is a lot different from the enterprise market, where you typically see 24 drives between 0.9 and 15 TB (soon 30+ TB), the chance of hitting a drive-wear problem is remarkably small (to the point of non-existence) unless you're using them as targets for high-def video surveillance of busy traffic areas or IoT logging from thousands of devices. I've spent way too much of my time going through the math with customers to explain why. Drive wear was an issue when you had 100 and 200 GB drives, which is why SLC, with cell overwrite endurance 10x that of TLC, was required. At the terabyte level and above we're at more or less equivalent device-level endurance in terms of overall write endurance.

Bigger DRAM buffers and smarter controllers (especially ones that can make effective use of multi-stream writes) help reduce write amplification, but then again it might be a bigger problem with consumer filesystems that were built to optimize for spinning rust rather than flash; I'm lucky I get to work with storage operating systems that mask that inefficiency. The new(ish) log-structured filesystems like ReFS on Windows, APFS on OS X and BTRFS on Linux will probably get tweaks to make them friendlier to the vagaries of high-density NAND over time, relieving the relatively limited CPUs in SSD controllers of having to do so much of the heavy lifting, so the net result should be better real-world performance outside the world of synthetic microbenchmarks.
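
The back-of-the-envelope version of that endurance math, with assumed P/E cycle counts and write amplification (illustrative numbers, not anyone's actual spec):

def tbw(capacity_gb, pe_cycles, write_amp=1.5):
    # total terabytes written the NAND can absorb before wearing out
    return capacity_gb * pe_cycles / write_amp / 1000

print(f"200 GB drive @ 10,000 P/E cycles: {tbw(200, 10_000):,.0f} TBW")
print(f"2 TB drive   @  1,000 P/E cycles: {tbw(2_000, 1_000):,.0f} TBW")
# the cheap high-capacity drive lands in the same ballpark purely because
# its writes are spread across ten times as many cells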

Also, if you're looking to HDDs to save you from the evils of SSD wear-out, the figures I've seen for the next generation of HAMR/SMR drives suggest their overwrite endurance will be significantly worse than even the cheapest of TLC drives. The reason for this is that the drive manufacturers aren't designing for the consumer or enterprise market any more; they're designing for the hyperscale cloud buyers (AmAzureGCELayer etc.). Those guys don't plan on overwriting much of the data you put into their clouds, especially the object stores, because it will be cheaper to leave the data where it is than to pay the access fees and decision time to delete the old stuff. That's also why the drives are all tending to extreme-density configurations and lower-power controllers: rack space and electricity are the dominant costs for hyperscale cloud.

Lastly, the access speed of the cheap hard drives (which will mostly be spun down, and when they do spin up will probably spin slower than the drives you use today) will probably be on the order of 20-30 milliseconds, while chances are the latency to your nearest cloud vendor will be around 5-10 milliseconds. The net result is that you may as well tier directly to a CDN-fronted cloud object store (S3 or ABS), which currently measure time to first byte in similar timeframes. If there is hierarchical storage in your laptop/desktop, it will probably be a combination of some form of persistent memory (NVDIMMs, possibly Optane-based, possibly ReRAM or Phase Change Memory) along with a capacity-optimized SSD with inline dedupe and compression, thanks to improvements in the NAND-optimized log-structured filesystems mentioned above.

From the perspective of the desktop, I'm convinced that, in the immortal words of Jim Gray, "Tape is dead, disk is tape, flash is disk, and RAM locality is king."

The other thing I'm kind of dubious about is combining
 
crankbird, you do have a good perspective. Wherever the market is going, we are not heading back to spinning disks. The DRAM buffers on your large arrays will be comparable to an Optane-type device on a desktop as we move forward. I really suspect you are right: Optane, NVDIMMs or the like with 64 to 128 GB of storage will buffer most writes, and the cheaply made SSDs will get writes flushed to them at a much slower pace to level out endurance. It really makes no sense today to toss an Optane-type device on top of an M.2 NVMe drive, but if endurance does get worse in favor of capacity, I could easily see a hybrid solution of very-high-endurance Optane sitting in front of a high-capacity SSD in the consumer desktop space.
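
A toy sketch of that arrangement (the sizes and flush policy are made up; it's just the shape of the idea):

class HybridDrive:
    # small high-endurance buffer sitting in front of a big, cheap TLC SSD
    def __init__(self, flush_batch_gb=4.0):
        self.flush_batch_gb = flush_batch_gb
        self.buffered_gb = 0.0       # data sitting in the Optane-like tier
        self.flushed_gb = 0.0        # data written through to the TLC tier

    def write(self, gb):
        self.buffered_gb += gb
        # flush lazily in large sequential batches so the TLC tier sees
        # fewer, bigger writes and wears more slowly
        while self.buffered_gb >= self.flush_batch_gb:
            self.buffered_gb -= self.flush_batch_gb
            self.flushed_gb += self.flush_batch_gb

drive = HybridDrive()
for _ in range(10):
    drive.write(1.5)                 # bursty desktop-sized writes
print(drive.buffered_gb, drive.flushed_gb)   # 3.0 GB still buffered, 12.0 GB flushed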

 

bit_user

Polypheme
Ambassador
Crucial MX300 drives currently feature DRAM and power-loss capacitors, and probably also some SLC to buffer writes to TLC. It would make sense for their MX400 to replace all three with a bit of 3D XPoint in front of some 64-layer 3D NAND. I'd buy that.
 


That was in the back of my mind when I responded, LOL - mind reader. Speaking of the MX300, it's a nice little gem; it's near impossible to find power-loss protection outside of very costly enterprise drives.
 