With HDDs On The Ropes, Samsung Predicts SSD Price Collisions As NVMe Takes Over


3ogdy

Distinguished



You're right. There is a CLEAR difference between a system running on an SSD and one running software off an HDD. I wouldn't need to check Device Manager to tell the difference. Boot time is the first thing you notice, then launching web browsers packed with (hundreds of) tabs and extensions. Then comes overall system snappiness - what makes a system with an SSD feel faster is response time, not necessarily those 550MB/s it can push in sequential workloads. Spin-up time is non-existent.
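To put a rough number on that response-time point, here's a minimal sketch (Python; `testfile.bin` is a hypothetical multi-GB file on the drive under test, and the OS page cache will flatter the results unless the file is cold):

```python
# Latency-vs-throughput sketch. Assumes testfile.bin is a large pre-existing
# file on the drive under test and isn't already sitting in the page cache.
import os, random, time

PATH = "testfile.bin"   # hypothetical multi-GB test file
BLOCK = 4096
size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)   # note: os.pread below is POSIX-only

# 1000 random 4K reads: the access pattern "snappiness" is made of.
t0 = time.perf_counter()
for _ in range(1000):
    os.pread(fd, BLOCK, random.randrange(0, size - BLOCK))
elapsed = time.perf_counter() - t0
print(f"avg random 4K read: {elapsed / 1000 * 1e3:.3f} ms")

# One 512MB sequential read: the headline MB/s figure.
t0 = time.perf_counter()
done = 0
while done < 512 * 1024 * 1024:
    chunk = os.read(fd, 1 << 20)
    if not chunk:          # stop early at EOF just in case
        break
    done += len(chunk)
print(f"sequential: {done / (1 << 20) / (time.perf_counter() - t0):.0f} MB/s")
os.close(fd)
```

On a typical HDD the random reads land around 10ms apiece while an SSD comes in well under 1ms - a far bigger gap than the sequential numbers suggest.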

There is something that did disappoint me at first: antivirus scans don't seem that much faster, so Kaspersky / Bitdefender would still take the entire Middle Ages to finish a system-wide scan - even on an all-SSD PC. Having a lot of data isn't a good thing at all in this case, but that's also why companies such as Bitdefender don't rely on full-system scans to defend a computer anymore - a firewall plus scanning files only when they're written / modified is what makes system-wide scans unnecessary, with a few exceptions.
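As a toy illustration of that on-write approach (nothing close to a real AV engine - it leans on the third-party `watchdog` package, and `scan_file` is a made-up placeholder):

```python
# Toy on-write "scanner": react to file creation/modification events instead
# of periodically walking the whole disk. Requires: pip install watchdog
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def scan_file(path):
    # Placeholder for a real signature/heuristic scan of a single file.
    print(f"scanning {path}")

class OnWriteScanner(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            scan_file(event.src_path)
    on_created = on_modified  # newly written files get the same treatment

observer = Observer()
observer.schedule(OnWriteScanner(), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```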

Other than that...well, there are a few more things such as performing OCR and working with WinRAR, but those are likely to be bottlenecked more by the CPU than by the SSD / HDD.
 

InvalidError

Titan
Moderator

As you already said earlier, system performance takes a nose dive as soon as RAM gets tight, and that's where SSDs make the biggest difference to general usability. An SSD is no substitute for having vastly sufficient RAM to handle all typical use, and once you have that much RAM, the SSD is of little to no benefit beyond quicker boot and initial application launch times after rebooting.

I'm an SSD-(mostly-)don't-care too: my system is on 24/7, I reboot it less than once per month and put it to sleep instead of shutting it down when I need extra quiet. With 32GB of RAM, I can leave all of my stuff open and not worry about HDD speed, since I have practically zero swapping and all of my software's files stay in the file system cache. The only reason I have an SSD in my PC (480GB SanDisk Extreme Pro) is because someone offered to ship me a used one for free. The thing I ended up liking the most about it? Hearing seek noises every ~10 seconds due to Chrome and Firefox saving cookies and other junk even with the windows minimized was driving me nuts; no more of that with the SSD. Performance-wise, yes, stuff loads 3-4X as fast the first time I open it after rebooting, but that's only a fringe benefit to me since I see it maybe every other month.
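That cache effect is easy to demonstrate (Python sketch; the file name is a placeholder, and the first read is only "cold" if the file hasn't been touched recently):

```python
# Read the same file twice: the second pass is served from the page cache
# in RAM, which is why a machine with plenty of memory barely feels the HDD.
import time

def timed_read(path):
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):  # 1MB chunks until EOF
            pass
    return time.perf_counter() - t0

PATH = "some_large_file.bin"  # hypothetical file, ideally not recently accessed
print(f"cold read: {timed_read(PATH):.2f}s")  # hits the disk
print(f"warm read: {timed_read(PATH):.2f}s")  # served from RAM, typically 10-100x faster
```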
 
Oh, there's no competition - ssd's are much quieter without heavy seek noises. I'm just stuck needing the space that isn't readily available with ssd's. That and cost: the premium for the space I use, in ssd form, would be a serious drawback. I think they're headed in the right direction but still a ways off from maturing into a standby for reasonably priced storage.

Maybe I notice fewer issues with hdd's since I tend to partition them off - generally a smaller primary partition that keeps the os and programs at the outermost area of the platter. Then when it comes time for maintenance, defrag or whatnot, I can deal with a smaller partition. Any new files created go on the other partition, which reduces fragmentation in the first place. Downloaded or newly created files aren't jumbled up with my os/apps files. The larger secondary partition doesn't really need the same level of attention since it's mostly bulk storage.

I also notice fewer issues over the years due to a change in habits. Back in the win95/98 days I was more of a software junkie, always checking out this program and that program, installing, uninstalling. I do much less of that these days because I have a basic set of programs I use.

That and software itself has improved, along with things like web page development. There's no need to have ie, chrome, firefox, opera and a host of other browsers. "Best viewed on xyz browser" isn't much of a thing anymore, and I don't have to open a new browser to view someone's website because it's "optimized for this browser". Communications have also changed; there are a variety of multi-protocol messengers now. I no longer need a separate install of yahoo im, icq, aol im, trillian, msn messenger, skype etc. All those things added clutter and unnecessary files.

The bit about gamers needing space for their games that might not be affordable on ssd really hit home with the recent article covering Gears of War, where they said the 11gb update wouldn't fit on the drive along with the 50-some-odd gb of data included in the original game. If games keep trending this way with a 40-50gb+ footprint on the drive, they'll quickly stuff the commonly used 128-250gb ssd's. Even though 480gb ssd's are dropping into the $100-120 range, I'm sure there are a lot of gamers who would rather grab a $50 1tb drive and put that extra $50-70 toward their gpu or something.
 

bit_user

Polypheme
Ambassador
First, why can't the drive keep executing any other pending commands (which there must be, for it to "keep going")?

Secondly, whether the blocking happens in the drive or host, you're still going to wait for that particular read to complete.

I really think this doesn't improve anything. It's just a cost optimization.
 

josejones

Distinguished


Remember, VGA got pretty cheap too, and it took like 10 years to get rid of it. We certainly need SATA for now, but it should start being considered obsolete by 2020 and begin to disappear from at least some motherboards to make room for more NVMe SSD connections. I just hope SATA doesn't turn into the next VGA and linger for 10 years. There's no point clinging to obsolete, outdated technology whose AHCI/SATA interface will never get beyond 600MB/s. SATA has been a huge bottleneck keeping HDDs and storage slow for years. Why anybody would want to cling to that dated, slow interface is beyond me but, to each his own.
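For what it's worth, that 600MB/s figure isn't arbitrary - it falls straight out of the SATA 6Gb/s line rate and its 8b/10b encoding, as a quick check shows:

```python
# SATA III moves 6Gb/s on the wire, but 8b/10b encoding spends 10 line bits
# per 8 payload bits, which caps usable bandwidth at ~600MB/s.
line_rate_gbps = 6.0
payload_gbps = line_rate_gbps * 8 / 10   # strip 8b/10b overhead -> 4.8Gb/s
mb_per_s = payload_gbps * 1000 / 8       # bits -> bytes
print(f"SATA 6Gb/s usable bandwidth ≈ {mb_per_s:.0f} MB/s")  # 600 MB/s
```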

"The SATA 1.5Gb/s bottleneck was quickly exceeded, followed by SATA 3.0Gb/s, and within a year of SATA 6.0Gb/s there were drives that could saturate even that interface. Faster alternatives were needed, but the interface was only part of the problem."

http://www.pcgamer.com/best-nvme-ssds/

"Starting in October, the DemoEval lab will be hosting clusters for Silicon Valley startups using all NVMe SSDs. A year ago, these were SAS/ SATA clusters so the change is clearly upon us."

https://www.servethehome.com/going-fast-inexpensively-48tb-of-near-sata-pricing-nvme-ssds/

NVLink Unified Virtual Memory (UVM) = 5 to 12 times faster than PCIe 3.0
http://wccftech.com/nvidia-pascal-volta-gpus-sc15/


 
For now the sata interface is plenty fast for mass storage. Sure it would be nice to have ssd speeds on all storage but until ssd comes down further in price I don't see it happening. Prices have fallen for sure but if sata goes away too quickly and we're forced into ssd only the way they did with crt's to flat panels then it's going to cost a small fortune. The cheapest ssd's with 1tb at the moment start around $230 and quickly escalate to $300+.

For someone building on a budget who wants room to store several games and other files, the standard's been 1tb. It's a matter of choosing between a 1tb ssd on its own or a 1tb sata hdd + a gtx 1060. That's not a difficult choice for most people. Many folks pick a $25 cooler over a $45 cooler to save money; imagine the sticker shock when they're confronted with a storage drive that costs nearly as much as their gpu.

There's no way of knowing for certain. The industry speculated that lcd would eventually replace crt, but when it didn't happen fast enough they just forced it on everyone. At least in this case the modern replacement is decent. In many ways the lcd's available when crt's got shoved off the shelf were inferior and cost more. It's as if people are actually fairly satisfied with something, and when they fail to progress along the way the industry thinks they should, they're shoved into it. The removal of pay phones forced people to get cell phones, taking crt's off the shelf forced people to get lcd, and discontinuing incandescent bulbs in the most common forms forced people to get cfl.

I just hope sata isn't ended before ssd reaches competitive price/capacity. Having a choice is nice, being herded like cattle not so much.
 

bit_user

Polypheme
Ambassador
Wow, you completely missed my point.

Again, what are people who want to run RAID supposed to do? HDDs still offer the cheapest price per GB, and will into the foreseeable future. If someone wants to run a media server, how is M.2 going to handle that? It would burn through PCIe lanes too quickly, for one thing. And even SATA 2 speed is more than enough for HDDs. I guess we'll have to buy PCIe SATA controller cards. Maybe there'll even be some that fit M.2 slots.
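Rough lane math behind the "burn through PCIe lanes" point (the figures below are assumptions typical of a mainstream desktop platform of this era, not any specific board):

```python
# Back-of-envelope: how many x4 NVMe drives fit on an assumed lane budget.
GPU_LANES = 16        # CPU lanes typically reserved for graphics
CHIPSET_LANES = 20    # assumed usable general-purpose chipset lanes
LANES_PER_NVME = 4    # each M.2 NVMe drive wants a full x4 link

max_nvme = CHIPSET_LANES // LANES_PER_NVME
print(f"{CHIPSET_LANES} chipset lanes -> at most {max_nvme} x4 NVMe drives")
# A chipset's 6-10 SATA ports, by contrast, don't each consume dedicated
# lanes, and even SATA 2's ~300MB/s outruns any mechanical drive.
```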

As for SSDs, no argument there. In my next PC, I'll definitely use M.2 NVMe. And for my current PC, I just got an NVMe PCIe add-in card.
 

josejones

Distinguished
"what are people supposed to do who want to run RAID?"

How many actually use RAID? Very few, so care-factor = zero. One can now use RAID with NVMe too - pushing SATA closer to extinction, and we will all be better for it.

[Chart: SSD interface comparison]

[Charts: hands-on testing of the 1.6TB Intel P3700 NVMe SSD]


"The SATA 1.5Gb/s bottleneck was quickly exceeded, followed by SATA 3.0Gb/s, and within a year of SATA 6.0Gb/s there were drives that could saturate even that interface. Faster alternatives were needed, but the interface was only part of the problem."
http://www.pcgamer.com/best-nvme-ssds/

"Starting in October, the DemoEval lab will be hosting clusters for Silicon Valley startups using all NVMe SSDs. A year ago, these were SAS/ SATA clusters so the change is clearly upon us."
https://www.servethehome.com/going-fast-inexpensively-48tb-of-near-sata-pricing-nvme-ssds/
 

bit_user

Polypheme
Ambassador
Just to be clear, I was talking about RAID for capacity, which means HDDs. That renders your slides moot.

If we're talking about non-rackmount PC's on the whole, then probably fewer than 1% have a HDD RAID of > 2 disks. If we talk about the DIY PC and workstation users, then the percentage of people with > 2 disk RAID might go above 10%. So, there's a small but influential group of us that don't want to see SATA completely disappear. No, I don't need it in all my boxes, but I want > 2 ports in some.

I fear the answer will be PCIe-based SATA controller add-in cards. The problem is that I have a few of these, and they're often flakier and less well-supported than chipset SATA.

BTW, M.2 is not the ideal form factor for desktop users. It eats a lot of board real estate, and it's not very cooling-friendly. On this point alone, I think SATAe has something to offer.
 
I'm not sure getting rid of tech makes people 'better off for it'. Assuming someone were using m.2 and nvme pcie ssd's right this very minute, are those 4-6 small sata ports hurting anyone? Generally things like sata can share pcie, and it's already being done with m.2 where it's one or the other - sata express or m.2. Saying the care factor is 0 is a personal opinion, not valid in the least beyond someone's own personal ideals. That's a bit like saying 'blue is a terrible color, no one cares simply because I don't like it or use it, so strike it from the color wheel'. Sata isn't holding anyone back; if you don't like it, don't use it.

By going with m.2 or pcie ssd's, drive capacity is limited not only by the capacity of the individual drive itself but by how many physical drives you can pile onto the motherboard. They're direct-attach either way, and there's only so much motherboard real estate. Something ssd can't do as of yet is provide capacity. My case has room for 6 drives; if all slots were full with 2tb hdd's, that's 12tb. Try to fit 12tb of ssd's onto the motherboard somewhere along with gpu's etc. It's not going to happen.

Nas or external usb drives are ways around that, but then it means a cluster of external stuff everywhere. If we go by the 'future' of pc's - no optical drive, forced to use nas or external usb drives - eventually everything becomes an add-on. If I wanted a big modular cluster of parts junking up my desk, I'd go with a laptop and do the same thing. Thanks to a pc tower I don't have to have an nas or external drive, and my optical drive is built into the case right there when I need it.

Something I've noticed become more prevalent on modern hardware is more usb ports. Some may ask who needs sata; I ask who needs 10 usb ports. A board like the z170x evga classified has 8 on the back alone, plus 2x usb 3.0 headers internally. So you've got the kb/mouse - there's 2 ports - a controller, a flight stick, speakers maybe, a usb external hard drive - we're up to 6. Still room to plug in a wireless modem or bluetooth dongle and a thumb drive, and that leaves a port for the webcam. So sure, they could eventually all be used up if someone tries hard enough, but that's not most users either.

I'll be happy to give up spinning drives when ssd's match them on price/capacity and the ability to connect more than a couple drives. Even standard sata based ssd's make more sense than m.2/pcie since they attach to a relatively small header and the drive itself can be placed elsewhere besides planted right on the motherboard.

From what I gather, those charts must have been referring to drive speeds? At the moment a single nvme ssd only offers up to 2tb capacity on newegg. Sata based ssd's come in sizes up to 4tb, so no, one nvme drive does not replace 6 sata ssd's in terms of capacity - 2tb doesn't equal 24tb. There's more than one reason raid exists: yes, raid0 is striped for speed, but raid1, raid5, raid6 etc deal with redundancy for drive failures. Hence sata providing connections for a handful of drives is also important. Sure, a single nvme may be faster than 6 striped sata ssd's, but when that one nvme drive decides to die due to age, a manufacturing defect or anything else, it's gone. If it had been part of a raid1 array, another drive could serve as a mirror and it's business as usual (after replacing the failed drive and re-mirroring).
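The capacity/redundancy trade-off behind those raid levels, sketched out (the drive count and size are just example numbers):

```python
# Usable space for n identical drives under common RAID levels.
def usable_tb(level, n, drive_tb):
    if level == 0: return n * drive_tb        # striped for speed, zero redundancy
    if level == 1: return drive_tb            # mirrored: survives n-1 failures
    if level == 5: return (n - 1) * drive_tb  # one drive's worth of parity
    if level == 6: return (n - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(f"unsupported level {level}")

for level in (0, 1, 5, 6):
    print(f"raid{level}: 6 x 2tb -> {usable_tb(level, 6, 2)}tb usable")
```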

There are several reasons why sata shouldn't be so quick to disappear. Not fitting one person's use isn't a good reason.
 

InvalidError

Titan
Moderator

M.2 does not eat any board real-estate in a mezzanine setup where the M.2 footprint overlaps other board components. On ATX motherboards, most put their M.2 slot between PCIe x8/x16 slots where there often isn't anything anyway. Heat-wise, M.2 SSDs only draw ~5W and won't be an issue in most consumer applications.

As for the percentage of DIYers having RAID arrays, I'd be surprised if even 10% had RAID of any sort, never mind arrays with more than two drives.
 

bit_user

Polypheme
Ambassador
They can't be very hot components, or else the heat problems will be compounded. I wouldn't want them sandwiched between graphics cards, either.

It's not hard to encounter thermal-throttling with M.2 NVMe SSDs. They have a pretty low thermal ceiling. If we're talking about enthusiast users, then it's enough of a concern that Samsung had to develop a special heat spreading label for their 960 Pro.

And the other problem with M.2 is that it's so small that chips for larger-capacity drives really have to be packed in, which limits capacity, increases prices (since lower-density chips can't be used) and compounds heat problems.
 

InvalidError

Titan
Moderator

The thermal ceiling only becomes an issue if you are reading or writing somewhere in the neighborhood of 100GB in one continuous chunk. That would be extremely uncommon in most people's everyday desktop use where launching programs will cause well under 10GB worth of initial storage traffic at far less than 100% duty cycle.
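For scale, assuming a ~2GB/s sustained transfer rate (an illustrative figure, not any particular drive's spec):

```python
# How long an M.2 drive must run flat-out before the thermal ceiling matters,
# versus a typical application launch. Both workload figures are assumptions.
RATE_GB_PER_S = 2.0
print(f"100GB continuous chunk: {100 / RATE_GB_PER_S:.0f}s at 100% duty cycle")   # ~50s
print(f"10GB program launch: {10 / RATE_GB_PER_S:.0f}s, nowhere near 100% duty")  # ~5s
```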
 
I'm curious why m.2's designers decided to go with pins on the end and orient the card so it lays flat against the board, rather than using a slot with pins along the long edge similar to ram. Or why they couldn't have taken a page from laptops, where the ram pcb still connects along the edge but inserts at an angle and folds flat into place rather than standing upright like traditional desktop ram. Seems they could fit more devices on the board that way.

I have a feeling many will still be using their sata ssd's, and even if cases do away with the room in the front typically reserved for 5.25" and 3.5" bays, it won't result in cases being much smaller. A gpu still has to fit, and many enthusiast gpu's use part of that space. In certain scenarios m.2 may be a great advantage - mini itx for htpc builds and the like - but sata appears to be sticking around for some time. Even a sata ssd like the 840 evo or 850 evo, mounted elsewhere - behind the mobo tray with the wire organization, or on top of the psu cover - will still need a sata header to connect.

Thousands of people own sata ssd's, and they're supposed to last a long time. More and more people are upgrading and adding sata ssd's as part of their builds. If the means to connect them suddenly vanishes, what's the point of a long-endurance drive? Well sure, the drive will last you 7yrs, but we're pulling the connectivity for it in 3-4, rendering your purchase useless. That's sure to make folks happy.

Outside of burst speeds, most people don't make huge data transfers to and from their drives, as InvalidError pointed out. If people aren't in dire need of even faster transfers in most ordinary situations, there's not much to verify that faster and faster drives fill a 'need'. For anyone not genuinely in need of pcie ssd's, a 2-4gb/s transfer rate is about as much a 'need' as dual 16-core hyperthreaded xeons with 128gb of ram to check email and play cs:go. For a select group it will satisfy a need; for the common folks, not so much.
 

InvalidError

Titan
Moderator

M.2 uses an edge connector exactly like RAM; it is simply along the narrow edge instead of the long edge, to avoid needing a long connector and to accommodate multiple length form factors. You don't need many pins to accommodate power and four PCIe lanes, and it does not make sense to make the connector wider than necessary. If a motherboard manufacturer wanted to use M.2 slots that stood up vertically like desktop DIMMs do, it could do that, but it would need to come up with some other way of providing mechanical support.
 

bit_user

Polypheme
Ambassador
It's highly dependent on the drive, and it certainly doesn't need to be continuous.

The fact that it can happen at all is a strike against M.2. If people could overheat a non-overclocked CPU with the boxed cooler, the system would be considered to have a defective design.

M.2 was not designed to be a performance desktop form factor. It was designed primarily for mobile.

I guess you don't edit video or do anything with deep learning?

Most people don't build/upgrade their own PCs ...or read Tom's Hardware.
 

InvalidError

Titan
Moderator

Most people don't do deep learning or edit videos that require sustained throughput of 2GB/s for minutes at a time either. If you use M.2 drives in such applications and cannot tolerate "limp mode", just stick a 40-80mm fan next to it.
 

bit_user

Polypheme
Ambassador
And most of those same people wouldn't readily notice the difference between SATA and NVMe. See, your argument is actually pretty close to synphul's.

That's like telling someone that if their factory-clocked CPU overheats, just add water cooling. It's technically correct, but doesn't it seem wrong to you that it should be so? That, instead of CPUs never overheating, when installed as directed, they "mostly" won't overheat? How is this not lowering the bar?
 

InvalidError

Titan
Moderator

CPUs are expected to be under potentially up to 100% load for extended periods under normal use. That's extremely uncommon for storage. Also, unlike CPUs which suffer no degradation from extreme workloads, SSDs are going to die in short order if you write 1GB/s continuously for days on end - at that rate, it would take less than four months to burn through 10PB of write endurance.
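The arithmetic checks out (taking the hypothetical 10PB rating above):

```python
# Writing 1GB/s around the clock against an assumed 10PB endurance rating.
GB_PER_S = 1
ENDURANCE_PB = 10

tb_per_day = GB_PER_S * 86_400 / 1_000              # ~86.4 TB written per day
days = ENDURANCE_PB * 1_000 / tb_per_day
print(f"{days:.0f} days ≈ {days / 30:.1f} months")  # ~116 days, just under 4 months
```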

A more apt analogy would be telling people that their 1.6L stock engine block with a peak output at 4000RPM and a rev limit at 6000RPM is not designed to do 6000RPM continuously. It can do it in short bursts when you need to downshift for a short acceleration burst but running it at that RPM continuously will drastically reduce the engine's lifespan.
 