Discussion: Does anyone use SSHDs? I have never seen anyone use one. Why would you want one now that SSDs have become so much cheaper?

Think uptime vs. data backup. Professional RAID configurations will keep operating smoothly with a drive or two down, but generally even that array is backed up to another independent array or to tape.

NAND flash is basically already a striped array, with each chip acting as a 'drive'. When you apply RAID 0 on top of that, you increase potential maximum bandwidth at the expense of latency and response time.
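A rough sketch of the striping idea, for anyone who hasn't seen it spelled out. The stripe size and drive count are invented example values, not anything a real controller is guaranteed to use:

```python
# Toy RAID 0 layout: data is split into fixed-size stripes and dealt out
# to the member drives round-robin. Values below are example-only.
STRIPE_SIZE = 64 * 1024   # 64 KiB stripes (illustrative)
DRIVES = 2

def stripe(data: bytes, drives: int = DRIVES, stripe_size: int = STRIPE_SIZE):
    """Return per-drive byte strings for a RAID 0 style layout."""
    layout = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), stripe_size):
        layout[(i // stripe_size) % drives] += data[i:i + stripe_size]
    return [bytes(d) for d in layout]

per_drive = stripe(bytes(range(256)) * 1024)   # ~256 KiB of sample data
print([len(d) for d in per_drive])

# Reads can hit both drives at once (more bandwidth), but every stripe is
# needed to rebuild the file: lose one drive and the surviving stripes are
# useless on their own.
```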
 
I have seen the PCIe SSDs, but why aren't they used more today? Wouldn't the extra speed provided by a full-length PCIe slot be useful?
Any extra speed is mostly irrelevant.

We are deep into diminishing returns.

Let's say performance doubles with each line below.

30 sec (HDD)
15 sec (fast HDD)
7 sec
3.5 sec
2 sec (SATA III SSD)
1 sec
0.5 sec (PCIe 3.0)
0.2 sec (PCIe 4.0)
0.1 sec (PCIe 5.0)

With the last few lines, the difference is only seen in benchmarks.

My spouse's system has a 500GB 850 EVO (SATA III). If I were to swap in a PCIe 4.0 drive, she absolutely would not notice; most people would not.
Benchmarks (and advertisements) show it as "10 times faster!" In real-world use, not so much.
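The diminishing-returns point is just arithmetic. A quick sketch using the illustrative times from the list above:

```python
# Each doubling saves half as many seconds as the previous one, so the
# perceptible benefit shrinks fast. Times are the example figures above.
times = [30, 15, 7, 3.5, 2, 1, 0.5, 0.2, 0.1]   # seconds
for slower, faster in zip(times, times[1:]):
    print(f"{slower:>5} s -> {faster:>5} s   saves {slower - faster:.2f} s")
# 30 s -> 15 s saves 15 s you can feel; 0.2 s -> 0.1 s saves 0.1 s
# that only a benchmark will ever notice.
```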
 
Why is RAID 0 so bad? I have heard that in RAID 0, if you lose one drive you lose all the data across all the drives. Is this true?
Yes. You are effectively storing alternating chunks of every file across the drives. If one drive fails, you lose everything that was on it, and the pieces left on the surviving drives cannot be reassembled into anything usable.

RAID 1 just copies the data to both drives simultaneously, so if one drive fails, the other can carry on as normal.

Other RAID types consume some space on each drive for parity data, meaning that if a drive fails, the data can be reconstructed from the others. But there are limits: depending on the configuration and how much space you give to parity, you can only lose so many drives before there is data loss.
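A minimal sketch of the parity idea, single-parity (RAID 5 style) with made-up data blocks:

```python
# Parity is the XOR of the data blocks, so any ONE missing block can be
# rebuilt by XOR-ing everything that survives. Lose two blocks and there
# is no longer enough information -- hence the limits mentioned above.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks([d1, d2, d3])        # stored on a fourth drive

# Drive 2 dies: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
print("rebuilt block:", rebuilt)
```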
 
Why is RAID 0 so bad? I have heard that in RAID 0, if you lose one drive you lose all the data across all the drives. Is this true?
Absolutely true. Lose any drive, or the RAID controller itself, and it's all gone.

RAID 0 sounds like a great idea.
But it is so badly misused.
It was never meant for the OS drive, or for your only drive.

But Clueless Friend A tells Clueless Newbie B about it, and off they go.

And with SSDs (of any type) it can even be slower than the individual drives.
 
Why is RAID 0 so bad? I have heard that in RAID 0, if you lose one drive you lose all the data across all the drives. Is this true?
RAID 0 isn't really RAID. RAID is an acronym: Redundant Array of Independent (or Inexpensive) Disks.

So-called RAID 0 violates the redundancy requirement. If a single disk in such an array fails, the entire array fails with no recovery possible, because the data is split between the drives in alternating stripes. If one drive fails, half the information is lost.

RAID is for applications where downtime cannot be tolerated, online shopping or banking for example, where downtime costs money. RAID allows continued operation, at reduced capacity, until the failed drive can be replaced and rebuilt. It serves no useful function in home use and only creates unneeded complexity. RAID is not, and never was, intended as a backup of your data.

Though today RAID isn't the panacea it once was: drive sizes have reached the point where rebuilding a failed drive can take days or even a week or more, which puts more stress on the remaining drives and increases the chance of another failure in one of the other drives before the rebuild finishes, taking the entire array down.

RAID, as said above, is NOT a backup in itself.
 
My X25-M 160GB drive cost $449 in August 2009, and I used it in my primary machine until mid-2016 (it still works fine). The drive for my new system was a 240GB Extreme Pro, which was $105. I had a RAID 0 HDD setup for games in that old machine because of the cost of SSDs, but I've been using SATA SSDs ever since; my new system will be all M.2, and I wouldn't touch RAID on them, period.
 
SSDs being only 32GB max meant they were too small to install Windows on, but they were good for cache.
Lots of laptops used to be released with a 32GB SSD and a 1TB HDD, using Intel Rapid Storage Technology to hide the 32GB SSD from Windows and use it as a cache drive. This used to trip people up when they reinstalled Windows and the installer decided to use that 32GB SSD as the C: drive.

SSHDs might have been a way to avoid that and reduce costs. It was the same idea: a small, fast cache to make the HDD look faster. Optane was just another version of the same idea, at least unless you look at its uses in enterprise, where they actually got full Optane drives.

Laptop makers must have had a huge stack of HDDs, so using a 32GB SSD as a cache let them keep selling those drives, and that worked until SSDs grew big enough to actually hold Windows. Just like Dell has a huge stack of old cases they keep wrapping in plastic to make look new.

Making drives appear faster than they are continued with RAPID mode on Samsung SSDs. And since PCIe is already running as fast as it can, RAIDing NVMe drives won't gain you any speed either.
 
This thread brings back some rather fond memories. I recall that Seagate made an SSHD, and the reviews were mixed. People back then really wanted to use an SSD for their OS drive (it cut the boot time in half), but the SSHD didn't work well for that task. Because it used a cache that you could not manually control (as others have pointed out here), the controller would only put your data in the cache if you completed a task repeatedly, and Windows would confuse it by doing a lot of different random background tasks. As the cache was very small, it could never master the boot cycle the way true SSDs could, so you got middling performance at a much higher price than a normal HDD. It was worth it to invest in an SSD, even a very small one, and then purchase a secondary HDD for your files. That also had the added benefit of protecting your files in case the SSD failed abruptly (which many did back then).
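A very rough model of that "promote only what gets hit repeatedly" behaviour. The cache size and promotion threshold are invented for the example; real SSHD firmware is proprietary:

```python
# Toy SSHD-style pinning policy: a block is promoted to the small flash
# cache only after repeated reads, and the least-recently-used block is
# evicted when the cache is full. Threshold and size are made-up values.
from collections import Counter, OrderedDict

CACHE_BLOCKS = 4       # tiny flash cache (example value)
PROMOTE_AFTER = 3      # reads needed before a block is cached (example value)

hits = Counter()
cache = OrderedDict()  # block -> None, ordered by recency

def read(block):
    hits[block] += 1
    if block in cache:
        cache.move_to_end(block)
        return "flash (fast)"
    if hits[block] >= PROMOTE_AFTER:
        if len(cache) >= CACHE_BLOCKS:
            cache.popitem(last=False)      # evict least recently used
        cache[block] = None
    return "platter (slow)"

# A workload that touches mostly different blocks never trains the cache,
# which is roughly why Windows' random background I/O confused it.
for b in ["boot", "update", "browser", "boot", "svc", "boot", "boot"]:
    print(b, "->", read(b))
```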

On a side note, SSDs have reduced in price today because they managed to move from SLC (one bit per memory cell) to MLC (two bits), TLC (three), and beyond. We're up to four and five bits now. Controllers have greatly improved over the years, and M.2 provides the bandwidth necessary to produce extremely fast transfer speeds.
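The density gain from more bits per cell is easy to see with a little arithmetic (the cell types are the ones named above; the level count is simply 2^bits):

```python
# n bits per cell means 2**n distinct charge levels the controller must
# tell apart -- a big part of why denser NAND is cheaper per GB but
# slower and less durable per cell.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} voltage levels")
```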

On a secondary side note, I remember OCZ going crazy during this time producing SSDs. They thought flooding the market with mid-level, expensive SSDs would save them. Nope!
 
I tend to think of SSHDs as the couch-car problem. You can make a couch you can use as a car and fulfill your needs for comfort and transportation with one item. And SSHDs were a pretty cool idea, like the couch-car concept (which a number of people have played with on their own).

However, at the end of the day, an SSHD just ended up being a mediocre HDD *and* a mediocre SSD all at once, so the savings didn't feel like all that big a deal. Multi-use products have to, when combined, offer more of the strengths of each of their components than their weaknesses. Otherwise you end up with stuff like AIO PCs, which combine the price and lack of upgradeability of a laptop with the portability of a desktop.
 
My last few PCs have had a smaller fast drive for Windows and a slower HDD for everything else. Too many Windows installs taught me to keep everything on the slow HDD, as it was easier to rebuild after a reinstall.

The build dated 2008 had a 500GB VelociRaptor as the boot drive; I think its other HDD wasn't much bigger.
The next PC, from 2015, had a 256GB SSD and a 1TB HDD as storage.
The current one is where it started to change: a 1TB NVMe drive for Windows and a 3TB HDD as storage. I'm adding a 4TB SSD, so soon I may not have any spinning drives. I've used the HDD for less on this PC than I had before.
 
My last few PCs have had a smaller fast drive for Windows and a slower HDD for everything else. Too many Windows installs taught me to keep everything on the slow HDD, as it was easier to rebuild after a reinstall.

The build dated 2008 had a 500GB VelociRaptor as the boot drive; I think its other HDD wasn't much bigger.
The next PC, from 2015, had a 256GB SSD and a 1TB HDD as storage.
The current one is where it started to change: a 1TB NVMe drive for Windows and a 3TB HDD as storage. I'm adding a 4TB SSD, so soon I may not have any spinning drives. I've used the HDD for less on this PC than I had before.
Is there any point in having hard drives to store anything when SSD storage is so cheap? I mean, more so if you're gaming rather than storing masses of photos or videos.
 
Not anymore.

Not everything needs to be on fast storage, but the need to have an HDD is diminishing.
I might still leave my music on the HDD, as it doesn't need speed.
I had planned on just having two NVMe drives, but I weakened and tried to save some money when building the PC.
 
Is a hard drive faster than eMMC? I had eMMC in my old laptop and it was extremely slow. I think the CrystalDiskMark result was about 29MB/s sequential reads.
 
Not really. eMMC is a type of solid-state storage.

I've had a couple. They were "slow" because the cheesy laptops they came in were slow.
How fast is eMMC in the best-case scenario? I always thought it was terrible, but maybe it was the laptop that was holding it back. I remember going from eMMC to a SATA SSD was a huge jump in responsiveness. While the SSD helped, I think it was really the much more powerful i5-6500, compared to the Intel Celeron N3350 I had before, that contributed most of the performance boost. I even notice a major difference going from SATA to M.2. I just love the fact that the PC boots up in less than ten seconds, not to mention game loading times are significantly shorter. People say that the difference between SATA SSDs and M.2 SSDs is negligible, but I disagree, because I can certainly notice a difference in boot and loading times, not to mention a huge difference in benchmark scores.
 
How fast is eMMC in the best-case scenario? I always thought it was terrible, but maybe it was the laptop that was holding it back. I remember going from eMMC to a SATA SSD was a huge jump in responsiveness. While the SSD helped, I think it was really the much more powerful i5-6500, compared to the Intel Celeron N3350 I had before, that contributed most of the performance boost. I even notice a major difference going from SATA to M.2. I just love the fact that the PC boots up in less than ten seconds, not to mention game loading times are significantly shorter. People say that the difference between SATA SSDs and M.2 SSDs is negligible, but I disagree, because I can certainly notice a difference in boot and loading times, not to mention a huge difference in benchmark scores.
Be careful about boot time differences.

Many things can impact that. The difference between a SATA III SSD and an NVMe SSD is not as large as you might think.

For instance, the first half of the boot process has nothing to do with the drive at all; that is all BIOS and motherboard.
 
How fast is eMMC in the best-case scenario? I always thought it was terrible, but maybe it was the laptop that was holding it back. I remember going from eMMC to a SATA SSD was a huge jump in responsiveness. While the SSD helped, I think it was really the much more powerful i5-6500, compared to the Intel Celeron N3350 I had before, that contributed most of the performance boost. I even notice a major difference going from SATA to M.2. I just love the fact that the PC boots up in less than ten seconds, not to mention game loading times are significantly shorter. People say that the difference between SATA SSDs and M.2 SSDs is negligible, but I disagree, because I can certainly notice a difference in boot and loading times, not to mention a huge difference in benchmark scores.

eMMC flash can be as slow as 25MB/s, so about the same as a cheap USB flash drive. It still has the benefit of no seek time, so it isn't horrifically bad for an OS; just don't expect things to install quickly. It takes a good half hour for my little stick computer to do a normal Windows update, which barely fits in 32GB.
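That half-hour figure lines up with simple arithmetic. A back-of-envelope sketch using the 25MB/s low-end figure and illustrative sizes:

```python
# How long a transfer takes at eMMC-class speed. 25 MB/s is the low-end
# figure quoted above; the sizes are illustrative, not measured.
def transfer_minutes(size_gb: float, speed_mb_s: float = 25) -> float:
    return size_gb * 1024 / speed_mb_s / 60

for size_gb in (4, 10, 30):
    print(f"{size_gb:>3} GB at 25 MB/s ~ {transfer_minutes(size_gb):.0f} min")
# ~20 minutes just to shuffle 30 GB, before any CPU work, which is
# roughly why a Windows update on a stick PC feels endless.
```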
 
eMMC flash can be as slow as 25MB/s, so about the same as a cheap USB flash drive. It still has the benefit of no seek time, so it isn't horrifically bad for an OS; just don't expect things to install quickly. It takes a good half hour for my little stick computer to do a normal Windows update, which barely fits in 32GB.
That is why running Fortnite off a microSD card felt fast; it was probably faster than the eMMC in my laptop. I had to run Fortnite off the microSD card because my old laptop only had 64GB of storage. Not that it was worth it anyway: it ran at 18fps at 800x600 lowest settings with a 33% resolution scale, so really 266x200.
 
SSHDs would speed up OS boot time significantly, because that data is accessed frequently and therefore ended up on the solid-state part of the drive. Otherwise they performed identically to a regular HDD. SSDs used to be mega-expensive per GB while HDDs were really cheap, so SSHDs were a compromise between them that lasted until SSD pricing got cheap.
 
RAID 0 isn't really RAID. RAID is an acronym: Redundant Array of Independent (or Inexpensive) Disks.

So-called RAID 0 violates the redundancy requirement. If a single disk in such an array fails, the entire array fails with no recovery possible, because the data is split between the drives in alternating stripes. If one drive fails, half the information is lost.

RAID is for applications where downtime cannot be tolerated, online shopping or banking for example, where downtime costs money. RAID allows continued operation, at reduced capacity, until the failed drive can be replaced and rebuilt. It serves no useful function in home use and only creates unneeded complexity. RAID is not, and never was, intended as a backup of your data.

Though today RAID isn't the panacea it once was: drive sizes have reached the point where rebuilding a failed drive can take days or even a week or more, which puts more stress on the remaining drives and increases the chance of another failure in one of the other drives before the rebuild finishes, taking the entire array down.

RAID, as said above, is NOT a backup in itself.

RAID 0 was really about raw read performance, for when you needed to pull a ton of data off the older spinning HDDs and didn't care about redundancy because you already had the data archived elsewhere. RAID 3/5/6 were meant to accomplish a similar goal, pulling lots of data out of a system, while the system stays operational through a drive failure. Staying operational was the important part; the drive rebuild was just a nice benefit because, as you said, RAID is not a backup solution. The data should already be stored elsewhere, so data-recovery objectives are met without the RAID.

If the scenario is "a drive just died", what is the time to recovery of the service? With a RAIDx solution it's zero, because the system is still going and, while slow, the service is still being provided while the team executes a recovery plan. With JBOD/RAID 0 the data is inaccessible and the service goes down while you execute the recovery plan. It's all about ensuring maximum service uptime, which is why it's not really that useful for home folks.
 
RAID 0 was really about raw read performance, for when you needed to pull a ton of data off the older spinning HDDs and didn't care about redundancy because you already had the data archived elsewhere. RAID 3/5/6 were meant to accomplish a similar goal, pulling lots of data out of a system, while the system stays operational through a drive failure. Staying operational was the important part; the drive rebuild was just a nice benefit because, as you said, RAID is not a backup solution. The data should already be stored elsewhere, so data-recovery objectives are met without the RAID.

If the scenario is "a drive just died", what is the time to recovery of the service? With a RAIDx solution it's zero, because the system is still going and, while slow, the service is still being provided while the team executes a recovery plan. With JBOD/RAID 0 the data is inaccessible and the service goes down while you execute the recovery plan. It's all about ensuring maximum service uptime, which is why it's not really that useful for home folks.
With larger drive and data sizes, the rebuild time (and downtime) for a RAID 5 can be problematic.

When I changed from 4x 3TB to 4x 4TB, RAID 5, 6.5TB data in it...
The rebuild for each new drive was 6-7 hours.

I already had a good full copy, I just wanted to see how the QNAP handled a single drive replacement.
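Those 6-7 hours per drive line up with a simple estimate: rebuild time is roughly capacity divided by sustained throughput. A sketch with round numbers (real rebuilds also compete with ongoing I/O, so these are best cases):

```python
# Rough rebuild-time estimate: the replacement drive is written more or
# less end to end, so time ~ capacity / sustained throughput.
# Throughput figures below are round illustrative numbers.
def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    return capacity_tb * 1_000_000 / throughput_mb_s / 3600

print(f"4 TB at 170 MB/s  ~ {rebuild_hours(4, 170):.1f} h")    # close to the QNAP case above
print(f"20 TB at 170 MB/s ~ {rebuild_hours(20, 170):.1f} h")   # why huge arrays can take days
```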
 
With larger drive and data sizes, the rebuild time (and downtime) for a RAID 5 can be problematic.

When I changed from 4x 3TB to 4x 4TB, RAID 5, 6.5TB data in it...
The rebuild for each new drive was 6-7 hours.

I already had a good full copy, I just wanted to see how the QNAP handled a single drive replacement.

There is no downtime, and the rebuild time is irrelevant. The point was to ensure the service was still available. Notice I said service; it's the only thing that really matters. The data is already backed up somewhere else; it's already safe.

So, two systems doing the exact same thing, one with RAIDx and the other with JBOD, both experience a catastrophic drive failure at the same time. Which system is now DoA, and which one is simply degraded? The JBOD is dead and will stay dead until the data restoration is complete; its RTO is however long it takes to restore the data from archive. The RAIDx is still operational, and whatever business service it's supporting is still available while the data is restored, therefore an RTO of zero.

If rebuild time is an actual concern, you don't rebuild; instead you restore the data from archive to a secondary system, do a real-time diff to pick up any recent updates from the still-online but degraded system, then take a momentary outage while you switch them over.
 