News: SSD Prices Have Fallen 15 to 30 Percent Since January

peachpuff

Reputable
BANNED
Apr 6, 2021
690
733
5,760
 
  • Like
Reactions: bit_user

Deleted member 14196

Guest
Get them now before they skyrocket. Samsung is losing billions on memory this year so they're cutting back, other manufacturers are doing the same.

There’s no need for high-pressure sales tactics. Should the economic downturn continue, they still won’t be able to demand the prices they were asking, even if they cut supply, because most people already have what they need.
 

PEnns

Reputable
Apr 25, 2020
702
747
5,770
Let them drop, it's still not enough for me.

The sweet spot that defines cheaper SSDs for me is when a 2 TB SSD costs no more than $100.
 

abufrejoval

Reputable
Jun 19, 2020
601
433
5,260
I want a cheap Crucial MX500 SATA SSD. I have ten slots to put to good use. Cold storage :)
I understand the appeal, but it may not be such a good idea: trapped electrons tend to wander eventually, and all the clever firmware that makes multi-level cells work reliably, compensating with ECC and even rewriting drifting blocks, can't do anything while the power is off.

Consumer SSDs need to retain data for five years when powered off; I believe it's much shorter than that for data center drives. But I don't know whether they are tested for that after they've reached single digits of remaining life expectancy and temperatures >60°C, both of which can be rather detrimental to data retention.

I've not yet lost any data to any older SSD lying around for years. But then I've also generally tried to avoid leaving them unused. Perhaps it would be interesting to check some of the first-generation drives, some of which are actually still IDE: 120 GB, but also SLC or MLC, certainly no TLC or QLC.

In any case, spinning rust is really much better for truly cold storage; these Crucial drives are more suited to a standby archive, perhaps with some versioning.

I also use SATA SSDs in RAID 0 as a Steam game cache, where an eventual failure just means reloading from the Internet. RAID5/6 are too much of an SSD killer because of the write amplification, another reason why I stick with conventional HDDs there.

Too bad all those SATA ports are disappearing from mainboards, and sacrificing one of only 1-3 NVMe PCIe 5.0 slots on a 250 GB NVMe 3.0 drive seems out of the question: recycling older NVMe hardware that is still technically viable becomes an economic no-go.
 
  • Like
Reactions: Amdlova

abufrejoval

Reputable
Jun 19, 2020
601
433
5,260
I wish there were affordable PCIe switch chips...

4TB NVMe 4.0 drives are nearing a price range in $/TB that isn't that different from SATA SSDs: all the typical markups for NVMe and PCIe 4.0 have moved to PCIe 5.0 or capacities above 4 TB.

But that leaves me with plenty good 1TB NVMe 3.0 drives, which are still far too good to throw away, yet bursting at their seams.

With SCSI/SATA/SAS that was all easy: just put an HBA or even a RAID adapter in the middle, and reusing capacity was merely a question of a big enough enclosure, noise, and power.

But with PCIe storage, we really need those switches, ideally something like a 4x PCIe 3.0 to 1x PCIe 5.0 variant that goes into one of those M.2 slots, while the NVMe drives go into some kind of cheap hot-swap caddy, like we have with SATA. I wouldn't even mind 2:1 or 4:1 oversubscription; PCIe 5.0 x4 is plenty fast even if 16 PCIe 3.0 NVMe drives could deliver more.
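A quick sanity check of that oversubscription claim, as a sketch: the per-lane figures below are the approximate usable throughputs of PCIe 3.0 and 5.0 after 128b/130b encoding overhead, and the 16-drive fan-in is the hypothetical switch described above, not any real product.

```python
# Back-of-the-envelope bandwidth for a hypothetical switch: many
# PCIe 3.0 x4 NVMe drives behind a single PCIe 5.0 x4 uplink.
# Usable GB/s per lane = (GT/s) * 128/130 encoding efficiency / 8 bits:
#   PCIe 3.0:  8 GT/s -> ~0.985 GB/s per lane
#   PCIe 5.0: 32 GT/s -> ~3.938 GB/s per lane
GBPS_PER_LANE = {"3.0": 8 * 128 / 130 / 8, "5.0": 32 * 128 / 130 / 8}

def link_bw(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    return GBPS_PER_LANE[gen] * lanes

uplink = link_bw("5.0", 4)       # ~15.75 GB/s
per_drive = link_bw("3.0", 4)    # ~3.94 GB/s
drives = 16
oversubscription = drives * per_drive / uplink

print(f"uplink: {uplink:.2f} GB/s")
print(f"16 drives could source: {drives * per_drive:.1f} GB/s")
print(f"oversubscription ratio: {oversubscription:.0f}:1")
```

So 16 PCIe 3.0 x4 drives behind one 5.0 x4 uplink works out to exactly 4:1 oversubscription, which matches the "plenty fast even if the drives could deliver more" point above.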

But any switch chip out there seems to cost more than a 4 TB NVMe 4.0 drive, and enclosures aren't cheap either, AFAIK. Is there really nobody out there to pick up this gauntlet?
 

bit_user

Titan
Ambassador
I want a cheap Crucial MX500 SATA SSD. I have ten slots to put to good use. Cold storage :)
Depends on how cold. If you go a couple years without plugging them in or turning them on, you run a serious risk of data loss.

SSDs are not designed to be a cold storage medium. HDDs are better for that, but tape and optical are best.

Consumer SSDs need to retain data for five years when powered off;
Absolutely not. You're spreading some really bad information, here.

First, almost no consumer SSDs even come with a 5-year warranty.

Second, show me even a single datasheet or manufacturer website which says they do this! I haven't seen a spec for power-off data retention on a consumer SSD for nearly 10 years - and I've been looking!

I've personally seen a 2018-era TLC drive, from a top-tier brand, have bad blocks when I powered up a machine that had been sitting in storage for about 2 years.

I believe it's much shorter than that for data center drives,
The difference is that some data center drives actually have a specification for power-off or EoL data retention. The standard spec seems to be 90 days. In practice, you'll probably get much more than that, but the manufacturer just wants you to know that they only guarantee that much.

but I don't know whether they are tested for that after they've reached single digits of remaining life expectancy and temperatures >60°C, both of which can be rather detrimental to data retention.
The spec is the spec. If they don't make any such qualifications, then as long as it's still under warranty and you haven't exceeded any of the other limits, it should still retain data according to what they've stated.

I've not yet lost any data to any older SSD lying around for years.
"Older" is the key. That means larger cells, which can hold more charge. Also, it helps that they're lower-density, with only 1 or 2 bits per cell.

RAID5/6 are too much of an SSD killer because of the write amplification, another reason why I stick with conventional HDDs there.
Only if you're doing zillions of little sub-megabyte writes. RAID-related write amplification is nothing compared with what SSDs themselves do.

For wearing out SSDs, modern web browsers are probably the worst offenders. According to Windows, the amount of host writes they generate can be really insane.

But, the key point about RAID-related write amplification is that you never have more writes-per-drive than you would if you just used a single one of those drives standalone. So, as long as you're not pushing the single-drive write limit, then there's nothing to worry about with putting it in a RAID-5 or RAID-6.
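That per-drive argument can be put into numbers with a minimal sketch. It assumes large, full-stripe writes evenly distributed across the array (small sub-stripe writes trigger read-modify-write cycles and behave worse, as noted above); the function name and parameters are illustrative, not from any real library.

```python
# Per-drive write load for striped RAID, assuming full-stripe writes.
# In an N-drive array with `parity` parity chunks per stripe, each
# stripe carries N - parity chunks of user data, so total device
# writes = user_writes * N / (N - parity), spread over N drives.

def writes_per_drive(user_writes_tb: float, n_drives: int, parity: int) -> float:
    """Approximate TB written per drive (parity=0: RAID-0, 1: RAID-5, 2: RAID-6)."""
    data_chunks = n_drives - parity
    total_device_writes = user_writes_tb * n_drives / data_chunks
    return total_device_writes / n_drives   # == user_writes_tb / data_chunks

single = 100.0                                   # 100 TB to one standalone SSD
raid5 = writes_per_drive(100.0, 4, parity=1)     # ~33.3 TB per drive
raid6 = writes_per_drive(100.0, 4, parity=2)     # 50 TB per drive
print(single, round(raid5, 1), raid6)
```

Under these assumptions, each drive in a 4-disk RAID-5 or RAID-6 absorbs well under the 100 TB a standalone drive would take, which is the point being made: parity overhead raises total array writes, but never the writes hitting any single drive.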

Too bad all those SATA ports are disappearing from mainboards
All? I haven't seen a board that doesn't have at least 2, but most boards still seem to go with 4 or more.
 
Last edited:
  • Like
Reactions: Amdlova
My cold storage gets a full dump every 3 months. NAND cells can last much longer before losing some bits. I have some SD cards here with MP3s from before 2010 that still read back without errors. Spinning rust drives can die; I have lots of them that locked up in the system before I could extract anything.
 

bit_user

Titan
Ambassador
NAND cells can last much longer before losing some bits. I have some SD cards here with MP3s from before 2010 that still read back without errors.
Once you power them on, the embedded controller might do a refresh. Certainly, that happens with newer storage devices.

Don't use retention characteristics of older NAND to set your expectations for newer NAND. The compute & phone market overwhelmingly cares about GB/$. For the performance tier (which is probably less of the storage market than you might think), there's an added focus on performance. However, there is almost no incentive for NAND manufacturers to prioritize cold storage duration. By the time end users even hit this case, the device it's in is usually well beyond the warranty period.

I have a 32 GB eReader from 2017-2018 that will lose all its contents if I go more than about 3-4 months without recharging it.
 
  • Like
Reactions: Amdlova