News Solidigm Launches 61.44TB PCIe SSD: Up to 7,000 MB/s

Intel/Solidigm conveniently left out Kioxia's 30.72 TB SSDs, of course.
And it's definitely not because Kioxia can't do QLC; it's that QLC is still riddled with issues.
 
This was earlier than expected. I had thought they’d launch in a few more months. Interestingly, while I can find some D5-P5336 drives listed for sale, none of them are 61.44 TB, the product of interest. For 32 TB-class SSDs, the Micron ones would be far superior with a 4 KiB indirection unit and TLC endurance/performance.

The 30.72 TB one is a mere $1,800 which is quite a drop from the D5-P5316’s $2,400. Assuming $3,600 for the 61.44 TB one when it gets listed, the dollars per TB effectively gets cut in half since last year.

Obviously that is still double the price of consumer SSDs. Bottom-of-the-barrel SSDs can be had for $32/TB now. At that rate, a 61.44 TB SSD could be a mere $2,000. One can dream. 😏
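Just to spell out that napkin math (the $3,600 price for the 61.44 TB model is purely an assumption on my part, and $32/TB is the bargain-bin consumer figure):

```python
# Napkin $/TB comparison. The $3,600 figure for the 61.44 TB model is assumed,
# not a listed price; the consumer line just scales $32/TB up to 61.44 TB.
drives = {
    "D5-P5336 30.72 TB (listed)":  (1800, 30.72),
    "D5-P5336 61.44 TB (assumed)": (3600, 61.44),
    "Bargain consumer SSD":        (32 * 61.44, 61.44),
}

for name, (price, tb) in drives.items():
    print(f"{name:30s} ${price:8,.0f}  ->  ${price / tb:6.2f}/TB")
```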
 
Yeah, I'm betting these will cost more than $4,000 each. Anything new, or of extremely large capacity, commands higher-than-average price-per-TB figures simply because people and companies will pay for the convenience of that much capacity in one device. Dream all you want; it will likely be a while before we see anything that large come in at anything near those prices.

But even so, these are obviously all going to data centers at immensely large facilities anyhow, and maybe to a few very rich fools who don't care about the cost. For the average person, or even enthusiast-grade users, it will be years and years before products like this have any relevance for us. Which I know isn't going to come as news to any of you.
 
I think it's cute how they compare against industry-standard TLC drives, while continuing to sell their own TLC drives as a more premium product than their QLC models.
I don't know why, other than price (which for an enterprise user or data center probably doesn't even apply the way it would for a consumer), you would create a huge, fast and still expensive drive like this and then use QLC rather than TLC. Anybody who's going to pay that kind of money for a single drive will surely expect something with a very good endurance rating, and we know QLC wears out faster, so the choice is pretty baffling for this product. For consumers looking for the cheapest SSD options they can find, fine, but for applications where uptime and longevity are probably far more important to those implementing them, you'd think TLC would be a no-brainer. IMO.
 
I don't know why, other than price, ... you would create a huge, fast and still expensive drive like this and then use QLC rather than TLC.
Size matters. If someone has a large, read-mostly database that they want to keep on SSD, then this could be just the thing. Sure, you can always RAID together multiple smaller drives, but that's both more expensive and more energy-intensive.

Also, bigger drives mean higher density (i.e., more storage per rack unit), meaning you might get by with fewer servers for the same total storage.

Finally, consider that AMD's new Bergamo means the standard 2P workhorse server can now have up to 512 vCPUs. Assuming the number of vCPUs per VM stays constant, that means you need more storage (and RAM) to support more tenants per machine.
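A rough sizing sketch of what I mean (the flash-per-vCPU ratio is completely made up, just to show how capacity per drive changes the drive count per box):

```python
import math

# Hypothetical sizing for a 2P Bergamo box (512 vCPUs), provisioning a fixed
# amount of local flash per vCPU. The 0.5 TB/vCPU ratio is purely an
# assumption for illustration.
vcpus = 512
tb_per_vcpu = 0.5                  # assumed provisioning ratio
needed_tb = vcpus * tb_per_vcpu    # 256 TB of local flash per server

for drive_tb in (15.36, 30.72, 61.44):
    drives = math.ceil(needed_tb / drive_tb)
    print(f"{drive_tb:5.2f} TB drives -> {drives} per server")
```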

That said, I have no real cloud computing experience to speak of. These are just some thoughts that came to mind. Maybe some are invalidated for reasons I haven't considered.
 
My question wasn't why you'd want or need a drive this size, but why you'd use QLC NAND that doesn't have nearly the endurance/longevity of TLC NAND on a drive that somebody is going to have to pay this much money for. Sure, TLC is more expensive, but if you're already paying that kind of price for a drive anyhow, I doubt anybody is going to balk at paying the difference between QLC and TLC if it means a drive with much better endurance.
 
My question wasn't why you'd want or need a drive this size, but why you'd use QLC NAND that doesn't have nearly the endurance/longevity of TLC NAND,
Currently, I think the largest TLC drives are only 32 TB? Of course, you'd expect the capacity increase to be < 33%, going from TLC to QLC. And, as I said, there are "read-mostly" applications involving online storage, where write-endurance isn't a major consideration.

Furthermore, they claim to offer very good endurance on these models, with up to 0.58 DWPD on the 64 TB version. Standard 5 year warranty, as well.
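For a sense of scale, that rating works out to a fair bit of absolute write volume (my own arithmetic, using the quoted 0.58 DWPD and the 5-year warranty):

```python
# What 0.58 DWPD works out to in absolute writes over a 5-year warranty.
capacity_tb = 61.44
dwpd = 0.58
years = 5

daily_tb = dwpd * capacity_tb                # ~35.6 TB/day
lifetime_pb = daily_tb * 365 * years / 1000  # ~65 PB total
print(f"~{daily_tb:.1f} TB/day, ~{lifetime_pb:.0f} PB over {years} years")
```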
 
Well, that's true. If it's a read-mostly application then these being QLC isn't much of a concern at all. A 5-year warranty isn't bad, but then again, if you're a data center or enterprise user, you don't WANT to be using any product where you have to take advantage of the warranty in the first place, honestly.

Definitely makes more sense though as you say if it's primarily for read operations.
 
Meanwhile, PCIe 5.0 burns more power, which is a negative for datacenter operators.
Not to mention PCIe 4.0 is a more mature specification that doesn't have THESE problems. And if a data center has to ALSO purchase drive-specific coolers for every PCIe 5.0 drive they put in service, that could get terribly expensive once you add the cost of the additional cooling to the cost of the drives themselves, assuming of course that we're talking about a facility that would be using a bunch of them. But even aside from the cost, you can't afford to have drives crashing. Throttling is somewhat acceptable; crashing can't be happening for cloud, data center, or enterprise users.

 
if a data center has to ALSO purchase drive-specific coolers for every PCIe 5.0 drive they put in service, that could get terribly expensive once you add the cost of the additional cooling to the cost of the drives themselves, assuming of course that we're talking about a facility that would be using a bunch of them.
These drives have 5 W idle and 25 W active power, as is. So, they already require substantial airflow, which a server chassis would typically have. Believe it or not, 5 W idle power is actually good, for a datacenter SSD!
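For perspective, here's roughly what those power figures cost to feed, per drive per year (the electricity rate and PUE overhead are just assumptions):

```python
# Running cost of 5 W idle / 25 W active, per drive per year. The electricity
# rate and PUE (facility overhead) values are assumptions for illustration.
hours_per_year = 24 * 365
usd_per_kwh = 0.10   # assumed rate
pue = 1.5            # assumed cooling/power-delivery overhead

for label, watts in (("idle", 5), ("active", 25)):
    kwh = watts * hours_per_year / 1000
    print(f"{label:6s}: {kwh:5.1f} kWh/yr -> ${kwh * usd_per_kwh * pue:5.2f}/yr incl. PUE")
```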

They aren't even available in M.2 form factor, either. Take your pick from: U.2, E3.S, or E1.L.
 
My question wasn't why you'd want or need a drive this size, but why you'd use QLC NAND that doesn't have nearly the endurance/longevity of TLC NAND on a drive that somebody is going to have to pay this much money for. Sure, TLC is more expensive, but if you're already paying that kind of price for a drive anyhow, I doubt anybody is going to balk at paying the difference between QLC and TLC if it means a drive with much better endurance.
Indeed. With DC uses you're really going to need the better endurance.
Currently, I think the largest TLC drives are only 32 TB? Of course, you'd expect the capacity increase to be < 33%, going from TLC to QLC. And, as I said, there are "read-mostly" applications involving online storage, where write-endurance isn't a major consideration.

Furthermore, they claim to offer very good endurance on these models, with up to 0.58 DWPD on the 64 TB version. Standard 5 year warranty, as well.
Just like every QLC drive offering exactly round sizes 😉 And of course they claim that: they bought Intel's NAND business, and Intel also used to claim very good endurance (because of CTP or something), but I've seen less-than-stellar results running Intel QLC drives, with them actually WEARING OUT rather than the controller dying first.
After I lost the 10th one in my array in a year to premature wear (it didn't even reach 50%!), I got them RMA'd, sold them, and went TLC ONLY from then on.
Also, I got more VMs out of TLC too, so QLC didn't ACTUALLY save me (or my customer) any money. Wound up having to take the server back and throw on new SSDs... What a nightmare.
Well, that's true. If it's a read-mostly application then these being QLC isn't much of a concern at all. A 5-year warranty isn't bad, but then again, if you're a data center or enterprise user, you don't WANT to be using any product where you have to take advantage of the warranty in the first place, honestly.

Definitely makes more sense though as you say if it's primarily for read operations.
My old VM setup used 12 Intel QLC drives, and even for read-only work the performance was still notably worse than comparable TLC drives. Ditto for the Intel Optane H10s I have; those are genuinely unreliable garbage. And ironically, if the Optane section dies, the whole drive dies?? It seems to happen frequently, too. I've already had 1 DOA 512GB H10 and 2 slow-responding 256GB H10s.
These drives have 5 W idle and 25 W active power, as is. So, they already require substantial airflow, which a server chassis would typically have. Believe it or not, 5 W idle power is actually good, for a datacenter SSD!

They aren't even available in M.2 form factor, either. Take your pick from: U.2, E3.S, or E1.L.
I actually run a Samsung PM1733 7.68TB SSD in my PC, and it runs so hot sitting in the drive bays of my 5000D that I moved it right in front of the fans (and had to turn up their speed too) to cool it down. I'm probably switching to a 4TB Lexar NM790 soon so I can get rid of the janky adapter mess I have going on in there now too.

However, I suspect it's Gen 4 because it's an old design. They could have gone PCIe 5.0 with just 2 lanes, which wouldn't increase the power budget one bit and wouldn't leave so much on the table for newer servers, unless their customer base is purely older servers... which would be funny, because Ice Lake wasn't particularly popular. Most PCIe Gen 4 servers out there would be Rome/Milan.
 
Well, I wasn't trying to say I think QLC is a good idea for ANY drive; I just wasn't going to be an asshat and disagree with the idea that for mostly-read operations the potential negatives are largely diminished. That doesn't mean there are none, just fewer.
 
Well, I wasn't trying to say I think QLC is a good idea for ANY drive; I just wasn't going to be an asshat and disagree with the idea that for mostly-read operations the potential negatives are largely diminished. That doesn't mean there are none, just fewer.
Of course, and I agree, but unfortunately my experience with QLC drives somehow didn't match the "you'll be fine with mostly read operations" that I was promised.
 
Does it make a difference in terms of, well, anything?
Yes. Even if the underlying NAND chips and controllers are the same (let's say they are, for the sake of argument), Datacenter SSDs differ in the amount of overprovisioning they have, as well as the algorithms used by their controllers.

If you read the article and click through the slides, Solidigm made some fairly specific and seemingly impressive claims about these QLC SSDs. However, I did just notice that their comparison table references competing drives of lower capacity, which renders the endurance value somewhat less meaningful.
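To illustrate why comparing endurance across different capacities is slippery: DWPD is normalized to drive size, so the same absolute write budget produces very different DWPD figures (the smaller capacities below are just for illustration):

```python
# DWPD is relative to capacity, so the same absolute write budget looks like a
# much bigger DWPD number on a smaller drive. Capacities below are illustrative.
def dwpd(write_budget_pb, capacity_tb, years=5):
    return write_budget_pb * 1000 / (capacity_tb * 365 * years)

budget_pb = 65  # roughly what 0.58 DWPD on 61.44 TB totals over 5 years
for cap_tb in (15.36, 30.72, 61.44):
    print(f"{cap_tb:5.2f} TB drive: {dwpd(budget_pb, cap_tb):.2f} DWPD for the same {budget_pb} PB")
```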

Without knowing the specific model(s) @DaveLTX used or specifics about the operating conditions, we can't get much insight into whether or how much the drives truly underperformed. But, if he used consumer drives in a server, then pretty much all bets are off. So, I think it's relevant to at least confirm these were non-consumer/M.2 drives.

It's very interesting to compare Intel QLC consumer and Datacenter drives. These models are both 2021 vintage:

One thing that immediately jumps out at me is the appalling 4k random write performance of the DC drive, whereas the client 670P seems to offer very respectable random write performance. It's explained by the following disclaimer:
"The P5316 results are expected, however, due to the drive’s larger indirection unit (IU) of 64KB. Anyone using these SSDs should be sure that their software accounts for this, it’s suggested best practice to issue writes that is IU aligned. As seen here, the P5316 will take writes that are smaller than its IU, but the results are not desirable. This is why drives like this are often put behind a cache or software that can handle write shaping."​

So, if he wasn't aware of this, that could help explain why @DaveLTX had such a poor experience with them!
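To make the IU point concrete, here's a minimal sketch (my own illustration, not Solidigm's tooling) of what issuing IU-aligned writes means in practice:

```python
import os

# Minimal sketch of "IU-aligned writes" for a drive with a 64 KiB indirection
# unit: keep offsets on IU boundaries and pad payloads up to a whole IU, so the
# drive never has to read-modify-write a full unit. Illustrative only; real
# deployments usually put a cache or write-shaping layer in front instead.
IU = 64 * 1024

def round_up_to_iu(n: int) -> int:
    return (n + IU - 1) // IU * IU

def iu_aligned_write(fd: int, data: bytes, offset: int) -> int:
    assert offset % IU == 0, "offset must sit on an IU boundary"
    padded = data + b"\0" * (round_up_to_iu(len(data)) - len(data))
    return os.pwrite(fd, padded, offset)
```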

I wonder if Intel/Solidigm revisited that decision, in these latest-generation QLC drives.
 
Thanks for sharing. Were these Data Center SSDs or consumer models?
Consumer 670p (bear in mind there's even a separation between consumer, Pro, and DC drives; there are possibly 4 or even 5 tiers), but the expected write duty was around 5%, and sure enough over those six months it was only 10TB per drive

10TB in six months of uptime and they crapped out. I've written more to some of my personal drives on the first day!

The WORKSTATION's issue wasn't even with writes; it was with reads.
 
Consumer 670p (bear in mind there's even a separation between consumer, Pro, and DC drives; there are possibly 4 or even 5 tiers), but the expected write duty was around 5%, and sure enough over those six months it was only 10TB per drive

10TB in six months of uptime and they crapped out. I've written more to some of my personal drives on the first day!

The WORKSTATION's issue wasn't even with writes; it was with reads.
Thanks for the specifics. I wonder about temperature, though. Did you check their typical operating temperature against the manufacturer's spec? According to this, the max operating temperature is 70 C.


BTW, that also specifies a TBW figure of 185. So, you were at less than 5% of that. I do wonder if that was host writes... it seems to me a whole lot of tiny logfile updates can combine with write amplification to result in disproportionately higher wear. Presumably the drives have ways to mitigate that, but I do wonder.
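Spelling out the write-amplification angle with some made-up WAF numbers (the only hard figures here are the 10 TB of host writes and the 185 TBW rating):

```python
# Hypothetical write-amplification math: lots of tiny log-style updates can
# turn modest host traffic into much more NAND-level wear. WAF values below
# are illustrative guesses, not measurements.
host_tb = 10        # host writes per drive over six months (from the post above)
tbw_rating = 185    # rated endurance from the spec sheet, in TB of host writes

print(f"Host writes: {host_tb} TB, about {100 * host_tb / tbw_rating:.0f}% of the {tbw_rating} TBW rating")
for waf in (2, 5, 10):
    print(f"  if WAF were {waf:2d}: ~{host_tb * waf} TB actually written to the NAND")
```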

I'm just trying to see if we can find some insight into why the failure rate was so bad, but maybe it was just a bad batch of drives or of NAND chips.
 
Thanks for the specifics. I wonder about temperature, though. Did you check their typical operating temperature against the manufacturer's spec? According to this, the max operating temperature is 70 C.

BTW, that also specifies a TBW figure of 185. So, you were at less than 5% of that. I do wonder if that was host writes... it seems to me a whole lot of tiny logfile updates can combine with write amplification to result in disproportionately higher wear. Presumably the drives have ways to mitigate that, but I do wonder.

I'm just trying to see if we can find some insight into why the failure rate was so bad, but maybe it was just a bad batch of drives or of NAND chips.
It hardly did any write operations, and it was cooled to around 40°C.
Quite a few enterprise drives have the exact same hardware as consumer drives, albeit with the firmware retuned for steady performance rather than peak, but all bets are off when it's QLC; they can't exactly advertise a sustained write speed that's below SATA drives, can they? 🤣