Solidigm's D5-P5336 SSD weds extreme capacity with solid performance.
Solidigm Launches 61.44TB PCIe SSD: Up to 7,000 MB/s
> Isn't PCIe 5.0 out, and if so, why did they use 4.0 for these?

Look at the data transfer rates. They're still not above what PCIe 4.0 can support, so PCIe 5.0 wouldn't provide any significant benefit. Meanwhile, PCIe 5.0 burns more power, which is a negative for datacenter operators.
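For reference, a quick sanity check that PCIe 4.0 x4 still has headroom for the rated 7,000 MB/s (this is just standard PCIe link arithmetic, nothing drive-specific):

```python
# PCIe 4.0 link-bandwidth ceiling for a x4 NVMe SSD.
GIGATRANSFERS = 16e9     # 16 GT/s per lane for PCIe 4.0
ENCODING = 128 / 130     # 128b/130b line-code efficiency
LANES = 4

bytes_per_sec = GIGATRANSFERS * ENCODING * LANES / 8
print(f"PCIe 4.0 x4 ceiling: {bytes_per_sec / 1e9:.2f} GB/s")
# -> ~7.88 GB/s, above the drive's rated 7,000 MB/s, so PCIe 5.0
#    wouldn't have bought this drive any extra sequential throughput.
```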
> I think it's cute how they compare against industry-standard TLC drives, while continuing to sell their own TLC drives as a more premium product than their QLC models.

I don't know why, other than price (which for an enterprise user or data center probably doesn't even apply the way it would for a consumer), you would create a huge, fast, and still expensive drive like this and then use QLC rather than TLC. Anybody who's going to pay that kind of money for a single drive will surely expect something with a very good endurance rating, and we know QLC is going to die faster, so the choice is pretty baffling for this product. For consumers looking for the cheapest SSD options they can find, fine; but for applications where uptime and longevity are far more important to those implementing them, you'd think it would be a no-brainer. IMO.
> I don't know why, other than price, ... you would create a huge, fast and still expensive drive like this and then use QLC rather than TLC.

Size matters. If someone has a large, read-mostly database that they want to keep on SSD, then this could be just the thing. Sure, you can always RAID together multiple smaller drives, but that's both more expensive and more energy-intensive.
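As a back-of-envelope illustration (the eight-drive count and the per-drive idle figure below are assumptions for the sake of the comparison, not specs from the article):

```python
# Rough comparison: one 61.44 TB drive vs. eight 7.68 TB drives
# striped together for roughly the same usable capacity.
IDLE_W_PER_DRIVE = 5  # assumed idle draw, typical for datacenter SSDs

single_drive_idle = 1 * IDLE_W_PER_DRIVE   # 5 W
eight_drive_idle = 8 * IDLE_W_PER_DRIVE    # 40 W

print(f"single drive idle: {single_drive_idle} W")
print(f"8-drive array idle: {eight_drive_idle} W")
# Eight controllers each pay their own idle-power floor, and you burn
# eight drive bays, so consolidation saves both energy and rack space.
```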
> My question wasn't why you'd want or need a drive this size, but why you'd use QLC NAND that doesn't have nearly the endurance/longevity as TLC NAND, ...

Currently, I think the largest TLC drives are only 32 TB? Of course, you'd expect the capacity increase to be < 33%, going from TLC to QLC. And, as I said, there are "read-mostly" applications involving online storage, where write endurance isn't a major consideration.
> Meanwhile, PCIe 5.0 burns more power, which is a negative for datacenter operators.

Not to mention PCIe 4.0 is a more mature specification that doesn't have THESE problems. And if a data center also has to purchase drive-specific coolers for every PCIe 5.0 drive they put in service, that could get terribly expensive once you add the cost of the additional cooling to the cost of the drives themselves, assuming we're talking about a facility that would be utilizing a bunch of them. But even aside from the cost, you can't afford to have drives crashing. Throttling is somewhat acceptable; crashing can't be happening for cloud, datacenter, or enterprise users.
> if a data center also has to purchase drive-specific coolers for every PCIe 5.0 drive they put in service, that could get terribly expensive ...

These drives have 5 W idle and 25 W active power, as is. So, they already require substantial airflow, which a server chassis would typically have. Believe it or not, 5 W idle power is actually good for a datacenter SSD!
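To put those numbers in perspective, here's a minimal sketch of what a densely populated chassis would draw (the 32-drive count is a hypothetical E1.L configuration, not something from the spec sheet):

```python
# Chassis-level power estimate using the per-drive figures quoted above.
IDLE_W, ACTIVE_W = 5, 25
DRIVE_COUNT = 32  # hypothetical fully loaded E1.L server

print(f"all idle:   {DRIVE_COUNT * IDLE_W} W")    # 160 W
print(f"all active: {DRIVE_COUNT * ACTIVE_W} W")  # 800 W
# Even the idle floor is non-trivial at this density, which is why
# a 5 W idle figure counts as good for a datacenter SSD.
```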
> My question wasn't why you'd want or need a drive this size, but why you'd use QLC NAND that doesn't have nearly the endurance/longevity as TLC NAND, on a drive that somebody is going to have to pay this much money for. Sure, TLC is more expensive, but if you're already paying that kind of price for a drive anyhow, I doubt anybody is going to balk at paying the difference between QLC and TLC if it means a drive with much better endurance.

Indeed. With DC uses, you're really going to need the better endurance.
> Currently, I think the largest TLC drives are only 32 TB? Of course, you'd expect the capacity increase to be < 33%, going from TLC to QLC. And, as I said, there are "read-mostly" applications involving online storage, where write endurance isn't a major consideration.

Just like every QLC drive offering exactly round sizes 😉 Also, they can make those claims because they bought Intel's NAND business, and Intel used to claim very good endurance (because of CTP or something), but I've seen less-than-stellar results running Intel QLC drives, with them actually WEARING OUT rather than the controller dying first.
Furthermore, they claim to offer very good endurance on these models, with up to 0.58 DWPD on the 64 TB version. Standard 5-year warranty, as well.
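For anyone who wants to sanity-check that rating, DWPD converts to total bytes written like this (a minimal sketch using the 61.44 TB capacity and the 5-year warranty mentioned above):

```python
# Convert a DWPD (drive writes per day) rating into lifetime writes.
capacity_tb = 61.44
dwpd = 0.58
warranty_years = 5

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"~{tbw:,.0f} TB (~{tbw / 1000:.0f} PB) over the warranty period")
# -> roughly 65,000 TB written, so "QLC endurance" at this capacity
#    is a much bigger write budget than the label might suggest.
```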
> Well, that's true. If it's a read-mostly application, then these being QLC isn't as much of a concern at all. A 5-year warranty isn't bad, but then again, if you're a data center or enterprise user, you don't WANT to be using any product where you have to take advantage of the warranty anyhow, honestly.

My old VM setup used 12 Intel QLC drives, and even though the workload was mostly read-only, the performance was still notably worse than comparable TLC drives. Ditto for the Intel Optane H10s I have. Actually unreliable garbage, those! And ironically, if the Optane section dies, the whole drive dies?? Seems to die frequently, too. I already had one DOA 512GB H10 and two slow-responding 256GB H10s.
Definitely makes more sense, though, as you say, if it's primarily for read operations.
> These drives have 5 W idle and 25 W active power, as is. So, they already require substantial airflow, which a server chassis would typically have. Believe it or not, 5 W idle power is actually good for a datacenter SSD!

I actually run a Samsung PM1733 7.68TB SSD in my PC, and it runs so hot sitting in the drive bays of my 5000D that I moved it right in front of my fans (and had to turn up their speed, too) to cool it down. I'm probably switching to a 4TB Lexar NM790 soon so I can get rid of the jank adapter mess I have going on in there now, too.
They aren't even available in M.2 form factor, either. Take your pick from: U.2, E3.S, or E1.L.
> Well, I wasn't trying to agree that I thought QLC is a good idea for ANY drives, but was trying not to be an asshat and disagree with the idea that, for mostly read operations, the potential negatives are largely diminished. That doesn't mean there are none, just fewer.

Of course, I agree, but unfortunately my experience with QLC drives somehow didn't match up to the "you'll be fine with mostly read operations" that I was promised.
> unfortunately my experience with QLC drives somehow didn't match up to the "you'll be fine with mostly read operations" that I was promised

Thanks for sharing. Were these datacenter SSDs or consumer models?
> Thanks for sharing. Were these datacenter SSDs or consumer models?

Does it make a difference in terms of, well, anything?
> Does it make a difference in terms of, well, anything?

Yes. Even if the underlying NAND chips and controllers are the same (let's say they are, for the sake of argument), datacenter SSDs differ in the amount of overprovisioning they have, as well as in the algorithms used by their controllers.
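As a rough illustration of the overprovisioning point (all the capacities below are made-up examples, not figures for any particular product):

```python
# Overprovisioning (OP): raw NAND held back beyond the user-visible size.
def op_percent(raw_tb: float, usable_tb: float) -> float:
    """Spare area as a percentage of the usable capacity."""
    return (raw_tb - usable_tb) / usable_tb * 100

# The same raw NAND, provisioned two different ways:
print(f"consumer-style:   {op_percent(1.024, 1.000):.1f}% OP")  # ~2.4%
print(f"datacenter-style: {op_percent(1.024, 0.800):.1f}% OP")  # ~28.0%
# The extra spare area gives the controller room for garbage collection,
# which lowers write amplification and stretches effective endurance.
```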
> Thanks for sharing. Were these datacenter SSDs or consumer models?

Consumer 670p (bear in mind there's even a separation between consumer, Pro, and DC drives; there are possibly four or even five tiers). But the expected write duty was around 5%, and sure enough, in those six months it was only 10TB per drive.
> Consumer 670p ... the expected write duty was around 5%, and sure enough, in those six months it was only 10TB per drive.

Thanks for the specifics. I wonder about temperature, though. Did you check their typical operating temperature against the manufacturer's spec? According to this, the max operating temperature is 70 °C.
10TB in six months of uptime and they crapped out. I've written more to some of my personal drives on the first day!
The WORKSTATION's issue wasn't even with writes; it was with reads.
> Thanks for the specifics. I wonder about temperature, though. Did you check their typical operating temperature against the manufacturer's spec? According to this, the max operating temperature is 70 °C.

It hardly did write operations, and it was cooled to 40 °C.
Solidigm 670p Series SSD | www.solidigm.com
BTW, that also specifies a TBW figure of 185 TB, so you were at only about 5% of that. I do wonder if that was host writes... it seems to me a whole lot of tiny logfile updates can combine with write amplification to result in disproportionately higher wear. Presumably the drives have ways to mitigate that, but I do wonder.
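To make the logfile point concrete, here's a toy model of how tiny synchronous appends can inflate NAND wear (the page and record sizes are assumptions for illustration only):

```python
# Toy worst-case write-amplification model: every small flushed append
# still costs at least one full NAND page program.
page_bytes = 16 * 1024   # assumed NAND page size
append_bytes = 256       # a small logfile record, flushed every time

waf = page_bytes / append_bytes
print(f"worst-case write amplification: {waf:.0f}x")  # 64x
# So 10 TB of host writes could translate into far more NAND wear,
# though real controllers buffer and coalesce writes to mitigate this.
```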
I'm just trying to see if we can find some insight into why the failure rate was so bad, but maybe it was just a bad batch of drives or of NAND chips.