News Gigabyte Unveils 4-Way PCIe 5.0 SSD Adaptor With Up to 60 GBps


teodoreh

Distinguished
Sep 23, 2007
The original Samsung commercial with 24 SSDs managed about 2 GB/s (although in theory it should have hit roughly 13.2 GB/s).
Now you can create a CHEAP array that does 60 GB/s. What a world we live in!
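Back-of-the-envelope, assuming those 24 drives were SATA SSDs at roughly 550 MB/s each (my assumption; I don't recall the exact drives used):

Code:
# Theoretical vs. observed throughput of the old 24-SSD demo.
# The ~550 MB/s per-drive figure is an assumption, not a quoted spec.
drives = 24
per_drive_gbs = 0.55                      # ~550 MB/s per SATA SSD (assumed)

theoretical = drives * per_drive_gbs      # ~13.2 GB/s
observed = 2.0                            # GB/s reported in the demo

print(f"theoretical: {theoretical:.1f} GB/s")
print(f"observed:    {observed:.1f} GB/s ({observed / theoretical:.0%} of theory)")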
 

escksu

Reputable
BANNED
Aug 8, 2019
The original Samsung commercial with 24 SSDs managed about 2 GB/s (although in theory it should have hit roughly 13.2 GB/s).
Now you can create a CHEAP array that does 60 GB/s. What a world we live in!

60 GB/s is possible only during sequential reads/writes and in RAID 0. Since nobody in their right mind will use RAID 0 (RAID 5 is the next best), you will get around 45 GB/s max. And that is provided you are reading into RAM and writing from RAM.
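Rough sketch of where those ceilings come from, assuming each drive is a PCIe 5.0 x4 SSD at roughly 15 GB/s so the four of them fill the x16 slot (the per-drive speed is my assumption, not a figure from the article):

Code:
# Rough sequential-throughput ceilings for a 4-drive array.
# ~15 GB/s per PCIe 5.0 x4 SSD is an assumption, not a measured figure.
n_drives = 4
per_drive_gbs = 15.0

raid0 = n_drives * per_drive_gbs          # all four drives carry data -> ~60 GB/s
raid5 = (n_drives - 1) * per_drive_gbs    # one drive's worth of bandwidth goes to parity -> ~45 GB/s

print(f"RAID 0: ~{raid0:.0f} GB/s")
print(f"RAID 5: ~{raid5:.0f} GB/s")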
 

jp7189

Distinguished
Feb 21, 2012
60 GB/s is possible only during sequential reads/writes and in RAID 0. Since nobody in their right mind will use RAID 0 (RAID 5 is the next best), you will get around 45 GB/s max. And that is provided you are reading into RAM and writing from RAM.
Presumably the XOR would happen off-card for RAID 5, which would kill performance. Likely RAID 10, at half the capacity and performance, is the next best thing to RAID 0 for this card.
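Same toy numbers for RAID 10 on four drives (the per-drive speed and capacity below are assumptions for illustration only):

Code:
# RAID 10 on 4 drives: two mirrored pairs, striped together.
# Per-drive speed and capacity are assumptions for illustration only.
n_drives = 4
per_drive_gbs = 15.0                      # assumed PCIe 5.0 x4 SSD
per_drive_tb = 4.0                        # assumed capacity

capacity_tb = n_drives * per_drive_tb / 2     # every byte is stored twice
write_gbs = n_drives * per_drive_gbs / 2      # every byte is written to both mirrors
read_gbs = n_drives * per_drive_gbs           # reads can be spread across both copies

print(f"capacity: {capacity_tb:.0f} TB, writes: ~{write_gbs:.0f} GB/s, reads: up to ~{read_gbs:.0f} GB/s")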
 

abufrejoval

Reputable
Jun 19, 2020
60 GB/s is possible only during sequential reads/writes and in RAID 0. Since nobody in their right mind will use RAID 0 (RAID 5 is the next best), you will get around 45 GB/s max. And that is provided you are reading into RAM and writing from RAM.
Well, I'm probably not in my right mind...

I've always run SSDs in RAID 0, mostly because I was afraid that write amplification would kill them faster than the typical HDD-style single-drive failure would.

It's an old discussion and SAN vendors like Pure have simply moved on to far more sophisticated flash redundancy algorithms than RAID.

But for more consumer-type setups, the issue hasn't been properly resolved, to my knowledge.

SSD controller failures are chip failures. They inevitably do happen, but just like CPU failures, they should be extremely rare and they are not what typically killed HDDs.

Yet the type of media or head failures that created the demand for RAID simply can't happen on SSDs, where the media is flash chips that are already run as a 'flash-aware RAID' inside the SSD.

Adding another, non-flash-aware RAID layer on top risks extra write amplification, which can very easily exhaust flash chips when that layer knows nothing about them. Even hardware RAID controllers, which are already rather helpful at reducing write amplification on HDDs via battery-backed caches that prefer to write back only full stripes, didn't help much, either because those caches were too small to accommodate the far larger erase blocks or because they simply bypassed their BBU cache for SSDs.
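To make the full-stripe point concrete, here's a toy count of the member-disk I/Os for a RAID 5 write done two ways (the 3+1 geometry is just an example):

Code:
# Toy I/O count showing why RAID 5 controllers prefer to cache writes until
# they can flush a full stripe (3 data strips + 1 parity is just an example).
data_disks = 3

# Partial-stripe update (read-modify-write): read the old data strip and old
# parity, then write the new data strip and the new parity.
rmw_ios = 2 + 2

# Full-stripe write: parity is computed from the cached strips, so only writes.
full_stripe_ios = data_disks + 1

print(f"single-strip update: {rmw_ios} disk I/Os for 1 strip of new data")
print(f"full-stripe write:   {full_stripe_ios} disk I/Os for {data_disks} strips of new data")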

Now, in-place single-block updates, which are the worst case for write amplification, may not really be all that frequent unless you happen to run a file-based DB, so perhaps I'm too worried. But on SSDs, writes reliably wear down the media, and if your SSD is full and single-block updates force whole erase blocks to be rewritten, they hurt badly. And when media failure is already covered by the SSD controller, I just don't see why I should mistreat the drives.
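A quick way to see how ugly that full-drive, single-block-update case can get (the block and erase-block sizes are typical examples, not specs for any particular drive):

Code:
# Worst-case write amplification when a small in-place update forces a whole
# erase block to be relocated (sizes are typical examples, not drive specs).
update_kib = 4                            # one 4 KiB logical block rewritten by the host
erase_block_kib = 4 * 1024                # e.g. a 4 MiB flash erase block

write_amplification = erase_block_kib / update_kib
print(f"worst-case write amplification: ~{write_amplification:.0f}x")    # ~1024x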

It helps that most of my SSDs are really just caches, which can be rebuilt from cloud or HDD data.

But yes, given that very few desktop computers will actually manage 60 GB/s sustained transfers to RAM, these numbers are hard to obtain even with synthetic tests.
 