News Entry-level GPU RAID card enables mind-bending storage speeds — 80 GB/s of throughput from eight SSDs with SupremeRAID SR-1001

Status
Not open for further replies.

purpleduggy

Proper
Apr 19, 2023
162
42
110
I wonder what the price is to get to 80 GB/s. From what I can see it would need around 7+ Gen 5 NVMe drives (12 GB/s theoretical each) at minimum, unless Gen 6 drives bring a massive increase.
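Napkin math (quick Python sketch; the ~12 GB/s per-drive figure is an optimistic Gen 5 ceiling, real drives land closer to 10 GB/s):

Code:
import math

TARGET_GBPS = 80.0       # advertised array throughput
PER_DRIVE_GBPS = 12.0    # optimistic Gen 5 sequential read per drive

drives_needed = math.ceil(TARGET_GBPS / PER_DRIVE_GBPS)
print(drives_needed)     # 7 -- and 8 drives at a more realistic ~10 GB/s each

Which lines up with the eight SSDs in the headline.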
 
"...targeted at home servers and gaming PCs"

Someone is going to buy this silly thing, and then come here and complain that their FPS in Counter-Strike did not go up.
Yes.

Storage speeds, even in budget-range drives, are well past what a gaming rig requires. Even DirectStorage, touted as the killer app for high-transfer-rate storage, shows large gains even with "slow" drives. The law of diminishing returns kicked in around Gen 3 NVMe, in my opinion. Maybe in 5 or 6 years stuff like this (or at least a drive with the advertised transfer rates) will be of use, but not today.

*Stutters, it's gonna be stutters in CS2. They'll also have a 5 year old AIO with no water left in it and the pump in the wrong orientation. But it was totally the drive...

Sorry, I'm getting increasingly cynical the farther past 40 I get...
 
  • Like
Reactions: bit_user

ezst036

Honorable
Oct 5, 2018
599
515
12,420
This uses the host GPU, right? Like a 4090/Arc or an onboard APU? Or does this actually have an Nvidia or AMD chip on the card itself?

If it uses the host GPU, does that mean you must have at least two x16 slots to accommodate the card? That could be problematic, as motherboards these days seem to have a decreasing number of PCIe slots on them.
 
Last edited:
  • Like
Reactions: bit_user

HideOut

Distinguished
Dec 24, 2005
569
88
19,070
This uses the host GPU, right? Like a 4090/Arc or an onboard APU? Or does this actually have an Nvidia or AMD chip on the card itself?

If it uses the host GPU, does that mean you must have at least two x16 slots to accommodate the card? That could be problematic, as motherboards these days seem to have a decreasing number of PCIe slots on them.
Sounds to me like it includes the GPU. My guess is it doesn't need a 4090 or anything near that beastly. If the GPU is dedicated just to RAID calculations, a much, much weaker GPU would hardly break a sweat.
 

Findecanor

Distinguished
Apr 7, 2015
271
186
18,860
Apparently these RAID cards don't actually connect to the drives. They only do the data processing, but leave buffers in main memory to be DMA'd to/from the drives like usual.

This means that it should theoretically be possible to do the same processing on a regular GPU.

And it looks to me like it would also be theoretically possible to make a RAID card that uses the same kind of GPU tech but connects directly to the drives, for even higher performance ... only nobody has done that yet.

Or am I missing something?
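For what it's worth, the parity math these cards offload isn't exotic; a RAID 5-style stripe is just XOR over the data chunks. A toy NumPy sketch (purely illustrative, nothing Graid-specific):

Code:
import numpy as np

CHUNK = 1 << 20                               # 1 MiB chunk per drive in a stripe
data = [np.random.randint(0, 256, CHUNK, dtype=np.uint8) for _ in range(7)]

# RAID 5-style parity: XOR of all data chunks in the stripe
parity = np.zeros(CHUNK, dtype=np.uint8)
for chunk in data:
    np.bitwise_xor(parity, chunk, out=parity)

# Lose any one chunk and it can be rebuilt by XOR-ing the parity with the survivors
rebuilt = parity.copy()
for chunk in data[1:]:
    np.bitwise_xor(rebuilt, chunk, out=rebuilt)
assert np.array_equal(rebuilt, data[0])

The same loop would run more or less unchanged on a GPU via CuPy; the hard part isn't the math, it's feeding the data in fast enough.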
 
The high end PCIe 4.0 version with 32 drive support they put out used an A2000.

I like the idea and implementation for the most part, but this card is PCIe 3.0 x16, so to get any real use out of it you need Xeon W/Scalable/Threadripper/EPYC or sacrifice all of your CPU PCIe lanes.
 
  • Like
Reactions: bit_user
The high end PCIe 4.0 version with 32 drive support they put out used an A2000.

I like the idea and implementation for the most part, but this card is PCIe 3.0 x16, so to get any real use out of it you need Xeon W/Scalable/Threadripper/EPYC or sacrifice all of your CPU PCIe lanes.
Definitely a huge consideration. It's spelled out right in my mainboard manual that these kinds of devices drop my PCIe slot down to x8. I'm also pretty sure that the true target market for these devices is HEDT, NOT gaming. They're probably hoping to gain some whale sales in the gaming market is all, so they add it in there because, why not?
 
  • Like
Reactions: bit_user
Gen 6 (when it arrives) headroom in a PCIe 6.0 x4 slot should be double that of PCIe 5.0 x4, so about 28 GB/sec, assuming drives' throughput is able to progress to those ludicrous speeds within 3 years of PCIe 6.0 adoption (even that is quite a large assumption).

I predict Win 12 will boot 1/4 sec quicker than with a top Gen 5 drive....
 
Gen 6 (when it arrives) headroom in a PCIe 6.0 x4 slot should be double that of PCIe 5.0 x4, so about 28 GB/sec, assuming drives' throughput is able to progress to those ludicrous speeds within 3 years of PCIe 6.0 adoption (even that is quite a large assumption).

I predict Win 12 will boot 1/4 sec quicker than with a top Gen 5 drive....
Right? Ffs my BIOS splash screen is up longer than it takes Windows to cold boot.
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
Gen 6 (when it arrives) headroom in a PCIe 6.0 x4 slot should be double that of PCIe 5.0 x4, so about 28 GB/sec, assuming drives' throughput is able to progress to those ludicrous speeds within 3 years of PCIe 6.0 adoption (even that is quite a large assumption).

I predict Win 12 will boot 1/4 sec quicker than with a top Gen 5 drive....
Not long ago, a guy here told me a Windows system with a Gen 4 SSD will boot in half the time it would with a Gen 3 SSD.
Because..."simple math"

He was very confident of this.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
This article is so severely lacking in key details that it's a rather glaring problem.

One detail we're missing is exactly how a GPU connected at a mere 32 GB/s (i.e. PCIe 4.0 x16) is meant to do anything at 80 GB/s. Even PCIe 5.0 wouldn't get you there, with just a single card.
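Back-of-the-envelope, per direction, for a single x16 link (128b/130b encoding, ignoring packet overhead):

Code:
def x16_gbps(gt_per_lane):
    # usable GB/s per direction for an x16 link with 128b/130b encoding (Gen 3-5)
    return gt_per_lane * 16 * (128 / 130) / 8

print(x16_gbps(16.0))   # PCIe 4.0 x16 -> ~31.5 GB/s
print(x16_gbps(32.0))   # PCIe 5.0 x16 -> ~63.0 GB/s

Either way, 80 GB/s can't all be flowing through the card's own link.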

BTW, regarding their SW RAID numbers:
Software RAID:
  4K Random Read/Write (IOPS): 2M / 200K
  1M Sequential Read/Write (GB/s): 9 / 2
  Throughput (GB/s): 9
  Maximum SSDs Supported: 32
I'll bet that's single-threaded. I remember seeing benchmarks of Linux SW RAID performance from over a decade ago that were already in the GB/s range. That's even without AVX2.
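A quick sanity check of that bet: just timing the raw XOR on one core with plain NumPy already lands in the GB/s range (illustrative micro-benchmark only, not a real RAID write path):

Code:
import time
import numpy as np

N_CHUNKS, CHUNK = 8, 64 << 20                 # eight 64 MiB chunks, one per "drive"
data = [np.full(CHUNK, i, dtype=np.uint8) for i in range(N_CHUNKS)]
parity = np.zeros(CHUNK, dtype=np.uint8)

start = time.perf_counter()
for chunk in data:
    np.bitwise_xor(parity, chunk, out=parity)
elapsed = time.perf_counter() - start

print(f"{N_CHUNKS * CHUNK / elapsed / 1e9:.1f} GB/s of parity XOR on one core")

And that's naive NumPy; the kernel's md driver benchmarks and picks an optimized XOR routine at boot.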
 
Last edited:

bit_user

Polypheme
Ambassador
And it looks to me that it would also be theoretically possible to make a RAID card that uses the same kind of GPU tech but connects directly to the drives and get even higher performance ... only that nobody has done that yet.

Or am I missing something?
What would you expect to gain by it? PCIe is a full-duplex protocol, meaning a device can read & write simultaneously, at full speed in both directions. So, as long as the NVMe drives can write directly into GPU memory, you probably wouldn't get any greater throughput by having the SSDs physically connected to the card.
 

DSzymborski

Curmudgeon Pursuivant
Moderator
  • Like
Reactions: carl_carlson

DaveLTX

Prominent
Aug 14, 2022
99
64
610
Graid SupremeRAID actually does not write anything to the GPU; it's simply there to calculate checksums. But it does not verify anything either. Data rot would probably still be an issue, like with the previous SupremeRAID.
 

DonQuixoteIII

Commendable
Aug 16, 2021
32
18
1,535
Quote-unquote 'gamer' motherboards and CPUs only give the user one x16 slot for the GPU and one x4 NVMe slot, and that is possibly shared... You pay twice for the 'gamer' label. There is no real reason the 'gamer' can't have more PCIe from the CPU, but if 'gamers' are willing to pay the premium without the frills, then why give them more? After all, they have shareholders, right?

On a less serious note, this article needs a close look. No specs, no prices, unfounded benchies spouted... Pure marketing fluff.
 
One detail we're missing is exactly how a GPU connected at a mere 32 GB/s (i.e. PCIe 4.0 x16) is meant to do anything at 80 GB/s. Even PCIe 5.0 wouldn't get you there, with just a single card.
This device is PCIe 3.0, not 4.0! The accelerator handles everything to do with the RAID array but doesn't connect to the drives at all. So you'd need a total of 48 lanes available to max it out. If this can be run at x8 you could swing 4 drives on a desktop-type platform, but you can't really do anything else. The only reviews I've seen of the other two cards were on server platforms, so I'm not sure if cutting lanes is viable, but I'd guess not.
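The lane math, assuming each SSD hangs off its own x4 CPU link rather than a switch:

Code:
DRIVES, LANES_PER_DRIVE, CARD_LANES = 8, 4, 16

print(DRIVES * LANES_PER_DRIVE + CARD_LANES)   # 48 lanes -- more than any desktop socket provides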

This is a pretty interesting piece: https://www.storagereview.com/review/graid-supremeraid-gen5-support-lets-ssds-fly
 
Last edited:
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
Graid SupremeRAID actually does not write anything to the GPU; it's simply there to calculate checksums.
Well, it needs to see all of the data, in order to compute parity.

But it does not verify anything either. Data rot would probably still be an issue, like with the previous SupremeRAID.
That could explain why reading is so much faster than writing.

Data rot would probably still be an issue, like with the previous SupremeRAID
The normal solution is either to perform routine "scrubbing" (sometimes called a "consistency check") or to have the RAID controller continually perform a "patrol scrub" in the background.
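Conceptually the scrub is just this, run over every stripe on a schedule (sketch only; real controllers do it at the block layer, not in Python):

Code:
import numpy as np

def stripe_is_consistent(data_chunks, stored_parity):
    """Recompute parity for one stripe and check it against what's on disk."""
    computed = np.zeros_like(stored_parity)
    for chunk in data_chunks:
        np.bitwise_xor(computed, chunk, out=computed)
    return bool(np.array_equal(computed, stored_parity))

# A scrub pass walks the array, flags any stripe where this returns False,
# and (depending on the implementation) repairs it from redundancy.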

Also, backups.
 

bit_user

Polypheme
Ambassador
Quote-unquote 'gamer' motherboards and CPUs only give the user one x16 slot for the GPU and one x4 NVMe slot, and that is possibly shared... You pay twice for the 'gamer' label. There is no real reason the 'gamer' can't have more PCIe from the CPU, but
It's not only gamers. The LGA1700 socket, which I assume you're referring to, has x16 PCIe 5.0 and x4 PCIe 4.0 direct CPU lanes. Everything else on the board is coming through chipset lanes, which are then connected to the CPU via a PCIe 4.0 x8 link.

We already know the next gen socket (i.e. for Arrow Lake) will upgrade that CPU-direct x4 link to PCIe 5.0 and add another PCIe 4.0 x4 CPU-direct link for the Thunderbolt controller.

if 'gamers' are willing to pay the premium without the frills, then why give them more? After all, they have shareholders, right?
Woah, pump the brakes there, bud. More connectivity isn't free. It adds cost to the CPU and motherboard. It's understandable for them not to add a lot more than most people need, because that additional cost would end up being eaten by consumers, not Intel.

IMO, a legit complaint would be that their Xeon W-2400 CPUs and platform is too expensive, for those who really need/want additional connectivity.

BTW, these boards really never needed that x16 slot to be PCIe 5.0. I still don't know what they were thinking. If Meteor Lake-S had launched this year (as planned), then the new socket would've arrived almost in time for the first PCIe 5.0 SSDs. So, I/O-wise, these CPUs are already overkill.

On a less serious note, this article needs a close look. No specs, no prices, unfounded benchies spouted... Pure marketing fluff.
They're telling us about a press release the company made. This is news, not a review. It can & should have more background information about these devices, but I think you're setting the bar too high. I value hearing about such announcements, because that makes me aware of them before they reach reviewers' hands - and Tom's quite possibly won't ever review it. So, if they only published reviews, then we'd be needlessly surprised by products already on the market, and there's a lot out there we'd never know about.
 
  • Like
Reactions: carl_carlson

DaveLTX

Prominent
Aug 14, 2022
99
64
610
It's not only gamers. The LGA1700 socket, which I assume you're referring to, has x16 PCIe 5.0 and x4 PCIe 4.0 direct CPU lanes. Everything else on the board is coming through chipset lanes, which are then connected to the CPU via a PCIe 4.0 x8 link.

We already know the next gen socket (i.e. for Arrow Lake) will upgrade that CPU-direct x4 link to PCIe 5.0 and add another PCIe 4.0 x4 CPU-direct link for the Thunderbolt controller.


Woah, pump the brakes there, bud. More connectivity isn't free. It adds cost to the CPU and motherboard. It's understandable for them not to add a lot more than most people need, because that additional cost would end up being eaten by consumers, not Intel.

IMO, a legit complaint would be that their Xeon W-2400 CPUs and platform is too expensive, for those who really need/want additional connectivity.

BTW, these boards really never needed that x16 slot to be PCIe 5.0. I still don't know what they were thinking. If Meteor Lake-S had launched this year (as planned), then the new socket would've arrived almost in time for the first PCIe 5.0 SSDs. So, I/O-wise, these CPUs are already overkill.


They're telling us about a press release the company made. This is news, not a review. It can & should have more background information about these devices, but I think you're setting the bar too high. I value hearing about such announcements, because that makes me aware of them before they reach reviewers' hands - and Tom's quite possibly won't ever review it. So, if they only published reviews, then we'd be needlessly surprised by products already on the market, and there's a lot out there we'd never know about.
In fairness, PCIe 4.0 boards also arrived too early to truly take advantage of 4.0, and we're still not really seeing the benefit (high-end GPUs ironically need 4.0 less than the low-end ones, but I blame memory bandwidth/capacity skimping and limited lanes on the low-end cards).

Anyway, on the topic of "backups": ZFS accounts for bitrot; SupremeRAID doesn't.
Yet SupremeRAID directly compares itself to ZFS.
ZFS is heavily relied on for bitrot resilience, something SupremeRAID will never have.
 