News: Silverstone supports four NVMe M.2 SSDs in a single-slot PCIe card

Aye, given the size of modern GPUs it would be better for motherboard manufacturers to wire the very topmost PCIe slot, the one right above the GPU slot, with 16 lanes for use with an AIC for M.2 drives. Not only would the drives run cooler, they would also be far easier to change out or add to.
 
Who needs this product?

Motherboard manufacturers currently put 19 NVMe slots on their boards, taking up space that would formerly have been used by PCIe slots.

EDIT: Thought about it for a few minutes. The higher-end boards are often less likely to have very many slots and more likely to have more NVMe, and they're usually the eye candy that receives the reviews. Mid-range boards sometimes have more slots. But even if you have three x16 slots and the rest x1, it might still make more sense to buy two or three x1 NVMe PCIe cards than this one x16 card.

Additionally, there's also the maths involved. A nice but not top-of-the-line board plus the cost of this card leaves you at probably the same price as a highest-end board that has 6 NVMe ports integrated onto it, so why not just buy the better board if you need that many drives? I'm still not seeing who needs this product.

The root problem is that board manufacturers often do not put enough x4, x8, or x16 slots on their boards. They force us to pick boards with 2-3 such slots at most, and then the remainder are all x1 (or absent altogether).
 
These are worthless for most consumers, as they can only be used to maximum effect on workstation platforms or AMD systems that don't need a video card.

Unless AMD and Intel can be convinced to give more lanes or better lane splits, that's where these will stay. PCIe switches are much too expensive once you get past PCIe 3.0 (and even those aren't cheap), so they aren't a viable solution.
 
Motherboard manufacturers, pay attention:

FEWER onboard M.2 slots and more PCIe x16 slots for cards like this, to install your M.2 drives onto.
They do that already.
Who needs this product?

Motherboard manufacturers currently put 19 NVMe slots on their boards, taking up space that would formerly have been used by PCIe slots.
These are for HEDT, like the Threadripper Pro. Asus sells the Pro WS WRX80E-SAGE, which has seven x16 PCIe slots.
TR Pro actually supports bifurcation, unlike AM5 consumer mobos.
In fact, I haven't seen a WRX80 mobo that doesn't have seven x16 slots.
 
When one 8 TB SSD sometimes costs $1,000 while four 2 TB SSDs (also 8 TB in total) cost $500, using such an adapter makes perfect sense. Or $1,000 can even buy you 16 TB total nowadays. Plus, such an adapter has excellent cooling, which will keep your stored bits intact for a substantially longer time, since retention time strongly depends on temperature.
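
Just as a rough sketch of that math, using the example prices quoted above (actual prices and capacities vary constantly, so treat the numbers as placeholders):

```python
# Back-of-the-envelope cost per TB, using the example prices quoted above.
options = {
    "1 x 8 TB SSD": (1000, 8),     # (price in USD, total capacity in TB)
    "4 x 2 TB SSDs": (500, 8),
    "4 x 4 TB SSDs": (1000, 16),
}

for name, (price, capacity_tb) in options.items():
    print(f"{name}: ${price / capacity_tb:.2f} per TB")
```

At those prices the four-drive options come out to roughly half the cost per TB of the single big drive.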
 
They do that already.

These are for HEDT, like the Threadripper Pro. Asus sells the Pro WS WRX80E-SAGE, which has seven x16 PCIe slots.
TR Pro actually supports bifurcation, unlike AM5 consumer mobos.
In fact, I haven't seen a WRX80 mobo that doesn't have seven x16 slots.
Without hardware RAID, there's no point in a 4-drive expansion device. Software RAID will hamper performance, and who in their right mind would add 4 drives in JBOD or as four different volumes?
 
Software RAID will hamper performance
Depending on what your workload happens to be, this isn't necessarily the case. The advantage of hardware RAID is in IOPS and nothing else, since it provides zero data integrity. If the workload happens to be mostly sequential, then the performance differences won't be noticeable.

There's also the cost difference, since a hardware RAID card that allows for the same bandwidth as this will cost around $700, whereas cards like these typically cost $80-100 new.

Then there's the whole what-if-you-need-more-than-4-drives situation, where hardware RAID will simply be slower in sequential workloads due to the inherent limitations of a single PCIe slot, unless of course you're planning on making multiple 4-drive volumes.
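
A rough sanity check on that single-slot limit, assuming PCIe 4.0 and roughly 2 GB/s of usable bandwidth per lane (approximate figures, not vendor specs):

```python
# Rough PCIe 4.0 bandwidth check (assumes ~2 GB/s usable per lane).
USABLE_GBPS_PER_LANE = 2.0

slot_x16_gbps = 16 * USABLE_GBPS_PER_LANE   # one x16 slot: ~32 GB/s
drive_x4_gbps = 4 * USABLE_GBPS_PER_LANE    # one Gen4 x4 NVMe drive: ~8 GB/s

for drives in (4, 8):
    wanted = drives * drive_x4_gbps
    print(f"{drives} drives want ~{wanted:.0f} GB/s; "
          f"the slot tops out around {slot_x16_gbps:.0f} GB/s")
```

Four drives already roughly saturate a single x16 slot, so anything past that behind one card is bandwidth-limited no matter how the RAID is done.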
who in their right mind would add 4 drives in JBOD
While I wouldn't do it personally, as I don't need that much storage in my primary system, there are people who do need the storage. I wouldn't begrudge anyone who didn't need the additional performance from RAID and had no interest in dealing with the downsides.
 
While I wouldn't do it personally, as I don't need that much storage in my primary system, there are people who do need the storage. I wouldn't begrudge anyone who didn't need the additional performance from RAID and had no interest in dealing with the downsides.
4 drives is 4x the failure points, 4x the chance of having a problem. If you are spending that much money and not putting basic mechanical resilience in your setup through, say, RAID 5... well, it shows a lack of understanding imo.
 
Not sure what differentiates these in a market that's already flooded with cards that have zero smarts: just unintelligently adapting PCIe lanes to a different connector.
 
They do that already.

These are for HEDT, like the Threadripper Pro. Asus sells the Pro WS WRX80E-SAGE, which has seven x16 PCIe slots.
TR Pro actually supports bifurcation, unlike AM5 consumer mobos.
In fact, I haven't seen a WRX80 mobo that doesn't have seven x16 slots.
I am aware that there are some who offer a daughter card for NVMe. But a lot, if not most, high-end boards now come with 6-8 M.2 slots and 2 compromised x16 slots. I am calling for a bit of a reversal here: 2 M.2 slots (maybe 3) and 6-8 PCIe x16 slots on all motherboards in the future.
 
4 drives is 4x the failure points, 4x the chance of having a problem. If you are spending that much money and not putting basic mechanical resilience in your setup through, say, RAID 5... well, it shows a lack of understanding imo.
We're talking NVMe drives, not HDDs; the failure rates are orders of magnitude lower. No moving parts goes a long way towards eliminating the random-failure-down-the-road aspect. Typically SSDs are going to be bad right away, or killed by a bad firmware update, and RAID isn't going to do a thing there.
 
Yet another 4x M.2 card that requires bifurcation, when most consumer motherboards support 2x bifurcation at most. We need more cards that come with onboard hardware that allows them to support multiple M.2 drives without using bifurcation. But so far, it seems like Sabrent is the only company making those.
 
This isn't bad. It's better, in a sense, to have a trusted brand than some odd card from god knows where.

A lot of these random cards like this are from some knock-off brand from China, with god knows what chips on the PCB.
 
We're talking NVMe drives, not HDDs; the failure rates are orders of magnitude lower. No moving parts goes a long way towards eliminating the random-failure-down-the-road aspect. Typically SSDs are going to be bad right away, or killed by a bad firmware update, and RAID isn't going to do a thing there.
I'm not saying what the chance of failure is, I'm stating the simple fact that that percentage is quadrupled by having 4 devices. If you are spending that much budget, why not have mechanical resilience (and a potential performance increase, but you'd need a great controller to realise that with NVMe) and deploy RAID 5 as a minimum? It's not difficult, it's not that onerous a cost, it adds peace of mind, and if one device bricks itself, it's a quick fix with zero data loss.
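
For what it's worth, a quick sketch of that math, assuming independent failures and a made-up per-drive failure probability p:

```python
# Chance that at least one of n independent drives fails, vs. the simple "n * p" shorthand.
def any_failure(p: float, n: int = 4) -> float:
    """Probability that at least one of n drives fails, given per-drive probability p."""
    return 1 - (1 - p) ** n

for p in (0.005, 0.01, 0.02):   # assumed per-drive failure probabilities, purely illustrative
    print(f"p={p:.3f}: exact={any_failure(p):.4f}, 4*p shorthand={4 * p:.4f}")
```

For small p the exact figure and the 4*p shorthand are essentially the same, which is the quadrupling point above.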
 
Who needs this product?

Motherboard manufacturers currently put 19 NVMe slots on their boards, taking up space that would formerly have been used by PCIe slots.

EDIT: Thought about it for a few minutes. The higher-end boards are often less likely to have very many slots and more likely to have more NVMe, and they're usually the eye candy that receives the reviews. Mid-range boards sometimes have more slots. But even if you have three x16 slots and the rest x1, it might still make more sense to buy two or three x1 NVMe PCIe cards than this one x16 card.

Additionally, there's also the maths involved. A nice but not top-of-the-line board plus the cost of this card leaves you at probably the same price as a highest-end board that has 6 NVMe ports integrated onto it, so why not just buy the better board if you need that many drives? I'm still not seeing who needs this product.

The root problem is that board manufacturers often do not put enough x4, x8, or x16 slots on their boards. They force us to pick boards with 2-3 such slots at most, and then the remainder are all x1 (or absent altogether).
It's the CPU that determines how many lanes you get for expansion and NVMe slots, not the motherboard.
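
For example, a rough sketch of the lane budget on a mainstream AM5 CPU (commonly cited as 28 PCIe lanes with 4 of them feeding the chipset link; treat the exact split as an assumption, since it varies by board):

```python
# Rough CPU lane-budget sketch, using assumed AM5-style figures.
CPU_LANES = 28
allocations = {
    "chipset link": 4,
    "primary x16 slot (GPU)": 16,
    "CPU-attached M.2 slot #1": 4,
    "CPU-attached M.2 slot #2": 4,
}

used = sum(allocations.values())
print(f"used {used} of {CPU_LANES} CPU lanes, {CPU_LANES - used} to spare")
# Anything beyond this has to hang off the chipset and share its uplink.
```

So no matter how many physical slots a board adds, the CPU's lane budget is spent almost immediately.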