News PCIe 4.0 Card Hosts 21 M.2 SSDs: Up To 168TB, 31 GB/s

It makes sense that they chose PCIe 4.0. I couldn't imagine trying to cool 21 PCIe 5.0 NVMe drives.

PCIe 5.0 SSDs do not need the big bulky heatsinks being offered. That is a marketing gimmick. The flat heatsink used on PCIe 4.0 SSDs and supplied by most mobo makers is all that is typically required on a PCIe 5.0 SSD unless some company is doing a very poor job of implementing PCIe 5.0 on their SSD.
 
I wish them good luck selling this thing.

8TB NVMe drives are around $1,000 each, so 21 of them means $21,000. Plus the unknown price of the card.

Given the large size, you'll probably want to run them as RAID 10. Minus the FS overhead, you end up with roughly 75TB usable. Which is nice in itself.
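Rough math behind that figure (a sketch; reading the ~75TB as the TiB view of the mirrored capacity, minus filesystem overhead, is my assumption):

```python
# Capacity math for 21 x 8 TB drives in RAID 10 (mirrored pairs),
# glossing over the fact that 21 is an odd drive count.
drives, drive_tb = 21, 8
raw_tb = drives * drive_tb               # 168 TB raw
raid10_tb = raw_tb / 2                   # mirroring halves it -> 84 TB
raid10_tib = raid10_tb * 1e12 / 2**40    # ~76 TiB as the OS reports it

print(f"{raw_tb} TB raw -> {raid10_tb:.0f} TB in RAID 10 (~{raid10_tib:.0f} TiB before FS overhead)")
```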

IMHO, you'd be better off with a data center server where you can replace an NVMe drive live, probably with an LED telling you which drive is faulty. No disassembly, no shutdown. But probably more $$$ though.

But pretty good for people who want a trophy PC, to go with the trophy car(s) and the trophy girlfriend(s).
 
I wish them good luck selling this thing. [...] But pretty good for people who want a trophy PC, to go with the trophy car(s) and the trophy girlfriend(s).
In what way do you think this device is useful for a consumer level PC?
 
Are there limits to PCIe bifurcation? I know there are cards that split an x16 slot into 4 x4; could you do 8 SSDs connected as x2 or 16 as x1?

It throws away a lot of bandwidth (you probably wouldn't fill it with the fastest SSDs anyway), you still need enough board area to hold all the SSDs, and trace-length matching or a maximum trace length combined with a layout holding lots of M.2 sticks may complicate things.

But if it is possible, it still gets you the capacity, without PLX chips or similar.
 
For something this big I imagine the noise from Back to the Future when Marty flips the switch to turn on the speaker at the beginning of the movie.

Turn on your computer and the lights dim for a second.

😛😛
 
PCIe 5.0 SSDs do not need the big bulky heatsinks being offered. That is a marketing gimmick. The flat heatsink used on PCIe 4.0 SSDs and supplied by most mobo makers is all that is typically required on a PCIe 5.0 SSD unless some company is doing a very poor job of implementing PCIe 5.0 on their SSD.
Really? Lots of reviews show that PCIe 4.0 and early PCIe 5.0 NVMe drives do overheat with no heatsink.
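If anyone wants to check their own drives rather than take the reviews' word for it, here's a minimal sketch (Linux only, standard hwmon sysfs layout assumed) that reads the NVMe composite temperatures while the drive is under load:

```python
# Minimal sketch: print NVMe temperatures from Linux hwmon sysfs.
from pathlib import Path

for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
    if (hwmon / "name").read_text().strip() != "nvme":
        continue  # skip non-NVMe sensors
    for temp_file in sorted(hwmon.glob("temp*_input")):
        millideg = int(temp_file.read_text())
        print(f"{hwmon.name} {temp_file.stem}: {millideg / 1000:.1f} °C")
```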
 
Are there limits to PCIe bifurcation? I know there are cards that split an x16 slot into 4 x4; could you do 8 SSDs connected as x2 or 16 as x1? [...]

I believe the bifurcation fractions are hard-wired on the PCB, but somebody could come up with a card that has 4 or 8 M.2 slots with only 2 or 1 lanes actually connected to each.

The main challenge would be the BIOS/OS, which have to probe, negotiate and configure the PCIe ports. Even with the current, normal variants of bifurcation, support isn't always available, even though any x16 root port should need to support 16 x1 devices to pass certification.
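You can at least see what the firmware and OS ended up negotiating after the fact; a small Linux-only sketch using the standard sysfs attributes (nothing card-specific assumed):

```python
# Sketch: list the negotiated link width/speed for every PCIe device.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        speed = (dev / "current_link_speed").read_text().strip()  # e.g. "16.0 GT/s PCIe"
        width = (dev / "current_link_width").read_text().strip()  # e.g. "4"
    except OSError:
        continue  # device without exposed link attributes
    print(f"{dev.name}: x{width} @ {speed}")
```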

Come to think of it, I believe there might even be crypto-miner hardware left over somewhere that did in fact connect 16 GPUs with only a single lane each: perhaps not bootable with M.2 SSDs, but a PnP OS might eventually get it right.

Of course, I'd still prefer switching variants, if only they didn't sell them at these crazy prices.

In the case of AMD's AM5, there are, in a way, "budget" variants from ASMedia, which AMD uses for the "PCH".

"budget" because they are cheaper than many other PCIe-only switches with similar capabilities, yet not really that cheap as current high-end finished AM5 mainboard with dual chips make painfully obvious.

I find it hard to stomach that the IOD in the cheapest Zen 3/4 CPUs is essentially a rather capable PCIe++ switch chip, with 6 CPU cores thrown in for free, at €100 or less, while a much less capable PCIe-only switch costs an arm and a leg. Even the first ASMedia switch on the mainboard offers much more for less, and it's only with the cascaded, fully fledged ones that they try to top PCIe switch vendor pricing.

These IOD/ASMedia chips are essentially multi-protocol PCIe switches that also support SATA and USB, and I've always thought I'd prefer to have them on an x4 PCIe 5.0 slot, so that I can do the allocation myself by choosing an appropriate board with the mix of USB/M.2/SATA/NIC that fits my use case (and expansion needs).

And with M.2-to-PCIe x4 slot cables you can even physically cascade them to support some really crazy custom setups, which someone who today is thinking about hooking up M.2 SSDs to an x1 slot might find useful. A Thunderbolt cable should actually allow even more flexibility (and some chaos).

The basic flexibility is all there, and it is exactly what has made Personal Computers so attractive for four decades, but sometimes vendors seem to concentrate more on market segmentation than on supporting it.

P.S. Actually, the IOD even supports Infinity Fabric and RAM, potentially even some variants of CXL... Of course, the signal speeds and mainboard trace lengths pose challenges that won't fit into desktop motherboard budgets.
 
Are there limits to PCIe bifurcation? I know there are cards that split an x16 slot into 4 x4; could you do 8 SSDs connected as x2 or 16 as x1? [...]

Bifurcation is splitting lanes from one "bundle" into several bundles, and it needs BIOS support.
Inserting a bridge chip does not.

An x8 can be split into 2x x4, as the sum is still 8. An x16 could be split into 2x x8 or 4x x4, etc. However, if you want more lanes on one side than the other, you use a bridge chip, and then it's just addressed as normal. The BIOS obviously needs to support bridge chips, but that's about it; it just works. Even a BIOS without bifurcation support will work just fine (bar any BIOS bugs, obviously).

In theory you could make a x4 card with a bridge chip that supplied 16 lanes and allowed you to connect 16 GPUs for mining purposes.
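To put the lane-budget rule in concrete terms (my own toy illustration, not anything vendor-specific):

```python
# Plain bifurcation can only split the slot's own lanes, so the downstream
# widths must fit within them; anything beyond that needs a bridge/switch chip.
def fits_without_bridge(slot_lanes, downstream_widths):
    return sum(downstream_widths) <= slot_lanes

print(fits_without_bridge(16, [4, 4, 4, 4]))  # True  - the classic x16 -> 4x x4 card
print(fits_without_bridge(16, [1] * 16))      # True  - 16 devices at x1 each
print(fits_without_bridge(16, [4] * 21))      # False - 21 x4 drives need a switch
print(fits_without_bridge(4, [4] * 4))        # False - unless a bridge chip supplies the lanes
```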

// Stefan
 
Are there limits to PCIe bifurcation? I know there are cards that split an x16 slot into 4 x4; could you do 8 SSDs connected as x2 or 16 as x1? [...]
You can bifurcate all the way down to single lanes.

This card needs more than bifurcation; it will need a bridge of some sort. Even if you bifurcated down to single lanes, that would leave 5 drives without even a single lane. So somehow they are sharing the 16 lanes among 21 drives.
 
Are there limits to PCIe bifurcation? I know there are cards that split an x16 slot into 4 x4; could you do 8 SSDs connected as x2 or 16 as x1?
This card doesn't use bifurcation; it has a PCIe switch (PLX or similar) under that heatsink. You can't have 21 endpoints with only 16 bifurcated lanes. I don't know about server chips, but AFAIK desktop CPUs cannot bifurcate CPU PCIe lanes into chunks smaller than x4, and they only have three PCIe host controllers for the x16 group, which limits that breakdown to x8/x4/x4.
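Rough math on what that sharing means in practice (the per-lane throughput is an approximation):

```python
# ~1.97 GB/s usable per PCIe 4.0 lane (16 GT/s with 128b/130b encoding).
lanes_up = 16
gb_per_lane = 1.97
drives = 21

uplink = lanes_up * gb_per_lane    # ~31.5 GB/s, which matches the headline figure
per_drive = uplink / drives        # ~1.5 GB/s each if all 21 drives hit the uplink at once

print(f"uplink ~{uplink:.1f} GB/s, ~{per_drive:.1f} GB/s per drive under full simultaneous load")
```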
 
(To the responses to my comment -- I did get that this product can't use bifurcation; I had just wondered whether bifurcation could even theoretically work for more than four SSDs. Sounds like yes, with the usual caveats about BIOS support!)
 
For something this big I imagine the noise from Back to the Future when Marty flips the switch to turn on the speaker at the beginning of the movie. [...] Turn on your computer and the lights dim for a second.

"Apex Storage implemented two regular 6-pin PCIe power connectors to provide auxiliary power. This configuration allows for up to 225W"

75W from each connector and 75W from the PCIe port itself
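Which works out to a reasonably comfortable per-drive budget (simple arithmetic from the quoted figures; the switch chip's own draw isn't broken out):

```python
# Power budget: two 6-pin connectors at 75 W each plus 75 W from the slot.
connector_watts = 2 * 75
slot_watts = 75
drives = 21

total_watts = connector_watts + slot_watts   # 225 W
per_drive = total_watts / drives             # ~10.7 W per M.2 slot

print(f"{total_watts} W total, ~{per_drive:.1f} W available per drive")
```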
 
Wouldn't it make more sense to let the data run through PCIe 5.0 to squeeze more bandwidth out of a RAID setup? Like putting some kind of converter chip for the outgoing data?
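The raw numbers would support the idea, at least on paper (a quick comparison; per-lane figures are approximate):

```python
# Approximate usable throughput of an x16 uplink by PCIe generation.
per_lane_gb = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.97, "PCIe 5.0": 3.94}

for gen, gb in per_lane_gb.items():
    print(f"{gen} x16: ~{gb * 16:.0f} GB/s")
# A PCIe 5.0 x16 uplink (~63 GB/s) would roughly double the card's ceiling,
# at the cost of a pricier switch and tougher signal-integrity and cooling requirements.
```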
 
