News RAID card delivers 128TB of NVMe storage at 28 GB/s speeds — HighPoint SSD7749M2 houses up to 16 M.2 2280 SSDs

I want to keep hitting myself over the head for not thinking of this form factor myself: it solves so many issues at once in a very elegant manner!

Turning things 90° somehow seems extra hard for someone who grew up with Lego, even though general modularity never felt like a problem...

Of course, the price of PCIe switches still puts this way beyond what I'd be willing to pay, when CPUs with a more capable and more powerful IOD cost less and deliver more.
 
What is supposed to be so "advanced" about that PCIe 4.0 switch?

There are quite a few PCIe 5.0 switch chips with plenty more lanes that could easily double that bandwidth, or more. The whole thing could be done with PCIe 5.0 lanes throughout, with a full 4 lanes per M.2 stick.
Look at Microchip's portfolio.
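
Some napkin math on what the uplink generation means here (my assumed per-lane figures, protocol overhead ignored; the 28 GB/s is HighPoint's own rating):

```python
# Rough PCIe throughput estimate: usable GB/s per lane after 128b/130b encoding.
PER_LANE_GBPS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}

UPLINK_LANES = 16      # x16 slot feeding the switch
DRIVES = 16            # M.2 slots on the card
LANES_PER_DRIVE = 4    # a full x4 per stick (the hypothetical all-Gen5 design)

for gen, per_lane in PER_LANE_GBPS.items():
    uplink = per_lane * UPLINK_LANES
    downstream = per_lane * DRIVES * LANES_PER_DRIVE
    print(f"{gen}: x16 uplink ~{uplink:.0f} GB/s, "
          f"16 drives at x4 ~{downstream:.0f} GB/s downstream")

# A Gen4 x16 uplink tops out around 31 GB/s raw, which is why the card is rated
# at 28 GB/s; a Gen5 switch behind a Gen5 host slot would roughly double that.
```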

Sure, those aren't cheap, but neither is this card.

This thing is literally just a friggin PCIe switch chip with a vent and a couple connectors.
 
A chaotic motherboard design would be to support bifurcation to the point you can connect 16 SSDs as PCIe 4.0 x1 devices; I wonder where things would break if you tried to do that.

As it is, with motherboards that support bifurcation, the four-drive cards, and PCIe 4.0 SSDs having become the cheaper not-quite-cutting-edge option, some pretty wild and economical-for-what-they-are setups are possible.
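
To put rough numbers on that 16-way x1 idea (same assumed per-lane figure as above, overhead ignored):

```python
# Hypothetical 16-drive bifurcation rig: every SSD gets a single Gen4 lane.
PCIE4_PER_LANE_GBPS = 1.969   # usable GB/s per PCIe 4.0 lane

per_drive_x1 = PCIE4_PER_LANE_GBPS          # what each SSD is capped at
per_drive_x4 = PCIE4_PER_LANE_GBPS * 4      # what the same SSD could do at x4
aggregate = per_drive_x1 * 16               # all 16 drives together

print(f"per drive at x1: ~{per_drive_x1:.1f} GB/s (vs ~{per_drive_x4:.1f} GB/s at x4)")
print(f"16 drives combined: ~{aggregate:.0f} GB/s, the same ceiling as one x16 slot")

# Each drive runs at roughly a quarter of its usual Gen4 speed, but the
# aggregate still fills the CPU's x16 worth of lanes with no switch chip needed.
```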
 
Beaten handily by Apex's cards, which cost a third more but carry 3x the warranty, more speed, more capacity, and lower latency. A PCIe 5.0 version will be available soon, at 50M IOPS.
 
A chaotic motherboard design would be to support bifurcation to the point you can connect 16 SSDs as PCIe 4.0 x1 devices; I wonder where things would break if you tried to do that.

As it is, with motherboards that support bifurcation, the four-drive cards, and PCIe 4.0 SSDs having become the cheaper not-quite-cutting-edge option, some pretty wild and economical-for-what-they-are setups are possible.
I believe rigs with nearly all lanes exposed as x1 slots were a somewhat popular niche during GPU crypto-mining; there is very little "chaos" about them, just an option that fully conforms to the PCIe standard.

Now, would that break things? There is a good chance of it, because vendors often only test the most common use cases, not everything the standard permits.

So I'd hazard that a system with 16 M.2 SSDs connected at x1 might actually fail to boot, even though it really should work.

But once you've got Linux running off SATA or USB, there is no reason why this shouldn't just work. And with a four-lane PCIe slot often left over for networking on standard PC boards, a 25Gbit Ethernet NIC would let a PCIe 3.0 PC otherwise destined for the scrap yard recycle a bunch of older NVMe drives; on a more modern base, 40Gbit might make sense, too.
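
If anyone actually tries this, a quick sanity check once Linux is up is to read the negotiated link speed and width of each NVMe controller from sysfs. A minimal sketch, assuming PCIe-attached drives and the standard /sys/class/nvme layout:

```python
# Print the negotiated PCIe link speed and width for every NVMe controller
# by following /sys/class/nvme/nvme*/device to its PCI device node.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()   # e.g. /sys/devices/pci0000:00/...
    try:
        speed = (pci_dev / "current_link_speed").read_text().strip()
        width = (pci_dev / "current_link_width").read_text().strip()
    except FileNotFoundError:
        continue  # not a PCIe-attached controller
    print(f"{ctrl.name}: {speed}, x{width}")

# On a 16-way x1 setup every drive should report a width of x1;
# a drive missing from the list never enumerated at all.
```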

For building leading-edge new stuff this isn't very attractive, I'd say. For getting some use out of stuff that has fallen behind in speed and capacity on its own, there may be potential, if time, effort, and electricity happen to be cheap around you.
 