News PCIe 4.0 Card Hosts 21 M.2 SSDs: Up To 168TB, 31 GB/s

Page 2 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.

kal326

Distinguished
Dec 31, 2007
1,229
108
20,120
Interesting design, but it completely ignores that 30+ TB U.3 drives have existed for three years, with x4 lane headers built into plenty of server and workstation motherboards. They rely on a simple x4 bifurcation board if the motherboard doesn't have headers.

8TB PCIe drives are $1k-plus even for QLC. So a single 30.72TB enterprise drive with higher endurance, even used, seems the better option. I would think prosumers or even small businesses would rather run 4 to 8 of those across one or two x16 PCIe slots than this proprietary card, if they need really large and relatively fast storage.
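The headline numbers check out with quick math: 21 M.2 slots times 8 TB per drive gives 168 TB, and a PCIe 4.0 x16 host uplink tops out around 31.5 GB/s after 128b/130b encoding overhead. A rough sketch (drive count and capacity from the article; the per-lane rate is the standard PCIe 4.0 figure):

```python
# PCIe 4.0: 16 GT/s per lane with 128b/130b line coding
GT_PER_LANE = 16e9                            # raw signalling rate, transfers/s
ENCODING = 128 / 130                          # 128b/130b efficiency
BYTES_PER_LANE = GT_PER_LANE * ENCODING / 8   # ~1.97 GB/s usable per lane

lanes = 16          # x16 host uplink on the card
drives = 21         # M.2 slots
tb_per_drive = 8    # 8 TB M.2 drives, the largest commonly available

capacity_tb = drives * tb_per_drive
uplink_gbs = lanes * BYTES_PER_LANE / 1e9

print(capacity_tb)            # 168
print(round(uplink_gbs, 1))   # 31.5
```

So the 31 GB/s in the headline is simply the x16 Gen4 uplink ceiling, regardless of how fast the 21 drives are in aggregate behind the switch.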
 

8086

Distinguished
Mar 19, 2009
46
26
18,560
My biggest gripe with most modern motherboards is the general lack of PCIe slots, and of PCIe slots with the correct number of lanes. Many boards offer multiple x16 slots, but the fine print reads something like "x16 mechanically, but only x2 electrically". In other cases the board disables the main graphics PCIe slot and lane-splits it with another slot, compromising GPU performance (now or in the future). A lot of modern motherboards have too many M.2 slots and too few PCIe slots. I would much rather give up my M.2 slots in favor of more x16 slots that are 100% electrically x16, so end users can run their own RAID cards like this one (Apex Storage) or other solutions.

Right now, I'd almost kill to have more x16 slots on my boards so I can upgrade my NIC to 10Gb and upgrade my internet connection with it. But a plethora of M.2 slots is getting in the way of that. Give us more x16 and less M.2, and let users choose how to use their own expansion; after all, isn't that part of the point of PCI (and ISA, and other slots) to begin with? Until mobo makers get their act together, we end users cannot fully utilize great hardware like M.2 RAID cards (which could open a new market of storage accelerators for Asus, Gigabyte, etc.), upgraded networking, sound, AI, physics, capture cards, or even USB-C for some people.


https://www.highpoint-tech.com/usb-catalog
 
Last edited: Mar 14, 2023
As a subtext of my OP: not in many ways.

For a server, having to disassemble the card whenever you replace an NVMe drive is a no-no.

Not sure if it's suitable for a server rack, but this is an interesting concept from Icy Dock: https://global.icydock.com/product_323.html

 
Reactions: 8086

thullet

Prominent
Jan 15, 2023
11
6
515
Wouldn't it make more sense to run the data through PCIe 5.0 to squeeze more bandwidth out of the RAID setup? Like putting some kind of converter chip on the output?

Yes, but the PCIe 5.0 equivalent of the switch chip wasn't available when they made the card, and unless it's popped up in the last few weeks, it still isn't. It should arrive shortly, though.
 

thullet

Prominent
Jan 15, 2023
11
6
515
My biggest gripe with most modern motherboards is the general lack of PCIe slots, and of PCIe slots with the correct number of lanes.

Dude, Ryzen supports 24 lanes with 4 reserved for the chipset, and Raptor Lake supports 20 lanes... how do you suggest they're gonna support multiple 16-lane slots?
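The lane budget in that reply can be tallied directly. The CPU lane counts below come from the post above; the GPU/M.2 split is an illustrative consumer-board allocation, not any specific board:

```python
# Mainstream CPU lane budget (figures from the post above)
ryzen_total, chipset_link = 24, 4
usable = ryzen_total - chipset_link      # 20 lanes left for slots and M.2

# A typical consumer allocation (illustrative assumption)
gpu_slot = 16
m2_lanes = usable - gpu_slot             # 4 lanes -> one x4 M.2 drive

# What two full electrical x16 slots would require
needed = 2 * 16
shortfall = needed - usable              # 12 lanes short of two x16 slots

print(usable, m2_lanes, shortfall)       # 20 4 12
```

Which is why boards with multiple "x16" slots either split the GPU slot (x8/x8) or wire the extras at x4/x2 off the chipset; true multi-x16 needs a HEDT/server platform (or a PCIe switch, like the card in the article uses).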