[SOLVED] How many 16TB hard drives can a Threadripper 3970x server with 256GB ram handle?

sds20020024

Reputable
Jan 23, 2019
Hi everyone, first timer here.
The title pretty much says it all.
I've been considering building a storage server myself, with these components:

Server case: Norco RPC-4224 ( https://www.newegg.com/norco-rpc-4224/p/N82E16811219038 )
CPU: AMD Threadripper 3970X (32 cores / 64 threads)
GPU: GTX 1050 (just used as a basic display adapter)
Motherboard: MSI TRX40 Pro WiFi (https://pcpartpicker.com/product/CTrYcf/msi-trx40-pro-wifi-atx-strx4-motherboard-trx40-pro-wifi)
RAM: G.Skill Ripjaws 256GB (8x32GB) DDR4-3600 CL18
Boot drive: 500GB Samsung 960 Evo NVMe M.2 drive
L2ARC cache drives: Samsung 860 Pro 2TB drives x2 (running in RAID 0)
Host bus adapter: Intel RES2SV240 24-port (6x SFF-8087 mini-SAS) expander card (https://www.newegg.com/intel-res2sv240-sata-sas/p/N82E16816117207?Description=sff 8087 mini sas&cm_re=sff_8087 mini sas--9SIA4RE7MP4353--Product&quicklink=true)
NIC: StarTech ST10GSPEXNB (https://www.newegg.com/startech-st10gspexnb/p/N82E16833114165?&quicklink=true)
Hard drives: Seagate IronWolf Pro 16TB (24 of them for this case, but I might add more in disk shelves later on)
Additional disk shelves: NetApp DS4246 (https://www.newegg.com/p/14P-0032-000V9?Description=disk shelf&cm_re=disk_shelf--9SIAH8WAZW5102--Product)

The server will probably be configured as RAID-Z2 under FreeNAS.
It will be used mainly for archival storage, with up to 6K video very occasionally being edited off of it.
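
For rough planning, here's the back-of-the-envelope I'm working from for usable space (a minimal sketch in Python; the vdev layout is still undecided, and 2x12, 3x8, and 4x6 RAID-Z2 vdevs are the options I'm weighing):

```python
# Rough usable-capacity estimate for RAID-Z2 layouts of 24 x 16TB drives.
# Assumptions: parity overhead only; ignores ZFS metadata, padding, and
# TB-vs-TiB, so real usable space will come in somewhat lower.

DRIVE_TB = 16
TOTAL_DRIVES = 24

for vdevs in (2, 3, 4):  # 2x12, 3x8, 4x6 RAID-Z2 vdevs
    width = TOTAL_DRIVES // vdevs
    usable = vdevs * (width - 2) * DRIVE_TB  # RAID-Z2 loses 2 drives per vdev
    print(f"{vdevs} vdevs of {width} drives: ~{usable} TB usable")

# 2 vdevs of 12: ~320 TB; 3 vdevs of 8: ~288 TB; 4 vdevs of 6: ~256 TB
```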

Now, what I want to know but couldn't find on the web (at least from the sources I usually get my tech info from) is how many of these disk shelves I can add to my Threadripper server, or rather, how many disks this server can handle before I have to get another system to hold more disks and make vdevs to add to my ZFS pool.

Also, at what point will my read/write speeds be bottlenecked by my 10 gigabit network connection?

Any suggestions are welcome regarding the build.
I would also appreciate it if you could point out any errors/wrong choices I made (if any).
Thanks
 
Solution
Well, 24 drives have never been a limit for a SAS controller with an expander, so you should be fine there. You do need to look at how much transfer speed you'll need at the PCIe bus, as well as at the controller, to make sure you can comfortably saturate the 10Gb connection; I think you'll have no problem doing so. Just a single one of my 16TB Exos drives will almost saturate a 1Gb connection, so a decent RAID of them should saturate it easily, especially if they're SAS drives (mine are SATA).
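
To put rough numbers on that, here's a minimal sketch (the ~250 MB/s sustained per-drive figure is an assumption; it varies by model and by where on the platter you're reading):

```python
# Back-of-the-envelope: how many spinning drives to fill a 10GbE link.
# Assumptions: ~250 MB/s sustained sequential per 16TB drive (outer tracks;
# inner tracks are slower) and zero protocol overhead -- real numbers vary.

LINK_MBYTES_PER_S = 10 * 1000 / 8   # 10 Gb/s ~= 1250 MB/s
DRIVE_MBYTES_PER_S = 250            # per-drive sustained sequential

print(f"Link: ~{LINK_MBYTES_PER_S:.0f} MB/s")
print(f"Drives to saturate it: ~{LINK_MBYTES_PER_S / DRIVE_MBYTES_PER_S:.0f}")
# -> about 5 drives, so a 24-drive pool has headroom to spare
```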

The memory is more likely where you may have an issue, as consumer boards aren't really designed to run that much RAM at server-level reliability. I've got 256GB in my HP Z420, but that's a workstation and the RAM is LRDIMMs, so it's ECC and registered. Same for my Dell R710 with 288GB: ECC registered. Non-ECC memory will have problems above 8GB in module size.

Aside from these minor details, it's a pretty solid setup. Remember to keep a full backup of the array: RAID helps with availability, but it won't help against a controller failure that scrambles the RAID, or a ransomware attack.
 
Solution
'Storage servers'/file servers are actually pretty light on CPU requirements, which is why so many 6-8 bay storage devices run off quad-core ARM processors, Pentiums, Celeron-Js, etc. (Hopefully you'll find other roles/features to pile on, such as VM hosting, so that the almost $2,000 processor gets loaded with something more challenging than what a 3-year-old 2GHz Atom could have handled.) :)

As few as 5-6 spinning drives can provide enough throughput to pretty much saturate a 10GbE connection. Your real questions should be how much storage you need, and how many drives you can afford in a logical RAID that is not likely to suffer data loss (RAID 50, 60, etc.), not how many drives a 32-core processor can 'handle'...
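
To put a number on the "not likely to suffer data loss" part, consider the rebuild window: the longer an array runs degraded, the bigger the chance a second drive drops out. A rough sketch (the resilver rate is an assumption; real ZFS resilvers depend on how full and fragmented the pool is):

```python
# Rough rebuild-window estimate after one failed 16TB drive.
# Assumption: ~150 MB/s effective resilver rate; actual ZFS resilvers
# depend heavily on pool fullness and fragmentation.

DRIVE_MB = 16 * 1_000_000          # 16 TB expressed in MB
RESILVER_MB_PER_S = 150

hours = DRIVE_MB / RESILVER_MB_PER_S / 3600
print(f"~{hours:.0f} hours degraded while rebuilding one drive")
# -> roughly 30 hours; double parity (RAID-Z2 / RAID 60) covers a second
#    failure inside that window.
```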

But to play along, I'll suggest you quickly go spend ~$12,000 on 24 drives :)