Question: TrueNAS hardware bottleneck review, 3x M.2 RAIDZ

Nov 14, 2024
[Moderator note: Moving thread from Networking to Storage. More applicable.]

Hello, I'm building a NAS from my previous gaming PC. The use case is not typical: my goal is to serve 6-7 devices over 2.5Gb/s CAT6 (sometimes a few over remote access as well). Every day 100-300GB will be written/overwritten, so it's not really a use case for ARC, and I need constant access at this speed on every machine. That's why I'm buying a dual 10Gb SFP+ card with link aggregation, a “TP-Link TL-SG3428X-M2” switch, and RAIDZ with 3x 4TB M.2 disks (no HDDs, 8TB of real space is enough).
PC specs:
CPU: Intel i7-6850K
Motherboard: Asus ROG Strix X99 Gaming
RAM: 32GB
NIC: Intel X520-DA2 (or similar)
Storage: 3x 4TB M.2 disks in RAIDZ
What do you think of this setup? Is this CPU and motherboard enough? It looks stronger than those 10Gb QNAPs etc. I can't find info about the bus layout of the motherboard, but I'm not very worried because there won't be a GPU. I want to plug one M.2 directly into the motherboard and use PCIe adapters for the other two M.2 disks. Has anybody used this kind of adapter, for example the “Axagon PCIE NVME+SATA M.2 ADAPTER”? Do you see any red flags, bottlenecks, etc.? I plan to plug the NIC into the first PCIe slot (instead of the GPU) and the two adapters into the two bottom ones. Can you confirm that this is optimal for lanes? This CPU has 40 lanes, but the motherboard manual is not clear to me when it comes to lane allocation.
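
As a quick sanity check of the bandwidth budget in this plan (a minimal sketch; the 85% efficiency figure is just a rough allowance for protocol overhead, not a measured number):

```python
# Quick sanity check of the bandwidth budget above. The 85% efficiency
# figure is a rough allowance for Ethernet/TCP/SMB overhead, not a
# measured number.

clients = 7                 # workstations pulling data at once
client_link_gbps = 2.5      # each on a 2.5Gb/s CAT6 run
lag_links = 2               # dual SFP+ ports in the aggregation group
lag_link_gbps = 10.0
efficiency = 0.85           # assumed usable fraction of line rate

peak_demand = clients * client_link_gbps
usable_uplink = lag_links * lag_link_gbps * efficiency

print(f"Peak client demand: {peak_demand:.1f} Gb/s")
print(f"Usable NAS uplink : {usable_uplink:.1f} Gb/s")
print(f"Headroom          : {usable_uplink - peak_demand:+.1f} Gb/s")
```

With all seven clients pulling at full rate, the dual 10Gb uplink is roughly at its limit (17.5 Gb/s demanded vs ~17 Gb/s usable), so the plan only works if the load spreads across both links.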
 

kanewolf

Titan
Moderator
Your network will be the limiting factor.
What network protocol(s) are you planning to use to access this hardware? Tuning your software will be more important than your hardware choices.
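
A sensible first step before tuning SMB is to measure raw TCP throughput, so the two can be compared. A minimal sketch using iperf3 (assuming iperf3 is installed on both ends and a server is running on the NAS via "iperf3 -s"; the hostname is a placeholder):

```python
# Measure raw TCP throughput to the NAS with iperf3, so SMB numbers can
# be compared against what the network can actually deliver.
import json
import subprocess

NAS_HOST = "truenas.local"  # placeholder, replace with the NAS address

result = subprocess.run(
    ["iperf3", "-c", NAS_HOST, "-P", "4", "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"TCP throughput: {bps / 1e9:.2f} Gb/s")
```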
 
Nov 14, 2024
Your network will be the limiting factor.
What network protocol(s) are you planning to use to access this hardware? Tuning your software will be more important than your hardware choices.
It will be accessed by Windows machines, so I think SMB (Samba) is the only choice I have? I hope the bottleneck ends up in my network, because that's the expensive part and I can't upgrade to more than 2.5Gb/s on each workstation (especially the switch). My goal is to provide a constant 2.5Gb/s to every PC (6-7 of them), basically a maximum of 8 PCs before the connection drops below 2.5Gb/s each.
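
One caveat with the aggregated uplink: LACP typically assigns each client conversation to a single physical link via a hash, so the two 10Gb links only add up to 20Gb/s when the flows spread evenly. A toy illustration (the IPs and the CRC32 hash are stand-ins, not the switch's real algorithm):

```python
import zlib

# Toy illustration: LACP pins each client's flow to one physical link
# based on a hash, so whether 8x 2.5Gb/s fits in 2x 10Gb/s depends on
# how evenly the flows happen to land. CRC32 here is a stand-in for
# the switch's real hashing algorithm.

clients = [f"192.168.1.{i}" for i in range(10, 18)]   # 8 hypothetical PCs
links = {0: 0.0, 1: 0.0}

for ip in clients:
    link = zlib.crc32(ip.encode()) % 2    # pick one of the two 10Gb links
    links[link] += 2.5                    # each client pushes 2.5 Gb/s

for link, load in sorted(links.items()):
    status = "OK" if load <= 10.0 else "oversubscribed"
    print(f"link {link}: {load:.1f} Gb/s ({status})")
```

In an unlucky case, five or more clients can hash onto the same link, and their per-client rate degrades even though the total capacity looks sufficient on paper.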
 

Misgar

Respectable
Mar 2, 2023
I'm buying a dual 10Gb SFP+ card with link aggregation, a “TP-Link TL-SG3428X-M2” switch
I assume you're running one (or two) DAC or optical SFP+ links from the NAS up to the switch?

Are you 100% constrained to using 2.5GbE on all the other machines? Do you have any spare PCIe slots in these computers for a faster 10G NIC?

I've picked up cheap Solarflare SFN 7122F dual SFP+ cards on eBay for circa $30 each. I didn't need dual port but they were cheaper than single port. There are loads of Chinese "knock off" clones available advertised as "old stock" but I stick to genuine ex-server pulls.

I run most of my home network on a mixture of 10G Ethernet and 10G Fibre/DAC connections with MikroTik and Netgear 10G (unmanaged) switches.

If you don't need a managed 2.5G switch, consider this inexpensive MikroTik 8x10G SFP+ switch for $260:
https://www.servethehome.com/mikrotik-crs309-1g-8sin-review-inexpensive-8x-10gbe-switch/

It will be accessed by Windows machines, so I think SMB (Samba)
That's what I use on my four TrueNAS servers, because it's easy to set up. There are probably other options which I haven't tried. All my servers are arrays of 6 or 8 hard disks, running in RAID-Z2.
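
For a rough idea of usable space per RAIDZ level (a sketch that ignores ZFS metadata, allocation padding, and the usual advice to keep pools under ~80% full, all of which shave the numbers down):

```python
# Rough RAIDZ usable-capacity estimate: parity drives are subtracted
# from the total. Ignores ZFS metadata, allocation padding, and the
# common ~80% fill recommendation, so real numbers come out lower.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Approximate usable TB for a RAIDZ vdev with the given parity level."""
    if drives <= parity:
        raise ValueError("need more drives than parity disks")
    return (drives - parity) * drive_tb

print(raidz_usable_tb(3, 4.0, parity=1))  # the OP's 3x 4TB RAIDZ1 -> ~8.0 TB
print(raidz_usable_tb(6, 4.0, parity=2))  # a 6-disk RAID-Z2       -> ~16.0 TB
```

That is also roughly why a 3x 4TB RAIDZ shows up as ~7TB usable in practice: 8TB raw is about 7.3TiB before ZFS overhead.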

“Axagon PCIE NVME+SATA M.2 ADAPTER”
I've no experience of this adapter, but the most important thing to remember about TrueNAS Core is that it likes free access to all drives in the array, i.e. no RAID controllers. If the Axagon provides IT (Initiator Target) mode access to the drives, that's good. If it works in IR (RAID) mode, take care.

I recommend posting on the Serve The Home forum if you haven't already done so. A wealth of useful information and hardware experts.
https://forums.servethehome.com/index.php
 
Nov 14, 2024
4
0
10
I assume you're running one (or two) DAC or optical SFP+ links from the NAS up to the switch?
Exactly: a dual 10Gb SFP+ NIC from the NAS to the switch, and CAT6 from the switch's 2.5Gb ports to every workstation.

Are you 100% constrained to using 2.5GbE on all the other machines? Do you have any spare PCIe slots in these computers for a faster 10G NIC?
I've planned some hybrid approach for the future; the problem is that I won't be able to provide more than two simultaneous 10GbE connections anyway. That's a great tip with this switch, though. I didn't expect to see 10GbE this cheap; however, it would give me only 6x 10GbE ports (because 2 are taken by the NAS's NIC), and I may need more than 6 in the future. Buying this switch, plus NICs and SFP+ adapters for every workstation, for only 2x 10GbE at the same time doesn't seem worth it at this point.

1. Why the RAIDZ (RAID 5)?

2. What is your actual backup routine?
Because it's only for "current" projects, and I'll be doing backups every night to a second HDD NAS. For now there is no option for more than 3x M.2 disks: there are no more free PCIe slots, bifurcation is not supported on this motherboard, and adapters for 4x M.2 without bifurcation are pretty expensive. Still, 7TB of usable space is enough, for now at least. I'm keeping it as an upgrade option to buy an adapter, or a new motherboard with bifurcation and a new CPU.
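
For what it's worth, that nightly push to the second NAS can be done as a snapshot plus an incremental ZFS send/receive. A minimal sketch (dataset names, host, and snapshot labels are placeholders; TrueNAS can configure the equivalent as a built-in Replication Task in the web UI):

```python
import subprocess
from datetime import date, timedelta

# Minimal sketch of a nightly incremental replication to a second NAS.
# Dataset names, host, and snapshot labels below are placeholders.
SRC = "tank/projects"        # hypothetical source dataset on this NAS
DST_HOST = "backup-nas"      # hypothetical second NAS, reachable over SSH
DST = "backup/projects"      # hypothetical target dataset

prev = f"{SRC}@nightly-{date.today() - timedelta(days=1):%Y%m%d}"
curr = f"{SRC}@nightly-{date.today():%Y%m%d}"

subprocess.run(["zfs", "snapshot", curr], check=True)

# Send only the blocks changed since last night's snapshot, piped over
# SSH into zfs receive on the backup box.
send = subprocess.Popen(["zfs", "send", "-i", prev, curr],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", DST_HOST, "zfs", "recv", "-F", DST],
               stdin=send.stdout, check=True)
send.wait()
```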
 

USAFRet

Titan
Moderator
Because it's only for "current" projects, and I'll be doing backups every night to a second HDD NAS. For now there is no option for more than 3x M.2 disks: there are no more free PCIe slots, bifurcation is not supported on this motherboard, and adapters for 4x M.2 without bifurcation are pretty expensive. Still, 7TB of usable space is enough, for now at least. I'm keeping it as an upgrade option to buy an adapter, or a new motherboard with bifurcation and a new CPU.
OK.
I asked because far too many people assume that RAID is all the backup they need.
 
Nov 14, 2024
I've done a bit more research and found a company in my city that refurbishes enterprise switches and offers service for a few years. They have a Dell N4032 with 24x 10Gb RJ45 plus a 2x 40Gb QSFP+ extension for $1,000. That made me realize that 10Gb might actually be within my budget. Now the bottleneck will be the disks and the CPU.

They also have a dual QSFP+ 40Gb NIC on PCIe Gen3 x8. I know that PCIe Gen3 x8 offers about 8GB/s, which is less than 2x 40Gb/s (a RAIDZ with three M.2 drives is even a little less than 7GB/s on Gen3), but it still seems great. I'm planning to buy ICY BOX PCIe 4.0 x4 adapters to help with heat, and 3x Crucial P3 M.2 drives in RAIDZ (or maybe Samsung 990 Pro). I hope the CPU can handle it, but I'm worried about one of the M.2 drives sharing bandwidth through the PCH (with the system SATA SSD in this scenario, if I understand correctly). Can this slow down my overall experience significantly or cause other issues with RAIDZ?

What do you think about this setup? In the worst case, I could use a second PC with a 16-core Ryzen (I don't remember the exact model) whose motherboard even supports bifurcation, or buy a PCIe Gen3 x16 PLX adapter. Thanks for all the information, it helped me dig deeper into PCIe, lanes, M.2 drives, etc.!
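
For reference, a back-of-envelope check of where this chain tops out (a sketch assuming Gen3 runs at 8 GT/s per lane with 128b/130b encoding, the M.2 drives peak around 3.5GB/s each, and RAIDZ1 delivers roughly two data drives' worth of throughput):

```python
# Back-of-envelope check of the bottleneck chain in the revised plan.
# Assumptions: Gen3 is 8 GT/s per lane with 128b/130b encoding, the M.2
# drives are Gen3 x4 parts peaking near 3.5 GB/s, and RAIDZ1 with three
# drives delivers roughly two drives' worth of data throughput.

gen3_lane = 8 * 128 / 130 / 8          # ~0.985 GB/s usable per Gen3 lane

nic_slot = 8 * gen3_lane               # Gen3 x8 slot feeding the NIC, ~7.9 GB/s
nic_wire = 2 * 40 / 8                  # dual 40Gb QSFP+, 10 GB/s on the wire
pool     = 2 * 3.5                     # ~7 GB/s from the RAIDZ1 (2 data drives)

bottleneck = min(nic_slot, nic_wire, pool)
print(f"NIC slot : {nic_slot:.1f} GB/s")
print(f"NIC wire : {nic_wire:.1f} GB/s")
print(f"Pool     : {pool:.1f} GB/s")
print(f"Chain is limited to ~{bottleneck:.1f} GB/s ({bottleneck * 8:.0f} Gb/s)")
```

So the pool itself is the ~7GB/s ceiling mentioned above, with the Gen3 x8 NIC slot just above it. Note that if one of the three M.2 drives hangs off the PCH, the DMI uplink (x4, roughly 2-4GB/s depending on generation) caps that drive, and RAIDZ throughput tends to track the slowest member.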
 