Question: TrueNAS hardware bottleneck review (3x M.2 RAIDZ)

Nov 14, 2024
[Moderator note: Moving thread from Networking to Storage. More applicable.]

Hello, I'm building a NAS from my previous gaming PC. The use case is not typical: my goal is to serve 6-7 devices over 2.5Gb/s CAT6 (sometimes a few over remote access as well). Every day 100-300GB will be written/overwritten, so it's not really a use case for ARC, and I need constant access at this speed on every machine. That's why I'm buying a dual 10Gb SFP+ card with aggregation, a “TP-Link TL-SG3428X-M2” switch, and RAIDZ with 3x 4TB M.2 disks (no HDDs, 8TB of real space is enough).
PC specs:
CPU: Intel i7-6850K
Motherboard: Asus ROG Strix X99 Gaming
32GB RAM
NIC: Intel X520-DA2 (or similar)
3x 4TB M.2 disks in RAIDZ
What do you think of this setup? Is this CPU and motherboard enough? It looks stronger than those 10Gb QNAPs etc. I can't find info about the bus layout of the motherboard, but I'm not very worried because there won't be a GPU. I want to plug one M.2 directly into the motherboard and use PCIe adapters for the other two M.2 disks. Has anybody used this kind of adapter, for example the “Axagon PCIE NVME+SATA M.2 ADAPTER”? Do you see any red flags, bottlenecks, etc.? I want to plug the NIC into the first PCIe slot (instead of the GPU) and the two adapters into the two bottom ones. Can you confirm that this is optimal for lanes? This CPU has 40 lanes, but the motherboard manual is not clear to me when it comes to lane allocation.
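Here's how I'm counting the lanes, as a rough Python sketch. I'm assuming the NIC sits in an x8 slot, each adapter takes x4, and the onboard M.2 slot is x4; whether that onboard slot comes off the CPU or the PCH is something I couldn't confirm from the manual, so treat it as an assumption:

```python
# Back-of-envelope PCIe lane budget for the proposed build.
# Assumptions (not from the board manual): the X520-DA2 runs at x8,
# each M.2-to-PCIe adapter needs x4, and the onboard M.2 slot may
# hang off the PCH rather than the CPU.

CPU_LANES = 40  # i7-6850K exposes 40 CPU PCIe lanes

devices = {
    "Intel X520-DA2 NIC (x8 slot)": 8,
    "M.2 adapter #1 (x4)": 4,
    "M.2 adapter #2 (x4)": 4,
    "Onboard M.2 slot (x4, possibly via PCH)": 4,
}

used = sum(devices.values())
for name, lanes in devices.items():
    print(f"  {name}: x{lanes}")
print(f"Lanes requested: {used} of {CPU_LANES} CPU lanes")
print("Headroom left even before PCH devices" if used <= CPU_LANES
      else "Over budget: some slots will drop to fewer lanes")
```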
 
Last edited by a moderator:
Your network will be the limiting factor.
What network protocol(s) are you planning to use to access this hardware? Tuning your software will be more important than your hardware choices.
 
It will be accessed by Windows machines, so I think SMB (Samba) is the only choice I have? I hope the bottleneck ends up being my network, because that's the expensive part and I cannot upgrade to more than 2.5Gb/s on each workstation (especially the switch). My goal is to provide a constant 2.5Gb/s to every PC (6-7 of them), basically a maximum of 8 PCs before the connection drops below 2.5Gb/s on each.
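The quick math I'm relying on (a rough sketch; real SMB throughput will land a bit below line rate because of protocol overhead):

```python
# Capacity math: dual 10Gb uplink vs. workstations on 2.5GbE.
# Real SMB throughput will be somewhat below these line-rate figures.

uplink_gbps = 2 * 10           # dual 10Gb SFP+ with aggregation
client_gbps = 2.5              # each workstation's link

max_full_speed_clients = uplink_gbps / client_gbps   # 8
per_client_MBps = client_gbps * 1000 / 8             # ~312 MB/s
aggregate_MBps = uplink_gbps * 1000 / 8              # ~2500 MB/s

print(f"Clients at a full 2.5 Gb/s: {max_full_speed_clients:.0f}")
print(f"Per-client target: ~{per_client_MBps:.0f} MB/s")
print(f"Aggregate the NAS must sustain: ~{aggregate_MBps:.0f} MB/s")
```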
 
I'm buying a dual 10Gb SFP+ card with aggregation, a “TP-Link TL-SG3428X-M2” switch
I assume you're running one (or two) DAC or optical SFP+ links from the NAS up to the switch?

Are you 100% constrained to using 2.5GbE on all the other machines? Do you have any spare PCIe slots in these computers for a faster 10G NIC?

I've picked up cheap Solarflare SFN 7122F dual SFP+ cards on eBay for circa $30 each. I didn't need dual port but they were cheaper than single port. There are loads of Chinese "knock off" clones available advertised as "old stock" but I stick to genuine ex-server pulls.

I run most of my home network on a mixture of 10G Ethernet and 10G Fibre/DAC connections with MikroTik and Netgear 10G (unmanaged) switches.

If you don't need a managed 2.5G switch, consider this inexpensive MikroTik 8x10G SFP+ switch for $260:
https://www.servethehome.com/mikrotik-crs309-1g-8sin-review-inexpensive-8x-10gbe-switch/

It will be accessed by Windows machines, so I think SMB (Samba)
That's what I use on my four TrueNAS servers, because it's easy to set up. There are probably other options which I haven't tried. All my servers are 6 or 8 hard disk arrays, running in RAID-Z2.
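If you want a quick sanity check of real-world SMB throughput from one of the Windows clients, a crude timed write like this does the job (the mapped drive letter and test size below are placeholders, so adjust them to your setup):

```python
# Crude sequential-write test against a mapped SMB share.
# Z:\ and the 4 GiB size are placeholders -- point it at your own share.
import os
import time

TEST_FILE = r"Z:\throughput_test.bin"    # hypothetical mapped share path
CHUNK = os.urandom(16 * 1024 * 1024)     # 16 MiB of incompressible data
TOTAL_BYTES = 4 * 1024**3                # 4 GiB total

start = time.time()
written = 0
with open(TEST_FILE, "wb") as f:
    while written < TOTAL_BYTES:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.time() - start

print(f"Wrote {written / 1024**2:.0f} MiB in {elapsed:.1f} s "
      f"({written / 1024**2 / elapsed:.0f} MiB/s)")
os.remove(TEST_FILE)
```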

“Axagon PCIE NVME+SATA M.2 ADAPTER”
I've no experience of this adapter, but the most important thing to remember about TrueNAS Core is that it likes free access to all drives in the array, i.e. no RAID controllers. If the Axagon provides IT (Initiator Target) mode access to the drives, that's good. If it works in IR (RAID) mode, take care.

I recommend posting on the Serve The Home forum if you haven't already done so. A wealth of useful information and hardware experts.
https://forums.servethehome.com/index.php
 
I assume you're running one (or two) DAC or optical SFP+ links from the NAS up to the switch?
Exactly: dual 10Gb SFP+ from the NAS to the switch, and CAT6 to every workstation from the switch's 2.5Gb ports.

Are you 100% constrained to using 2.5GbE on all the other machines? Do you have any spare PCIe slots in these computers for a faster 10G NIC?
I've planned a hybrid approach for the future; the problem is that I won't be able to provide more than two 10GbE connections simultaneously anyway. That switch is a great tip, though. I didn't expect to see 10GbE this cheap; however, it would only give me 6x 10GbE ports (because 2 are taken by the NAS's NIC), and I may need more than 6 in the future. Buying this switch, plus NICs and SFP+ adapters for every workstation, for only 2x 10GbE at the same time doesn't seem worth it at this point.

1. Why the RAIDZ (RAID 5)?

2. What is your actual backup routine?
Because it's only for "current" projects and I'll be doing backups every night to a second HDD NAS, and for now there is no option for more than 3x M.2 disks. There are no more free PCIe slots, bifurcation is not supported on this motherboard, and the adapters that take 4x M.2 without bifurcation are pretty expensive. Still, 7TB of usable space is enough, for now at least. I'm treating it as an upgrade option to buy such an adapter later, or a new motherboard with bifurcation and a new CPU.
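The capacity figure I'm working from, ignoring ZFS metadata and slop space, which shave a bit more off:

```python
# Rough usable-capacity estimate for a 3-wide RAIDZ1 of 4TB drives.
# Ignores ZFS metadata, slop space and padding overhead.

drives = 3
drive_tb = 4          # marketing terabytes (10^12 bytes)
parity_drives = 1     # RAIDZ1

raw_tb = drives * drive_tb
usable_tb = (drives - parity_drives) * drive_tb    # ~8 TB before overhead
usable_tib = usable_tb * 1e12 / 2**40              # ~7.3 TiB as tools report it

print(f"Raw: {raw_tb} TB, after parity: ~{usable_tb} TB (~{usable_tib:.1f} TiB)")
```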
 
Last edited:
OK.
I asked because far too many people assume RAID is all the backup they need.
 
I've done a little more research and found a company in my city that refurbishes enterprise switches and offers service for a few years. They have a Dell N4032 (24x 10Gb RJ45 + 2x 40Gb QSFP+ expansion module) for $1000. That made me realize that 10Gb might actually be within my budget. Now the bottleneck will be the disks and CPU. They also have a dual QSFP+ 40Gb NIC on PCIe Gen 3 x8. I know that PCIe Gen 3 x8 offers about 8GB/s, which is less than 2x 40Gb/s (RAIDZ with three M.2 drives is even a little less than 7GB/s on Gen 3), but it still seems great. I'm planning to buy ICY BOX PCIe 4.0 x4 adapters to help with heat, and 3x Crucial P3 M.2 drives in RAIDZ (or maybe Samsung 990 Pro). I hope the CPU can handle it, but I'm worried about one of the M.2 drives sharing bandwidth with the PCH (where the system SATA SSD sits in this scenario, if I understand correctly). Can this slow down my overall experience significantly or cause other issues with RAIDZ? What do you think about this setup? In the worst-case scenario, I could use a second PC with a 16-core Ryzen (I don't remember the exact model), whose motherboard even supports bifurcation, or buy a PCIe Gen 3 x16 PLX adapter. Thanks for all the information, it helped me dig deeper into PCIe lanes, M.2 drives, etc.!
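These are the rough numbers I'm comparing (per-lane PCIe throughput is the usual effective figure, and the RAIDZ estimate is an optimistic sequential guess, not a benchmark):

```python
# Comparing the Gen3 x8 NIC slot against 2x 40Gb and the pool itself.
# Per-lane throughput is the usual effective figure for PCIe 3.0;
# the RAIDZ number is an optimistic sequential estimate, not a benchmark.

pcie3_lane_GBps = 0.985                 # ~985 MB/s effective per Gen3 lane
nic_slot_GBps = 8 * pcie3_lane_GBps     # x8 slot: ~7.9 GB/s
dual_40g_GBps = 2 * 40 / 8              # 10 GB/s of raw line rate

m2_read_GBps = 3.5                      # Gen3 x4 NVMe class (e.g. Crucial P3)
raidz_read_GBps = 2 * m2_read_GBps      # 3-wide RAIDZ1: roughly 2 data drives' worth

print(f"NIC slot (Gen3 x8):    ~{nic_slot_GBps:.1f} GB/s")
print(f"Dual 40Gb line rate:    {dual_40g_GBps:.1f} GB/s")
print(f"RAIDZ sequential read:  ~{raidz_read_GBps:.1f} GB/s (optimistic)")
```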
 
Last edited:
found a company in my city that refurbishes enterprise switches
On a note of caution, before buying any (high quality) enterprise hardware, check how noisy the fans are. If the switch will be mounted in another room, basement, large cupboard, out of earshot, then you won't be affected by the constant noise. I put up with the screaming Delta fans in my HP servers at start up, but they settle down after 4 minutes to a low hum. I can always move to another part of the house when running backups.

I know that PCIe Gen 3 x8 offers 8GB/s
The unfortunate side effect of upgrading one part of your system (e.g. to 10G or 40G network) is another part (processor bus speed) becomes a bottleneck. If PCIe Gen.3 becomes a limiting factor, you'll just have to accept it "as is" or buy new hardware with Gen.4 support or even Gen.5. I'd stick with the X99 board.

I'm worried about one of the M.2 drives sharing bandwidth with the PCH
In practice, you may not observe any significant slow down. Even if you do, how much money are you prepared to spend on new hardware to overcome the bottleneck(s)?

I suggest running a few benchmarks with the minimum number of new parts. This might help you work out the speed of your ZFS RAIDZ, so you can match it to new network hardware.
https://icesquare.com/wordpress/zfs-performance-mirror-vs-raidz-vs-raidz2-vs-raidz3-vs-striped/
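You can also watch what the pool itself is doing while a benchmark or big copy runs, e.g. by sampling `zpool iostat` in scripted mode. A minimal sketch, assuming your pool is called "tank" (substitute your own name) and noting that the first sample is a since-boot average:

```python
# Sample pool read/write bandwidth while a benchmark or copy runs.
# "tank" is a placeholder pool name -- substitute your own.
import subprocess

POOL = "tank"
INTERVAL = 1      # seconds between samples
SAMPLES = 30

proc = subprocess.Popen(
    ["zpool", "iostat", "-Hp", POOL, str(INTERVAL)],
    stdout=subprocess.PIPE, text=True,
)

for i, line in enumerate(proc.stdout):
    if i == 0:
        continue          # first line is the since-boot average, skip it
    if i > SAMPLES:
        proc.terminate()
        break
    # scripted output: name alloc free ops_r ops_w bw_read bw_write (bytes/s)
    fields = line.split()
    bw_read, bw_write = int(fields[-2]), int(fields[-1])
    print(f"read {bw_read / 1024**2:7.1f} MiB/s   write {bw_write / 1024**2:7.1f} MiB/s")
```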

They have a Dell N4032 (24x 10Gb RJ45 + 2x 40Gb QSFP+ expansion module) for $1000.
No point buying a 40G-capable switch if your hardware doesn't saturate 10G, unless you're future-proofing?

There's also the question of SFP+ transceiver compatibility (let alone 40G QSFP/QSFP+). Some transceivers are pretty much "universal" and work with many switches/NICs, others are far more particular. Some switches are "locked" to the manufacturer's own brand of transceiver, which can be awkward/expensive.


Or you could use DAC links for short runs.
https://www.servethehome.com/what-is-a-direct-attach-copper-dac-cable/

10Gbase-T (RJ45) adapters use more power than optical transceivers, which can cause problems with some "underpowered" switches.
https://www.servethehome.com/sfp-to-10gbase-t-adapter-module-buyers-guide/

[Image: MikroTik SRJ10 10GBase-T module, in and out of a switch]



Now the bottleneck will be the disks and CPU.
Bear in mind writing RAID parity bits consumes additional processor cycles (blindingly obvious). I accept my four RAID-Z2 systems will be slightly slower as a result.
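For single parity the work is just XOR across the stripe, but it happens on every write. A simplified illustration (RAIDZ2/Z3 add further parity maths on top, so they cost a little more CPU):

```python
# Simplified single-parity illustration: parity is the XOR of the data
# blocks in a stripe, and every write pays this CPU cost.

def parity(blocks: list[bytes]) -> bytes:
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            p[i] ^= byte
    return bytes(p)

stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"]   # two tiny data blocks
p = parity(stripe)
print("parity:", p.hex())

# A "failed" block is rebuilt by XORing the survivors with the parity:
recovered = parity([stripe[1], p])
assert recovered == stripe[0]
print("recovered:", recovered.hex())
```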

Can this slow down my overall experience significantly or cause other issues with RAIDZ?
I think you need to post this type of question on the TrueNAS forums, or just browse a few questions posed by other people:
https://www.truenas.com/community/threads/good-nvme-speeds-but-should-be-faster.114711/