Help with DAS/NAS DIY


BLACKBERREST3
May 23, 2017
Hello, I am looking to build a DAS/NAS. I have been researching the best solutions for performance, and it has left me with many questions. I would like to use existing hardware if possible, which consists of: an i7-6700K, 64 GB of non-ECC Corsair DDR4, and a Z170 Deluxe (20 lanes altogether). I will use SyncBackPro for scheduled backups. I want insane read/write speeds on the main work portion (around 8 TB) and two high-speed redundant copies (I'll add more as I need it). I am looking towards software RAID with either FreeNAS or Windows Server 2016. I also need my data to stay byte-perfect, with no degradation over time, to preserve precious data. I plan to use this as a personal DAS/NAS, not something that would need to run all the time.

My questions are:
1. ZFS or ReFS, suggestions?
2. Can I use RAM as a non-volatile super cache or lazy read/write buffer if it is always powered and I keep redundancies to prevent data loss?
3. What is the best setup for performance that also lets me add more storage easily if I need it: RAID 0 SSDs + RAID 10 HDDs, tiering, or something else? (Rough theoretical numbers are sketched below.)
4. What SSD/HDD combos do you guys recommend? I am leaning towards Seagate for HDDs.
5. If a RAID array fails, does that mean I must replace only the drive that failed, or all of the drives because the failure damaged them somehow (only talking about hardware, not data)?
6. What is the best way to connect to this DAS/NAS: direct PCIe PC-to-PC, 40/100 GbE, or something else?
7. How would I set up a 40/100 GbE connection, and what would I need?
8. Is there anything else I may need to know or want relating to this?
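For question 3, here is the back-of-envelope math I have been using to compare layouts. The drive counts, sizes, and per-drive speeds below are placeholders rather than recommendations, and real arrays will land well below these theoretical ceilings:

```python
# Rough, theoretical capacity/throughput ceilings for a few layouts.
# All per-drive numbers are assumptions for illustration only; real-world
# results will be lower (controller, filesystem, and network overhead).

def raid0(n, size_tb, mbps):
    """n striped drives: full capacity, roughly additive throughput."""
    return {"usable_tb": n * size_tb, "read_mbps": n * mbps, "write_mbps": n * mbps}

def raid10(n, size_tb, mbps):
    """n drives in striped mirrors: half capacity, reads scale with n, writes with n/2."""
    return {"usable_tb": n * size_tb / 2, "read_mbps": n * mbps, "write_mbps": n // 2 * mbps}

# Assumed drives: 2 TB SATA SSDs at ~500 MB/s, 8 TB HDDs at ~200 MB/s.
work = raid0(4, 2, 500)      # fast 8 TB work area
backup = raid10(8, 8, 200)   # redundant bulk pool

print("work  :", work)    # {'usable_tb': 8, 'read_mbps': 2000, 'write_mbps': 2000}
print("backup:", backup)  # {'usable_tb': 32.0, 'read_mbps': 1600, 'write_mbps': 800}
```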
 


My perception may be skewed. My wife has exceptionally large hair dryers.
 


Check mine out; it's somewhat quieter than a hair dryer. It helps that it's in a closet :)
 
I'm still researching, but I have a question that may or may not relate to my build. It would be really cool to use my existing motherboard as a PCIe switch, given how expensive and rare switches are [that may be a possibility in the future (https://semiaccurate.com/2015/05/12/avagos-pex9700-turns-plx-pcie3-switch-fabric/)], but I was wondering if there may be a less expensive option already available. I have already looked at the Magma ExpressBox, OneStopSystems (maxexpansion), the Xpander, Amfeltec (does not look like Gen3), and others, but they are all over $3-4k, at which point you could spend $2k more and get PCIe networking instead. I was looking for something in the range of $1,000. The reason is that almost no Gen3 16-lane HBAs exist, except this one: https://www.serialcables.com/largeview.asp?cat=355&tier=264&id=1722 . If something like a non-switch-based x16-to-four-x4 card exists, that would be perfect. Sadly, I have not come across one yet. I have also thought of something really funny that probably does not work at all: what if you connected two of http://www.maxexpansion.com/adapters/pcie3-x16 with one of http://www.maxexpansion.com/cables/pcie-x16 ? Would you get a PEX9700?
 


Why do you want a 16-lane HBA? The industry standard is 8 lanes, which provide more than enough bandwidth for a single HBA. If you need more bandwidth, you use two HBAs.

There's no advantage to a 16-lane HBA over two 8-lane HBAs. By the time you run out of 8-lane slots, you usually have to build a new system for other reasons anyway.
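To put rough numbers on that (my own back-of-envelope, assuming PCIe 3.0 and typical spinning-disk throughput):

```python
# PCIe 3.0 is 8 GT/s per lane with 128b/130b encoding, i.e. ~0.985 GB/s
# usable per lane, per direction.
PCIE3_GBPS_PER_LANE = 8 * 128 / 130 / 8  # ~0.985 GB/s

def slot_bandwidth(lanes):
    return lanes * PCIE3_GBPS_PER_LANE

# Assumed workload: an HBA fanned out to spinning disks doing ~0.25 GB/s
# sequential each (an optimistic figure for large HDDs).
drives = 16
per_drive = 0.25

print(f"x8 slot : {slot_bandwidth(8):.2f} GB/s")   # ~7.88 GB/s
print(f"x16 slot: {slot_bandwidth(16):.2f} GB/s")  # ~15.75 GB/s
print(f"{drives} HDDs : {drives * per_drive:.2f} GB/s aggregate")  # 4.00 GB/s
# Even 16 HDDs streaming flat out sit comfortably inside an x8 slot.
```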

Regarding PCIe switches, what do you plan to do with a bare IC? Unless you're half decent at laying out circuit boards and have a reflow station lying around, PCIe switches are only of interest to motherboard and product designers (and HPC engineers). Also, switches don't improve bandwidth. Think of them like an Ethernet switch: you can connect lots of other devices, but you're still limited to the same total speed as without the switch.

Regarding the use of PCIe as an interconnect between computers, there are some significant issues involved. Unless you have the resources to ensure software compatibility with the platform, I'd stick with more established alternatives. For this setup, I'd stick with 1 Gb ethernet or perhaps 10 Gb, but only if you find that 1 Gb isn't sufficient in the actual workload on this system (i.e. build the system before worrying about 10 Gb).
 
There are benefits to having a PCIe switch. Switches work by multiplexing lanes, much like a motherboard chipset does, except with a more rigid bandwidth allocation. When one device is using less bandwidth than it was allocated, it can drop to fewer lanes, giving other devices on the system more bandwidth, or the switch can even be set up for PCIe redundancy. Most graphics cards and NVMe drives use at least 4 lanes of Gen3, while a sound card uses less than 1. Read this: https://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/ . For example, how would you connect 3 NVMe drives, a GPU, a sound card, and 2 HBAs to your system without a fan-out card? There wouldn't be enough slots to run these at once, even though they only take up around 32-34 lanes. You don't even have to have a switch; you could have a splitter that turns x16 into 4 x4 or 2 x8 links and have enough granularity for any device. By the time you run out of slots, you wouldn't have used even half of the lanes your system could offer. I'm looking for said splitter, which I presume is less expensive than a switch, though I can't verify that one exists. I am also looking for a 16-lane HBA to take advantage of the limited number of PCIe slots, so I can avoid PCIe switches and splitters altogether. Yes, I am still looking to combine my datacenter and workstation in one. The Storinator advertises a quiet 30-bay chassis, so I was talking to a representative over there about a custom solution. If worst comes to worst, I could put a different motherboard and CPU in it and put the Storinator somewhere else, but then I would be limited by what type of networking I could set up. 10 GbE network switches cost $1,000-1,500 from Netgear.
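Here is the rough lane budget I've been using for that example. The per-device widths are just my assumptions about common link widths, so the total moves around depending on how you size the GPU and HBA links, but either way it blows well past a mainstream platform:

```python
# Rough PCIe lane budget for the example device list above.
# Widths are assumed typical Gen3 link widths, not measured requirements.
devices = {
    "GPU": 16,
    "NVMe SSD #1": 4,
    "NVMe SSD #2": 4,
    "NVMe SSD #3": 4,
    "Sound card": 1,
    "HBA #1": 8,
    "HBA #2": 8,
}
platforms = {
    "Z170 (20 CPU lanes)": 20,
    "Single 40-lane HEDT CPU": 40,
    "Dual socket (80 lanes)": 80,
}

total = sum(devices.values())
print(f"Lanes requested: {total}")  # 45 with these assumed widths
for name, lanes in platforms.items():
    verdict = "fits" if total <= lanes else "needs a switch/splitter or narrower links"
    print(f"  {name}: {verdict}")
```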
PCIe was never meant as a networking protocol, but rather as an interconnect between a host and its peripherals; the latest PEX chips hope to get around this, which you can read about here: https://semiaccurate.com/2015/05/12/avagos-pex9700-turns-plx-pcie3-switch-fabric/ . I think PCIe networking is the next big thing, but Ethernet still has a pretty large market share, so only time will tell. I don't have enough knowledge to build a circuit board of that caliber, or any board for that matter. Rework stations look interesting; I wish I had gotten into that kind of stuff as a kid.

Edit: Found another HBA: https://www.supermicro.com/products/accessories/addon/AOC-SLG3-4E4R.cfm . That makes 2 so far. Also, this one is 8x less expensive than the other one, for only 23.63 Gbit/s less bandwidth and no switching.
 
There are ways to solve this problem without resorting to a high speed intranet.

If you really want to use PCIe as a way to connect PCs together, be aware that there are massive security considerations to take into account. It's bad enough that you can bypass a TPM entirely via the PCIe bus. I wouldn't even consider those PEX chips for use as an intranet until those security implications were completely addressed. That isn't likely to happen, though. It would add latency to a system that's extremely sensitive to that sort of thing.

I wouldn't get an x16 HBA regardless of your potential PCIe card count. I'd throw out the sound card and GPU first. I'd also seriously question the need for three PCIe SSDs. NVMe is fine, but three PCIe SSDs simply strike me as a poor use of space; there are other interfaces for that. If that still weren't enough, I'd build a separate machine with minimal processing power and simply use it as the backup target. If you get a rack, it's simple to leverage that sort of approach, and it would eliminate the need for more than a single HBA in the workstation.

In your situation, I'd suggest not trying to put that many cards in a single machine. Consider the U.2 interface and get a USB sound card. The Xonar isn't that good anyway (I have one, too).
 
I thought about the same thing too. I'm skirting the edge of what is actually possible at this point in time. I have no doubt that 10 or even 5 years from now, there will be options for an even greater amount of storage and connectivity than ever before. Until that time comes, it doesn't hurt to get creative. The toughest part of this whole build is planning for the future. I'm banking on devices still using PCIe Gen3 as their main or secondary connectivity 10 years from now. I have no idea what's going to come out next, but I want to at least plan for the future. That includes having enough slots and lanes for future devices, recyclability so I can reuse parts, and scalability so I don't hit a wall that would require a new platform. I am a firm believer in "buy nice - don't pay twice", but that doesn't mean I won't consider other approaches as well. As far as security goes, I'm not too worried; it's not like I'm a specific target for anyone...yet. That's all for the backstory right now :) . I have a lot of different paths to take with this, and here is what I have found so far.

All-in-One vs. Workstation + Datacenter:
- Future proofing: both are future proof, scalable, and reusable
- Noise: loud vs. quiet + loud
- Cost: the all-in-one is least expensive; the split largely depends on networking gear and, to a lesser degree, extra components
- Lanes: 80 (plenty) vs. 80 + 40-80 (overkill)
- Bottleneck: none vs. depends on the type of network
- PCIe switch/splitter: most likely needed vs. less likely, depending on workstation devices
- Build difficulty: harder vs. easier
I am not trying to be biased towards the all-in-one, but if it weren't for the noise factor, that would be the best choice. That is all I can remember so far; I'm not sure if I missed something.

Edit: To answer your last statements, the C612 chipset uses DMI 2.0, which is not enough bandwidth for NVMe drives, which means I would need to use a PCIe slot anyway. I would rather have the OS and programs on an NVMe drive. The number of HBAs actually depends on the performance I need. I don't want to waste PCIe slots, because PCIe switches/splitters are more expensive than a single x16 HBA. If I did end up separating the datacenter from the workstation, then I would be less concerned with how many PCIe devices take up space in the datacenter.
 


If you were building this 10 years ago (2007)...what drive sizes would you have been looking at?
1TB drives were introduced that year.

SSDs? Hahahahaha.

Build so you can expand, but be prepared to go in a whole different direction in a few years.
You can't build a "Forever Box", no matter how much money you throw at it today.
 


Neither am I.
Yet my little Qnap box has had no fewer than 5 individual login attempts on the default admin account since I stood it up a few months ago. All failed, of course, because I'm not silly enough to leave the default admin account enabled.

Russia x 2, Egypt, Portugal, Ohio.
 
PCIe Gen 4 is due next year. Gen 5 is due a couple of years later. Five years from now, you won't be looking at a build like this at all. ThreadRipper is due in weeks. I wouldn't go for a build like this even then.

Regarding the workstation+storage server build, you don't need a crazy network. Just allocate half the drives to the storage server for backup purposes. Unless you need twice-daily backups, a simple gigabit network would suffice. All you'd need is a half decent switch that won't destroy the rest of the network while the backup is running, and a second port for RDA.
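As a rough sanity check on that (the throughputs below are assumed practical rates rather than line rates, and the job sizes are placeholders):

```python
# Rough backup-window estimates. Throughputs are assumed practical rates
# (not line rates) and the job sizes are placeholders.
def hours(size_gb, mb_per_s):
    return size_gb * 1000 / mb_per_s / 3600

links = {"1 GbE": 110, "10 GbE": 1100}  # ~MB/s achievable in practice
jobs = {"one-off full copy (8 TB)": 8000, "nightly delta (100 GB)": 100}

for job, size_gb in jobs.items():
    for link, speed in links.items():
        print(f"{job} over {link}: ~{hours(size_gb, speed):.1f} h")
# 8 TB over 1 GbE is ~20 h, but that's a one-time cost; a 100 GB nightly
# delta is ~15 minutes, which fits easily inside a backup window.
```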

Regarding "no bottleneck vs depends on networking", that's fairly naive. There's always a bottleneck. It's almost always a function of the workload, and usually best addressed by changing the workload rather than the hardware.

Regarding security, anything that would target a datacenter with PCIe networking has the potential to hit you. I wouldn't be comfortable putting a build this expensive in that kind of situation. It'd take a few minutes to essentially render the entire thing unusable.

Regarding DMI 2.0, motherboard connectors aren't limited to chipset connectivity. That point is nonsense.

Regarding HBAs and the bandwidth they need, I'll simply point out that x8 HBAs outnumber x16 HBAs by hundreds to one in the market, including in HPC applications.

Regarding PCIe switches and splitters, I would rather deal with two known quantities (separate builds and a network) than try to make a non-standard approach work on my own. It's not worth the headaches.

Regarding wasting PCIe slots, you're considering throwing a Xonar in there. There are better uses for the slot, to say the least.

Regarding where you put the OS, be aware that the BIOS will take as long to initialize as most computers need to finish booting, let the user log in, and fire up a few programs. You won't see any practical difference between a SATA SSD and an NVMe drive as a boot disk.

What you're doing isn't pushing the limits of what's possible at all. It's trying to make what's possible as convoluted as you can. Your performance requirements aren't particularly special in the datacenter. You're just trying to do it in a way that no professional would ever consider. The technologies you're talking about are intended to solve specific niche problems. They are not intended to be a cure-all, and won't perform well in general purpose applications.

Regarding future-proof builds, the easiest way to future-proof a system is to make it independent of any particular hardware, make it able to use all the hardware at its disposal without caring what it is, and design the system to be expanded with new builds rather than upgrading a single build. Any single server/workstation build should be designed to meet the needs of the immediate task at hand. When the task changes, upgrade the build if you can to meet the new demands; otherwise, it's time for a new build.

Designing a single build specifically to be expanded and upgraded to any level of performance is simply foolish. Even supercomputers are replaced on a regular schedule, and rarely upgraded in place.

I understand you don't have any experience with this sort of build. You should get familiar with the status quo before venturing into no-man's-land. Build a server with a pair of HBAs, two PCIe SSDs for workspace, a SATA SSD boot drive, and a GPU. If that doesn't solve the problem, you won't need to replace any of those parts, and that build is virtually guaranteed to handle anything you throw at it.
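For reference, a rough lane budget for that baseline, assuming typical link widths and a 40-lane CPU (the SATA boot drive doesn't consume CPU PCIe lanes):

```python
# Lane budget for the suggested baseline build. Link widths are assumed;
# the SATA boot SSD doesn't consume CPU PCIe lanes.
build = {"HBA #1": 8, "HBA #2": 8, "PCIe SSD #1": 4, "PCIe SSD #2": 4, "GPU": 16}
cpu_lanes = 40  # assumed 40-lane HEDT/server CPU

used = sum(build.values())
print(f"{used}/{cpu_lanes} CPU lanes used, {cpu_lanes - used} spare")  # 40/40 used
```

And per the PCIe scaling article linked earlier, dropping the GPU to x8 gives up very little if you ever need to free lanes.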

If you aren't going to need the build before ThreadRipper or Naples, respec the build when those are released to save some cash.

Lastly, I'll simply point out that most HBAs assume they'll be used in a server with server-grade airflow. A quiet version of this type of build isn't a good idea.
 
Solution
