BLACKBERREST3:
How does this look?
Workstation
Z10PE-D16 WS
Phantom 820 – have 1
2x XEON E5-2687W
2x KINGSTON 128GB 2400MHz DDR4 ECC Reg CL17 (KVR24R17D4K4/128)
AX1500i PSU - have 1
2x Samsung 4TB 850 Evo 2.5" SATA III SSD Raid 0 - I'll upgrade to PCIE or something else when it becomes cost effective.
2x ST4000DX001 - have them.
2x H110i GTX (CW-9060020-WW) - Already have 1, but to get another I will have to buy it refurbished.
LSI 9305-24i HBA
960 PRO with WINGS PX1 HBA
Nvidia GTX 970 – have
Xonar essence stx – have
Server
Z170-Deluxe - have
i7 6700k - have
2x Corsair DP 32GB (2x16GB) DDR4 3200 C16 (CMD32GX4M2C3200C16) – have 1, but I might not buy the other if I don’t need it.
AX1500i PSU – Need to buy another one.
20x Seagate Archive HDD 8TB SATA III
LSI 9305-24i HBA
NZXT Kraken x61 280mm - have
PC-D8000
From what everyone says about SSI CEB boards, the mounting holes don't line up that well, and even if the board holds in place, the more stuff you add, the more pressure it puts on the board. I could always try it, and if there seems to be too much load on the mobo, then I could get a 4U chassis like you said and build a PC desk to house it.
To answer rogue leader's question, I am going to be using it as an all-in-one workstation to help me with virtualization projects, data management, video editing, and yes, gaming too. I need it to have plenty of PCIe slots for the GPU, sound card, PCIe SSD, possibly a NIC, and anything else I add later down the road. I also plan to recycle this build into a second node, or turn it into the main node for more storage, in the future. It will eventually be re-purposed for what it was intended for.
There are more than a few issues. In fact, I'm having trouble finding a single part that's a good choice for your application. The only possible exception is a single (not double) kit of 32GB RAM for the server. Even that's a stretch.
You're going to butcher those 850 EVOs. In return, they will seize, thrash, and scream in retaliation. You'll get better performance from old-school 10k Raptors in a RAID array. Even standard 7200 drives would be a step up. Those are on the list of SSDs that RAID will eat alive, just like every other SSD on Tom's Hardware except the P4800X and 750. You should be looking on Tom's IT Pro.
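If you want to see the problem for yourself before committing, a sustained-write test will show the throughput fall off a cliff once the drive's write cache is exhausted. Here's a rough Python sketch; the file path, 100 GB total, and chunk size are placeholder values, so adjust them to the drive under test and the free space you actually have:

```python
# Rough sustained-write probe: writes a large file in 1 GiB chunks and prints
# per-chunk throughput so you can watch for the drop once any cache is exhausted.
# TARGET, CHUNK_MB, and TOTAL_GB are placeholders - point them at a scratch
# location on the drive you want to test.
import os
import time

TARGET = "/mnt/test/sustained_write.bin"   # scratch file on the drive under test
CHUNK_MB = 1024                            # report throughput every 1 GiB
TOTAL_GB = 100                             # total amount to write

block = os.urandom(4 * 1024 * 1024)        # 4 MiB of incompressible data
chunks = (TOTAL_GB * 1024) // CHUNK_MB

with open(TARGET, "wb", buffering=0) as f:
    for i in range(chunks):
        start = time.time()
        written = 0
        while written < CHUNK_MB * 1024 * 1024:
            f.write(block)
            written += len(block)
        os.fsync(f.fileno())               # force it out of the page cache
        elapsed = time.time() - start
        print(f"chunk {i + 1}/{chunks}: {CHUNK_MB / elapsed:.0f} MiB/s")

os.remove(TARGET)
```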
The 960 Pro is still a consumer drive. Do not subject it to sustained, write-heavy workloads. As a boot drive, it will work, but it's not something I'd put in a server or workstation of this caliber.
The ST4000DX001 isn't good for sustained sequential workloads (at all, ever), either. Most SSHDs will seize and thrash in sequential workloads. If that's what you had before, it explains a lot. WD Reds would be much better. Some laptop drives would be better. A Celeron could fully load these in sequential compression workloads. I wouldn't be surprised to see an Atom fully load them. A 4-core ARM would be a good fit for those in a sequential compression workload.
Those Archive drives use SMR technology. They're slower than mud. I wouldn't even put them in a cold storage server, as the backup times would be too long. Some people use them for exactly that, though. I'm just not that patient.
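To put rough numbers on why the backup window bothers me, here's a back-of-envelope calc. The sustained rates are assumed placeholder figures, and SMR steady-state behavior is very workload-dependent, so treat it as an order-of-magnitude estimate only:

```python
# Back-of-envelope: how long a full 8 TB drive takes to write at an assumed
# sustained rate. Both rates below are assumptions; real numbers vary a lot.
TB = 1e12

def days_to_fill(capacity_tb, mb_per_s):
    seconds = capacity_tb * TB / (mb_per_s * 1e6)
    return seconds / 86400

for label, rate in [("SMR steady-state (assumed)", 40),
                    ("typical PMR drive (assumed)", 150)]:
    print(f"{label}: {days_to_fill(8, rate):.1f} days per 8 TB drive")
```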
The H110i GTX is not suitable for use in this type of build. Stick to air coolers. The x61 shouldn't be used in a server either. Even if the H110i GTX were reliable enough for a workstation like this, it doesn't support Narrow ILM mounting. You can't use it with dual socket boards.
For the money, I wouldn't consider the HBA for the workstation. There are solid RAID cards for that type of build, and you could get entry-level hardware RAID and an expander for that price in the server.
Registered RAM is a waste of money for this build. Unless you have a specific workload in mind that will exceed 128 GB per socket, stick with unbuffered. If the motherboard won't support that, get a different board. Seriously, it's just a bad way to spend your money.
The server's case isn't a good option for that many drives. The odds of a drive failing are quite high when you run that many. Hot-swap bays will make the situation bearable. The Lian-Li case will make maintenance quite a chore.
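As a quick illustration, assuming independent failures and a placeholder 3% annualized failure rate per drive, the chance of at least one drive dying in a given year across 20 drives is already getting close to a coin flip:

```python
# Rough odds of at least one drive failure per year across the array,
# assuming independent failures and an assumed 3% annualized failure rate.
afr = 0.03          # assumed per-drive annualized failure rate
drives = 20

p_at_least_one = 1 - (1 - afr) ** drives
print(f"P(>=1 failure in a year, {drives} drives): {p_at_least_one:.0%}")
```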
The AX1500i is a good PSU, to be sure. I'd never put that in a server myself, though. It's over-priced, and it doesn't support PMBus monitoring. Get a server grade PSU and use it with the workstation.
The GTX 970 doesn't belong in that workstation. The rest of the build won't play games well at all; it's not intended to. Get a Quadro or FirePro if you want a GPU suitable for the workstation and actually need one. Those will handle virtualization much better than the 970.
The Xonar is also quite out of place. I don't think it will cause you any particular issues, but it's certainly not something I've ever seen or considered for a build like this.
Now for the advice on what should happen:
Regarding the RAM and motherboard, I've already pointed you towards Supermicro and Tyan. I will do so again. Find yourself a board that doesn't require registered RAM, and get that instead. Then, get two 128 GB kits of unbuffered ECC RAM if you actually need a total of 256 GB. That's 16 sticks of 16 GB each.
Take the money saved on RAM, and put it into the parts that you're actually interested in: the SSDs. Check out the Tom's IT Pro reviews for guidance. I would point you to the SN150 in the AIC form factor or the DC P4500/4600 in the U.2 form factor. If those are too expensive, check out some mid-endurance options. Do not even think about consumer SSDs. They will not give you what you're looking for. Consider the WD Reds before thinking about a consumer TLC drive.
Regarding the SSHDs, throw them out. They're useless for the workload you're describing. If it's as sequential as I think it is, a magnetic tape drive will outperform the ST4000DX001.
For the workstation, you have no reason to get that HBA. None at all. If you convert it to a node down the line, just get an expander then. For this build, you just need an 8-port mid-range RAID card.
Since you don't have the ability to run performance verification trials on the PSU, stick to established server brands like Supermicro.
A pair of E5-2637s would be able to move data around significantly faster than any other component in the workstation build, even in heavy compression workloads. You should have a well-grounded, well-documented reason for getting a faster model than that.
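If you want to sanity-check that, a rough estimate of aggregate compression throughput against the array's sequential rate shows where the bottleneck lands. Every figure below is an assumption; per-core rates swing wildly with the codec, level, and data:

```python
# Rough sanity check: aggregate compression throughput of 2x E5-2637 (8 cores
# total) vs. an assumed sequential rate for the storage feeding it.
# All figures are assumptions, not measurements.
cores = 8
per_core_mb_s = {"lz4 (assumed)": 400, "zstd -3 (assumed)": 150}
storage_mb_s = 1000          # assumed sequential rate of the array

for codec, rate in per_core_mb_s.items():
    total = cores * rate
    bottleneck = "CPU" if total < storage_mb_s else "storage"
    print(f"{codec}: ~{total} MB/s aggregate -> bottleneck: {bottleneck}")
```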
If you want to have a gaming build, get one and put it in the Phantom. Something like your i7-6700k is a good fit. It will be faster in games than the workstation.
For the workstation, don't bother with the 820. Get a decent pedestal case that's designed for dual socket boards. Many 4U cases are designed to be used in either a standard tower configuration or on a rack. For the tower configuration, you put the included feet on it. For a rack, you put the included rails on it.
For the amount of money you're putting into this, you should really allocate it to the parts that matter. This does not include compute server CPUs, registered RAM, M.2 HBAs, consumer SSDs, SMR HDDs, consumer cases, or consumer PSUs.
Lastly, none of the workloads you've described could leverage the CPU resources you've listed, with the possible exception of "data management", and that would have to be an incredibly unusual workload to require that much compute. Those CPUs don't work well for video editing, and will perform only marginally better than an i7 6900k. Dual socket builds don't edit videos well. They can encode videos very well, though, especially when paired with a mid-range GPU. For gaming, you'll get performance on par with an i5 6500. For virtualization, you'd need several users to properly leverage those CPUs.