Best use of Storage Assets for 10GbE Workstation + Render/Media Server Workflow

Bropa

Prominent
Apr 29, 2017
(Long, robust & complex Video/Rendering thread, sorry in advance!)

Hello everyone, thanks in advance for your help! I just built my dream office at home and went for the gusto with upgrading my 5-year-old media server too. I've now got an abundance of storage drives and RAID controllers and I'm not sure of the best use for them. I am quite technical, but I'd love fresh eyes/ears on this from fellow techies.

A) Current Workstation BoM -

Intel Core i7-5930K with a Samsung SM951 PCIe SSD (non-NVMe version) as the OS drive, a Samsung 256GB 850 Pro (cache drive), and 4 x WD 750GB 2.5" Black HDDs in RAID 5 (working volume) for a 2.0TB usable drive, running on the onboard Intel controller of an Asus X99-A motherboard. (Not happy with the performance of this array at all.)

B) New Render/Media Server BoM (Just arrived!) -

Intel Core i7-7700K with an Asus Z170-A motherboard and a Samsung 960 Pro 512GB M.2 NVMe SSD.
Both workstation and server now have 10GbE SFP+ network cards and are connected to a Ubiquiti Edge Lite 48-port switch via 10GbE SFP+ for use of the server as a working volume for rendering and storage.

Old parts I'd like to utilize:

4 x WD Se 4TB HDDs (in RAID 5) running on an old Adaptec 5445 controller, but I just got an LSI 9261-8i controller which I want to put to use in the new server, as well as the Samsung 850 Pro 128GB SSD that I have kicking around. Lastly, I have a pair of Seagate 4TB enterprise HDDs running RAID 1 for important documents and files (that I don't intend to touch). And I also have a batch of 4 x WD RE4 2TB HDDs that are about 6 years old, but have only 4 years of 24/7 use on them.

Performance Goals & Expectations:

Server to provide a working volume for high-bitrate video production and rendering (Adobe AE, PP, Maya, etc.) and also a Plex media server, and of course file/printer sharing on my home network. I'd like to hit 1,000MB/s transfer speeds on that server with my storage arrays (to utilize the 10GbE network elements). Overall storage space could be sacrificed to achieve these goals; I'm open to input and guidance here.
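For context on that 1,000MB/s target, here is a quick back-of-the-envelope sketch of what a 10GbE link can actually carry. The 94% protocol-efficiency factor is an assumption, not a measurement:

```python
# Rough ceiling for a 10GbE link. The 94% efficiency factor is an assumed
# allowance for Ethernet/IP/TCP framing; real SMB throughput varies.
LINE_RATE_GBPS = 10          # 10GbE raw line rate, in gigabits per second
PROTOCOL_EFFICIENCY = 0.94   # assumed framing/TCP overhead factor

raw_mb_s = LINE_RATE_GBPS * 1000 / 8          # 1250 MB/s raw
usable_mb_s = raw_mb_s * PROTOCOL_EFFICIENCY  # ~1175 MB/s usable

print(f"raw: {raw_mb_s:.0f} MB/s, usable: ~{usable_mb_s:.0f} MB/s")
```

So a 1,000MB/s array would come close to saturating the link; much beyond ~1,175MB/s the network, not the storage, becomes the bottleneck.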

Some ideas I've considered:

I've heard RAID 10 for my server working volume is the way to go, and also that CacheCade is available for the LSI card to use a front-end SSD as cache (does that make sense?). How about the 4 x WD 750GB HDDs running onboard RAID 5 in my local workstation: can we eliminate them completely? Thoughts and feedback welcome, guys!

Edit: I realize I missed some critical elements. Both systems are running Windows 10 Pro 64-bit, and both have 32GB RAM. The chassis for the workstation is a Fractal Design R5; the chassis for the server is a 4U Norco with 9 x 3.5" HDD bays + 1 x 2.5" (non-hotswap).
 
Solution
The only way you can achieve what you need is with hardware RAID and 12 drives minimum.
Everyone keeps saying RAID 5/6 is slow and to aim for RAID 10. That is a waste.
With a proper IO-processor RAID card you can get over 1,200MB/s transfer rates.

We did a dual Thunderbolt 2 setup (12 drives in each T12-S6.TB2) and ended up with read/write at 2,600MB/s plus:
https://forums.macrumors.com/threads/2-6gb-s-2600mb-s-raid-array-transfer-rate-via-thunderbolt-2.1746263/

To utilize your 10GbE at its full potential, you need a source that can deliver.
You can find similar RAID boxes that can deliver more than 10Gb/s from DATOptic, Newegg or Amazon.
1. If you want performance, use RAID 0 or 10, or Windows Storage Spaces in a similar setup. You don't have redundancy with RAID 0, so when a drive fails you will suffer downtime while you build a new array from a backup.

2. Dump all the miscellaneous drives on eBay. Replace them with three 8TB 7,200RPM WD Red Pro, Gold, or HGST Ultrastar He8 drives. These 8TB helium-sealed drives have impressive performance, especially in the sequential operations you will be doing; much faster than those older and smaller drives. Use them in RAID 0 for maximum performance or RAID 5 for parity (or the similar Windows Storage Spaces options). You could also do RAID 10 with more drives if you want both performance and redundancy.

Even an 8TB 5,400RPM WD Red is as fast as a 4TB 7,200RPM Red Pro in sequential workloads and demolishes it in high queue-depth workloads.
http://www.tweaktown.com/reviews/7681/wd-red-8tb-helium-filled-wd80efzx-nas-hdd-review/index5.html
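As a rough cross-check on those options, here is a naive sequential-read estimator for the RAID levels discussed. The 180MB/s per-drive figure is an assumption for large 7,200RPM disks, and real arrays land well below these ideal ceilings once controller overhead kicks in:

```python
# Naive sequential-read ceilings for common RAID levels, ignoring
# controller overhead. 180 MB/s per drive is an assumed figure for
# 8TB 7,200RPM disks, not a measured one.
def raid_seq_read_mb_s(level: int, drives: int, per_drive_mb_s: int = 180) -> int:
    if level == 0:
        return drives * per_drive_mb_s           # stripe across all drives
    if level == 5:
        return (drives - 1) * per_drive_mb_s     # one drive's worth is parity
    if level == 10:
        return (drives // 2) * per_drive_mb_s    # sequential reads from one mirror half
    raise ValueError(f"unsupported RAID level: {level}")

for level, n in [(0, 3), (5, 3), (10, 8)]:
    print(f"RAID {level} with {n} drives: ~{raid_seq_read_mb_s(level, n)} MB/s ideal")
```

These are best-case stripe arithmetic only; as point 4 below notes, measured arrays come in noticeably lower.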

3. RAID or Storage Spaces: with the onboard Intel controller, you should get moderately better performance with RAID. However, if you want to increase the size, you have to build a new array. Storage Spaces gives you the option to add drives to the pool, then let it do its thing overnight to optimize the Storage Space.
https://foxdeploy.com/2015/10/30/windows-vs-intel-raid-performance-smackdown/

4. Your performance expectations are too high. Even with a RAID 10 array utilizing eight 8TB WD Red Pro or Seagate IronWolf drives, the best sequential 128KB read speeds are about 460MiB/s. Individual drives max out at about 180MiB/s, but the overhead of RAID results in some losses.
http://www.storagereview.com/seagate_nas_hdd_8tb_review
http://www.storagereview.com/wd_red_8tb_review

Artificial benchmarks may show much higher speeds than the reviews. Somewhat more real-world testing is a better standard to use.

5. You want to use enterprise/NAS drives for any RAID array. They utilize TLER (Time-Limited Error Recovery), which spends less time trying to recover a bad sector. This matters because a RAID controller may take a consumer drive with a bad sector offline when it stalls recovering that sector, thinking the drive is bad. TLER stops hammering on a bad sector after about 7 seconds; a non-TLER drive may go on for a minute trying to get data out of it. In most instances, if seven seconds is not enough time, nothing will be.

6. Do the RAID in your workstation for maximum performance. Even 10GbE will add some overhead and latency, reducing performance.

Edit: If you want capacity and performance, you could load up three 8TB drives in RAID 5 on the server, then on your workstation use dual 1TB Samsung 850 Evos in RAID 0 (or quad 500GB). Store your data on the server and move your current video projects to the SSD RAID 0 array. While RAID 0 isn't great for high IOPS on an SSD, it does get spectacular sequential performance.
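For a ballpark on those SSD stripes, here is the same kind of idealized arithmetic. The ~520MB/s per-drive figure is an assumed spec-sheet number for a SATA 850 Evo, and perfectly linear RAID 0 scaling is assumed, which real controllers won't quite hit:

```python
# Ideal sequential throughput for the suggested workstation SSD stripes.
# ~520 MB/s per SATA 850 Evo is an assumed spec-sheet figure, and RAID 0
# scaling is taken as perfectly linear here.
PER_SSD_MB_S = 520

dual_1tb_mb_s = 2 * PER_SSD_MB_S   # 2 x 1TB 850 Evo in RAID 0
quad_500_mb_s = 4 * PER_SSD_MB_S   # 4 x 500GB 850 Evo in RAID 0

print(f"dual 1TB: ~{dual_1tb_mb_s} MB/s, quad 500GB: ~{quad_500_mb_s} MB/s")
```

Either stripe comfortably exceeds what a single SATA SSD can do, and the quad stripe exceeds what a 10GbE link could carry anyway, so the dual-drive setup is a reasonable match for this workflow.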
 

FireWire2

Distinguished
 
Solution