Alright guys and gals, I've deployed a few large-scale storage solutions in the past, but I'm looking for ideas on a new one. I've got someone who needs a storage server that can scale past 500TB, which would normally not be a big deal, except they're asking that all the storage be one logical drive in Windows. I'm pondering possible ways to do this and here's where I'm at:
1) HBA and JBOD chassis, presented to Storage Spaces in Server 2016 as one large ReFS volume (rough PowerShell sketch at the end of the post). I've seen this done before, and it's supported and technically possible, but I've also seen that solution fail when Storage Spaces flakes out: super odd software issues, disk rebuild problems after a failure, etc. Basically everything you'd fear going wrong with a software-based solution (especially in Windows...) vs hardware RAID.
2) Hardware RAID. The issue here is drive group and logical drive limits on RAID controllers. For instance, on LSI controllers (which I typically swear by) you cap out at 32 HDDs per drive group. I'm certainly not going to stripe across 32 x 14TB drives, and even a RAID-6 across all 32 only gets me (32 - 2) x 14TB = 420TB usable, still short of the target.
3) QNAP ZFS NAS, presenting iSCSI to a Windows server. Won't work due to iSCSI LUN size limits.
4) Hardware RAID, with multiple RAID-6 (or 60) arrays presented to Server 2016, then striped together in Windows to create one large ReFS volume (see the sketch at the end of the post). This one intrigues me, but I'm not sure how Windows will react when it can't see the physical drives directly behind the RAID arrays. I'm curious if anyone's tried this.
5) Linux (in general). I think the issue here will be the application that's going to run on the server, which I believe is only supported on Windows.
Wondering if anyone has any other ideas.
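For anyone curious what I mean in #1, here's roughly how I'd build it. This is just a sketch from memory, not a tested config; the pool/volume names are placeholders, and it assumes the HBA presents every JBOD disk as poolable:

```powershell
# Grab every poolable disk exposed by the HBA/JBOD
$disks = Get-PhysicalDisk -CanPool $true

# One big pool across the whole JBOD
New-StoragePool -FriendlyName "JbodPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Dual-parity virtual disk (roughly RAID-6-like resiliency) using all capacity
New-VirtualDisk -StoragePoolFriendlyName "JbodPool" -FriendlyName "Archive" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -UseMaximumSize

# Bring it online as one big ReFS volume
Get-VirtualDisk -FriendlyName "Archive" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "Archive"
```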
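And a variation on #4 I'd probably test before dynamic-disk striping in Disk Management: drop the RAID-6 LUNs into a Storage Spaces pool with Simple (stripe-only) resiliency, since the controller is already handling parity. Again just a sketch with placeholder names, assuming each LUN shows up to Windows as a poolable disk:

```powershell
# Each hardware RAID-6 LUN appears to Windows as a single "physical" disk
$luns = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "LunPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $luns

# Simple = striping only; redundancy comes from the RAID-6 underneath
New-VirtualDisk -StoragePoolFriendlyName "LunPool" -FriendlyName "BigStripe" `
    -ResiliencySettingName Simple -NumberOfColumns $luns.Count -UseMaximumSize

Get-VirtualDisk -FriendlyName "BigStripe" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "Data"
```

The obvious caveat is that Storage Spaces can't see past the controller here, so disk failures and rebuilds are entirely the RAID card's problem, and if a whole LUN dies the stripe goes with it.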