somebodyspecial
bikerepairman1: I for one am not very fond of high-capacity disks. When the array needs to rebuild, it takes a long time. (I had my 7x 1TB RAID6 array + 1x hot spare rebuild due to a disk failure.)

Rookie_MIB: No kidding. I had a 5x 2TB array lose a drive (about a month after the three-year warranty ran out, grumble grumble) and it took about 40 hours to rebuild. Not a big fan of that.

bikerepairman1: You got my point. Now imagine you have 5x 8TB in the array. That rebuild would take what, 140-160 hours? That's nearly a week, on a datacenter array that most likely holds mission-critical data. Have fun selling that.
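For a rough sense of where that 140-160 hour figure comes from, here is a back-of-the-envelope sketch in Python. The effective rebuild rate is inferred from the 2 TB / ~40 hour data point above, not measured; it is only an assumption for illustration.

```python
def rebuild_hours(disk_tb: float, rate_mb_s: float) -> float:
    """Estimate hours to rebuild one failed disk at a sustained rate (MB/s)."""
    disk_mb = disk_tb * 1_000_000          # TB -> MB (decimal, as drive vendors count)
    return disk_mb / rate_mb_s / 3600      # MB / (MB/s) = seconds, then -> hours

# The 2 TB / ~40 hour figure above implies roughly 14 MB/s of effective rebuild
# throughput (rebuilds run far below raw disk speed because the array keeps
# serving I/O while it rebuilds).
implied_rate_mb_s = 2 * 1_000_000 / (40 * 3600)   # ~13.9 MB/s

# At that same effective rate, an 8 TB disk takes roughly four times as long.
print(f"implied rebuild rate: {implied_rate_mb_s:.1f} MB/s")
print(f"estimated 8 TB rebuild: ~{rebuild_hours(8, implied_rate_mb_s):.0f} hours")  # ~160
```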
Mission-critical data of this type has failover/redundancy. The worst that happens is that things slow down on another array for a while; load-balanced properly, not much happens at all. You are not DOWN. If you designed your system correctly you might not even slow down much, and only at certain times of day (never design to run fully loaded all the time). If extra capacity/bandwidth is built in, other arrays just take the load for a while, with little impact beyond slightly higher response times at peak.

Places like Amazon, Facebook, Google, Microsoft, etc. will go large just to manage fewer things with less power/heat, for all the same reasons you go with virtual servers. You would rather manage 1 million drives than, say, 4 million, with all the heat and watts those create. I would even say it's smart for almost all cases other than really small businesses that don't have the money to properly prepare for a drive failure. The bills to power and cool the servers are more than the servers themselves, so larger drives lower the bills for almost everyone (rough numbers sketched below).
For home users it's probably a different story, but for huge enterprise this is a no-brainer. You can't use home-user experience as a guideline for enterprise 😉
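A purely illustrative sketch of the drive-count/power point above. Every number here is made up: the 8 W per-drive draw and the 8-million-TB fleet target are assumptions chosen only to mirror the 1-million-vs-4-million-drive comparison.

```python
# Illustrative only: hypothetical per-drive power and fleet capacity.
WATTS_PER_DRIVE = 8.0            # assumed average draw per spinning drive
FLEET_CAPACITY_TB = 8_000_000    # hypothetical raw-capacity target

def fleet_footprint(drive_tb: float) -> tuple[float, float]:
    """Return (drive count, kW just to spin the disks) for one drive size."""
    drives = FLEET_CAPACITY_TB / drive_tb
    return drives, drives * WATTS_PER_DRIVE / 1000

for size_tb in (2, 8):
    count, kw = fleet_footprint(size_tb)
    print(f"{size_tb} TB drives: {count:,.0f} drives, ~{kw:,.0f} kW to spin them")

# 2 TB drives: 4,000,000 drives, ~32,000 kW
# 8 TB drives: 1,000,000 drives, ~8,000 kW
# ...and cooling load scales with that power draw on top.
```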