First, the hardware:
Gigabyte GA-X58A-UD3R Motherboard
Intel i7 980x
5x 2TB Seagate Barracuda XT hard drives (AHCI mode)
12GB G.Skill RAM
OS: Win7 x64 Pro
I have these 5 brand-new disks in a GPT RAID 5 array (yeah, I know the potential consequences of a RAID 5 array this large, but I like this option best), intended as a single large data drive.
However, the performance so far has been abysmal. The array had been initializing 24 hours a day for a little over a week and was at 98% complete when, while I was trying some new software, the system crashed and reset it back to 0%.
Anyway, file transfers to the array while it's initializing average ~9 MB/s, when each disk alone is supposed to sustain ~138 MB/s. I started looking into it and figured the cause was that IRST showed the write-back cache as Disabled and wouldn't let me select Enable (the option was grayed out), even though Device Manager showed it as enabled for the volume. So I disabled it in Device Manager and restarted, then re-enabled it there and restarted again. IRST now actually shows the write-back cache as Enabled, but there has been no improvement.
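In case it helps anyone reproduce my numbers, this is roughly the kind of sequential-write test behind that ~9 MB/s figure (a rough sketch, not my exact tool; the target path is just a placeholder for a file on the array):

```python
import os
import time

def measure_write_mb_s(path, total_mb=256, chunk_mb=4):
    """Write total_mb of data to path in chunk_mb chunks and return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so caching doesn't hide the real rate
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return total_mb / elapsed

# Example: measure_write_mb_s("E:/bench_tmp.bin")  # path on the RAID volume
```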
So I'm really hoping someone can explain my problem:
- Is it slow only because it's still initializing, or should I expect similarly slow speeds once it's done?
- Is it due to poor RAID 5 performance of the ICH10R?
- Is it because these drives are SATA 6.0Gb/s but are connected to SATA 3.0Gb/s connections on the motherboard?
- Or is it still related to the write-back cache? (I'm still wondering because this post sounded similar to my problem, in that the poster was getting 5 MB/s until enabling write-back cache raised it to 80 MB/s.)
Thanks for any advice. If no one knows, I'll just break the RAID array and figure something else out.