Here is the setup...
Supermicro X7SBA motherboard
2 GB ECC memory
Xeon 3065 dual core
Pioneer DVD±RW (SATA)
Promise FastTrak TX4310 RAID controller
Supermicro SC743 case w/ hot-swap bays
dual gigabit LAN on a gigabit network
Windows Server 2003 R2 SP1 - all updates
I am running the following HDD arrays; all use SATA Seagate drives:
Onboard controller (ICH9R):
3 x 160 GB drives in RAID 5 (boot)
2 x 750 GB drives in RAID 1 (mirrored)
Promise controller:
3 x 250 GB drives in RAID 5
This is what is happening. I use the server as a file/print server running Server 2003 R2. I just built this system and the throughput is horrible. When I plug in an external USB drive to transfer files from my old server, it chokes. It takes 30-40 minutes to transfer a few gigs, maybe ~10-15 GB. I had to quit because it was so terribly slow, and when I transfer in Windows, the estimated transfer time fluctuates wildly... it will read 24 minutes, then 60, then 40, then 75. It takes forever, way too slow.
I also automate nightly backups from my two workstations to this system. Both the server and the workstations have gigabit NICs, and I use a gigabit switch. I was transferring about 8 gigs and it was taking 26-30 HOURS. My old setup could transfer about 75 gigs of data over a 100 Mb/s network to my old server (Athlon 2600) in about 4 to 4.5 hours.
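To put some numbers on how bad this is, here is a rough back-of-the-envelope rate comparison using the figures above (just a quick Python sketch; it treats a "gig" as 1024 MB and uses the best-case times I quoted):

```python
# Effective throughput comparison: new gigabit setup vs. old 100 Mb/s setup.
# Figures are the ones quoted above; "gig" is approximated as 1024 MB.

def rate_mb_per_s(gigabytes, hours):
    """Effective sequential throughput in MB/s for a transfer."""
    return gigabytes * 1024 / (hours * 3600)

new_server = rate_mb_per_s(8, 26)    # 8 gigs in 26 hours (best case)
old_server = rate_mb_per_s(75, 4.5)  # 75 gigs in 4.5 hours over 100 Mb/s

print(f"new gigabit setup:  {new_server:.2f} MB/s")
print(f"old 100 Mb/s setup: {old_server:.2f} MB/s")
print(f"slowdown factor:    {old_server / new_server:.0f}x")
```

That works out to under 0.1 MB/s on the new box versus almost 5 MB/s on the old one, i.e. the "upgraded" gigabit setup is running roughly 50x slower than an old Athlon on fast ethernet. A gigabit link should sustain well over 50 MB/s in practice, so something is clearly broken.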
Something is terribly wrong here. I have had the system running for only a few days. I am thinking my problems stem from choking the ICH9R chipset, which (from my understanding) handles the USB ports, the SATA ports, and possibly some of the PCI slots. Even the RAID 5 array on the Promise controller stinks; transferring to those drives is also slow, though not quite as bad. I haven't tested that array as much as the others, but I do know it is not on par.
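One way to check whether the ICH9R (or the Promise card) is really the bottleneck is to take the network and USB out of the picture entirely and time a plain sequential write to each array locally. A rough Python sketch of such a test (the file path and sizes are placeholders; run it once with a target on each array and compare):

```python
# Hypothetical local disk write test: times a sequential write to the given
# path and reports MB/s, so disk throughput can be measured independently of
# the network or USB. Run once per array (boot RAID 5, mirror, Promise RAID 5).
import os
import sys
import time

def write_test(path, size_mb=1024, chunk_mb=8):
    """Sequentially write size_mb of data to `path` and return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache to disk
    elapsed = time.time() - start
    os.remove(path)  # clean up the test file
    return size_mb / elapsed

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "testfile.bin"
    print(f"{write_test(target):.1f} MB/s sequential write to {target}")
```

If a three-drive SATA RAID 5 only manages single-digit MB/s locally, the problem is in the disk subsystem (controller, driver, or RAID 5 parity writes in software), not the network; if local writes look healthy, the NIC/switch side deserves the scrutiny instead.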
I am open to buying a real hardware RAID card, but I don't want to spend $500+ if it isn't the solution. I am looking at something like a 3ware 9550SXU-8LP. I would like a card that supports three arrays, at least for now; at some point I might consolidate arrays as I purchase bigger drives, but not yet.
I would appreciate suggestions on the throughput issues and possible RAID cards. I realize now I should not have tried to build my own server, or at least should have done a little more research before starting. I'm into this project for too much $$ at this point to walk away, but I don't want to keep feeding it money if it won't help. As it stands, this server is useless.
I really appreciate any help.