Reader's Voice: Building Your Own File Server

jeffunit,

I do understand where you are coming from, and I do appreciate that your spec builds a very decent 5TB fileserver at a great price. I believe that many enthusiasts build fileservers with spare parts, simply because, in a small network, a Quad Core Extreme is not really a better file-sharing device than a repurposed P3.

Respectfully, though, I must disagree with your build.

I believe there are three primary destinations of file servers: home, small office, and corporate (server room). I disagree with your build for any of those markets.

My main concerns for a home server are noise, size, and power. I don't think most people will want a 6-drive full tower just to provide storage; the noise level, once you factor in fans, is horrible. I would prefer a small ATX or SFF chassis with 1-2 very large hard drives, and eSATA to add more storage if necessary.

A small office without a proper server room suffers from similar problems: someone who tucks a server into a (literal) closet can't generate too much heat. Cost is less of an issue, and other factors, such as serviceability, matter more. In this case, I would suggest a smaller chassis, fewer hard drives, and at least one spare hard drive. The likelihood of a hard disk failure in a multi-disk array, during the lifespan of the array, is high; have you ever tried to find a replacement Spinpoint F1 or RE2 after it's been discontinued?

I have rarely seen a small office that requires as much storage as you suggest. Usually, the purpose of a file server in a small office is centralized file storage (rather than space) and centralized backup.

An office with a properly air conditioned and powered server room would almost certainly want a file server in a rackmount chassis. Here, noise is not a factor. However, usually, neither is price. Assuming that "value" is still important, I would suggest something like the Asus P3Q Premium motherboard with four onboard gigabit (1000BASE-T) ports, paired with an Adaptec SATA card.

Hard drive wise, I'm a big fan of the WD Green series. As I mentioned above, I can't overemphasize the importance of a spare hard disk drive that is safely tucked away -- it will allow you to swap in a drive, and send the old one in for repair.

In terms of operating system software, make sure you have all the file sharing protocols you need (an office, for example, might need IPX or NetBEUI), and that there's remote desktop. Make sure you know how to deal with replacing a failed RAID drive -- because it's likely. Have a plan for what you will do if the motherboard fails -- because that might happen too.

The worst thing that can happen? The server "dies" and you have no idea how to retrieve your 5,000 GB worth of stuff.
 
Here's my server, and they have it all wrong: you NEED a RAID card, because the number of times my motherboard has died is insane; they don't last more than two years, if that, without issues. With a RAID card you can drop it into any computer with a PCIe slot and be off. Simple and effective.

My server:
CPU - Celeron 430 (no need for CPU power; the RAID card does the parity work)
MB - Foxconn M7VMX-K LGA (cheap board with onboard video, a must)
RAM - 2x 1GB DDR2-800 (2GB is the most you need for 32-bit Windows with no apps running, and none ever should be)
RAID card - LSI 8344ELP, PCIe x4, 2x SAS = 8x SATA2 (running RAID 5 on 4x 750GB plus RAID 0 on 2x 320GB)
PSU - PC Power & Cooling 370W (a solid PSU matters more than a UPS; a cheapo brand like Raidmax will fry your computer)

Computer $150, RAID card $150 (love eBay), HDDs $500 (at the time).

For people who want to know the benchmarks: my RAID 5 hits 500MB/s read and 200MB/s write sustained, or 900MB/s read / 300MB/s write peak (not bad for RAID 5). I set it up with 4MB blocks because of all the video editing I'm doing (something that also needs to be talked about). One file server can't handle every workload well, because you need to set the block (stripe) size according to what the server is mostly going to be used for; see the sketch below.
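
For anyone wondering why the chunk size matters, here is a rough, hypothetical sketch of the arithmetic (the numbers are illustrative, not measurements from the card above). A RAID 5 write smaller than a full stripe forces a read-modify-write, which is why large chunks suit large sequential video files but hurt small random writes.

[code]
# Rough illustration of why the RAID 5 chunk (stripe unit) size should match
# the workload. Hypothetical numbers, not benchmarks of any specific card.

def full_stripe_bytes(chunk_bytes, total_disks):
    """A RAID 5 full stripe is (n - 1) data chunks plus one parity chunk."""
    return chunk_bytes * (total_disks - 1)

def needs_read_modify_write(write_bytes, chunk_bytes, total_disks):
    """Writes smaller than a full stripe must read old data/parity first."""
    return write_bytes < full_stripe_bytes(chunk_bytes, total_disks)

disks = 4                                        # e.g. 4x 750GB in RAID 5
for chunk in (64 * 1024, 4 * 1024 * 1024):       # 64KB vs 4MB chunks
    stripe = full_stripe_bytes(chunk, disks)
    print(f"chunk {chunk // 1024}KB -> full stripe {stripe // 1024}KB")
    for io in (64 * 1024, 100 * 1024 * 1024):    # small write vs big video write
        rmw = needs_read_modify_write(io, chunk, disks)
        print(f"  {io // 1024}KB write -> read-modify-write: {rmw}")
[/code]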
 
I recently had the need to build a file server for myself. I used a Dell PowerEdge 840 with a 2.0 GHz quad core and 2 GB of RAM, which I picked up from Craigslist for $200 from a small business that went bankrupt, plus four 1TB SATA drives from Newegg for $80 each. The Dell came with a 4-port PERC 6 SATA/SAS RAID card. I added a 40 GB IDE drive that I had lying around as the OS drive. The Dell came with Windows Server 2003 R2; however, I chose to use Openfiler. Total price was around $600 and about two hours of my time.

Anyone building a file server should seriously look at Openfiler. It has web-based management, is very easy to install and configure, and works on just about anything. The OS supports LDAP, Samba/CIFS, NFS, iSCSI, FTP, replication, and a ton of other features. The only thing missing that I would like to have is block-level data de-duplication like a Data Domain storage appliance. Other than that, I am very impressed with Openfiler.
 
[citation][nom]talys[/nom]Respectfully, though, I must disagree with your build.I believe there are three primary destinations of file servers: home, small office, and corporate (server room). I disagree with your build for any of those markets.

My main concern for a home server is noise, size, and power. I don't think most people will want a 6-drive full tower just to give storage; the noise level, once you factor in fans, is horrible.[/citation]

I agree: if you can solve your problem without RAID 5 and with a one- or two-disk solution, then you should. However, if you need more storage, speed, reliability, or flexibility, then a full tower with lots of hard drives has a place. It is certainly not a solution for everyone.

The noise isn't a problem. Most good cases have room for multiple 120mm fans. If you were building a 1U or 2U fileserver, then noise would certainly be a problem. Of course, my fileservers aren't totally silent either. If you really require low noise, then you will have to optimize for that, which means low power processors, efficient hard drives, sound deadening material and the like.
 
[citation][nom]bravesirrobin[/nom]I've been thinking on and off about building my own NAS for around a year now. While this article is a decent overview of how Jeff builds his NAS's, I also find it dancing with vagueness as I'm trying to narrow my parts search.[/citation]

He was vague because he built the file server from old parts, and you can use just about any motherboard with at least two PCI, PCI-X, or PCI Express slots as the base for your file server.

[citation]Are you really suggesting we use PCI-X server motherboards? Why? (Besides the fact that their bandwidth is separate from normal PCI lanes.) PCI Express has that same upside, and is much more available in a common motherboard.[/citation]

He wanted to make a *good* file server that has plenty of bandwidth, multiple CPU cores, and ECC memory. 32-bit, 33 MHz PCI slots on typical desktop motherboards cannot carry all that much bandwidth; PCI Express, 64-bit PCI, and PCI-X are much better (rough numbers below). Sure, you can buy newer parts that have PCI Express slots, have CPUs/chipsets that support ECC memory, and support dual-core CPUs. Or you could pick up an old dual-socket server from eBay for cheap, which will have PCI-X or 64-bit, 66 MHz PCI rather than 32-bit, 33 MHz PCI. He seemed to be much more of the "pick up an old server" kind of person, so that's where PCI-X came into the picture.
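
To put rough numbers on those buses (theoretical peaks before protocol overhead, and shared across every device on the bus for plain PCI and PCI-X):

[code]
# Theoretical peak bandwidth of the buses being compared. PCI and PCI-X are
# shared buses; PCI Express 1.x bandwidth is per direction, per link.
buses = {
    "PCI 32-bit / 33 MHz":    32 / 8 * 33e6,   # ~133 MB/s, shared by all PCI devices
    "PCI 64-bit / 66 MHz":    64 / 8 * 66e6,   # ~533 MB/s
    "PCI-X 64-bit / 133 MHz": 64 / 8 * 133e6,  # ~1066 MB/s
    "PCIe 1.x x4 link":       4 * 250e6,       # ~1000 MB/s each direction
}
for name, bw in buses.items():
    print(f"{name}: ~{bw / 1e6:.0f} MB/s")
[/code]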

[citation]You explain the basic difference between fakeRAID and "real RAID" adequately, but why should I purchase a controller card at all? Motherboards have about six SATA ports, which is enough for your rig on page five.[/citation]

Many southbridge SATA controllers stink at handling the bandwidth that a RAID 5/RAID 6 setup can pull. About the only decent southbridges out there for SATA are AMD's SB700/710/750 and Intel's ICH7 and newer. NVIDIA's are terrible; my desktop's nForce4 bottlenecks at 20 MB/s on RAID 5 writes and 60 MB/s on reads, but a RAID-less PCIe x4 HighPoint 2310 controller wrung the hard drives out for 130 MB/s reads and 60 MB/s writes, which is exactly what you would expect the HDDs I used to do.

[citation]Since your builds are dual-CPU server machines to handle parity and RAID building, am I to assume you're not using a "real RAID" card that does the XOR calculations sans CPU?[/citation]

Correct. He didn't come out and specifically say it, but he was almost certainly using the Linux "md" OS-based RAID. That is actually a good approach today as the "hardware" RAID cards are ridiculously expensive and multi-CPU setups can handle the XOR calculations much better than the little I/O processor on the hardware RAID cards. Plus, you can move the array to any other machine running Linux if your motherboard or disk controller dies rather than having to buy an identical hardware RAID card to access your data.
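
As a sketch of what "moving the array" looks like in practice (device names and mount point below are hypothetical; md reads its own superblocks off the member disks, so no controller-specific metadata is involved):

[code]
# Hypothetical sketch of re-activating a Linux md array after moving its disks
# to another machine. Run as root; your device names and paths will differ.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let mdadm scan all block devices for md superblocks and assemble what it finds.
run(["mdadm", "--assemble", "--scan"])

# Check what came up, then mount it wherever you like.
run(["mdadm", "--detail", "/dev/md0"])
run(["mount", "/dev/md0", "/srv/storage"])
[/code]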

[citation](HBA = Host Bus Adapter?)[/citation]

Correct.

[citation]Also, why must your RAID cards support JBOD? You seem to prefer a RAID 5/6 setup. You lost me COMPLETELY there, unless you want to JBOD your OS disk and have the rest in a RAID? In that case, can't you just plug your OS disk into a motherboard SATA port and the rest of the drives into the controller?[/citation]

He is using Linux md RAID, which requires all of the disks in the array to be visible as individual drives to the OS at boot. The OS starts the array during the boot sequence after all of the disks come online. If your controller does not support JBOD, the disks don't show up in the OS and you can't use them in an array. The author wasn't using a hardware-based RAID card, so setting up a RAID using the card's BIOS tools would create a "fake RAID" that is also software-based (just like the "motherboard RAID" that almost all newer motherboards support). These fake RAIDs are not all that well supported under Linux and carry the disadvantages of both a fake and a hardware RAID controller, so the author wisely decided not to go this route.

[citation]And about the CPU: do I really need two of them? You advise "a slow, cheap Phenom II", yet the entire story praises a board hosting two CPUs. Do I need one or two of these Phenoms -- isn't a nice quad core better than two separate dual core chips in terms of price and heat?[/citation]

The author is mostly talking about older hardware. His recommendation is to have at least two CPU cores available. On old hardware, that means two separate CPUs, since dual-core CPUs didn't exist yet. A newer machine with a single dual-core or better CPU would work just as well, though. You would only need one Phenom II, as all Phenom IIs have at least two cores.

[citation]What if I used a real RAID card to offload the calculations? Then I could use just one dual core chip, right? Or even a nice Conroe-L or Athlon single core?[/citation]

If you did that, you would pay $400+ just for the RAID card, which is far more than getting a new motherboard with a good SATA controller and a nice dual-core CPU. That is why he didn't mention using a hardware RAID card. To tell the truth, if you are using remotely modern CPUs, a single core will work well if your array isn't huge, but the price differential between a Conroe-L and a Celeron Dual Core, or a Sempron and a cheap Athlon 64 X2, is a couple of cups of coffee at Starbucks, so I'd go with the dual-core unit.

[citation]Finally, no mention of the FreeNAS operating system? I've heard about installing that on a CF reader so I wouldn't need an extra hard drive to store the OS. Is that better/worse than using "any recent Linux" distro? I'm no Linux genius so I was hoping an OS that's tailored to hosting a NAS would help me out instead of learning how to bend a full blown Linux OS to serve my NAS needs. This article didn't really answer any of my first-build NAS questions.[/citation]

You can build a file server using any Linux distribution. You would just need to be able to partition the disks yourself, set up md, and then define the mount point of the array if the distribution doesn't offer any specific wizards to help you through the process. If you like the tools FreeNAS gives you to work with, then by all means use it. I personally use Debian on my file server as that's what I use on my other machines. But any Linux distribution would end up working just as well as any other.
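
If you go the plain-distro route, the "partition, set up md, define the mount point" steps he describes boil down to a handful of commands. Here is a minimal sketch, assuming four single-partition data disks at /dev/sdb through /dev/sde and an ext4 filesystem; adjust devices, RAID level, and paths to your own setup.

[code]
# Minimal sketch of creating a Linux md RAID 5 array and mounting it.
# Assumes /dev/sdb1../dev/sde1 already exist. Run as root; destructive!
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

members = ["/dev/sdb1", "/dev/sdc1", "/dev/sdd1", "/dev/sde1"]

# Create a 4-disk RAID 5 array.
run(["mdadm", "--create", "/dev/md0", "--level=5",
     f"--raid-devices={len(members)}", *members])

# Put a filesystem on it and mount it.
run(["mkfs.ext4", "/dev/md0"])
run(["mkdir", "-p", "/srv/storage"])
run(["mount", "/dev/md0", "/srv/storage"])

# Record the array so it is assembled automatically at boot
# (the config path below is the Debian convention; other distros differ).
scan = subprocess.run(["mdadm", "--detail", "--scan"],
                      capture_output=True, text=True, check=True).stdout
with open("/etc/mdadm/mdadm.conf", "a") as conf:
    conf.write(scan)
[/code]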

You can install the file server's OS on a CF card or even have it boot over LAN via PXE. There are a lot of ways to do this and it is a little outside of the scope of this article to cover all of them.

[citation]Thanks for the tip about ECC memory, though. I'll do some price comparisons with those modules.[/citation]

They are more expensive and not all motherboards/CPUs can take them. You will want to look very carefully at your motherboard's manual to see if it can take ECC memory.

You know, somebody should start a thread on this in Storage. I have built a file server using old parts and I could give a lot more information than the author did.
 
[citation][nom]wuzy[/nom]Yet again why is this article written so unprofessionally? (by an author I've never heard of) Any given facts or numbers are just so vague! It's vague because the author has no real technical knowledge behind this article and are basing mainly on experience instead. That is not good journalism for tech sites.[/citation]

There IS something to be said for someone speaking from experience instead of just technical knowledge. I have heard MANY, MANY techs be wrong about a given resolution to a problem, and the reason is a lack of experience. School did NOT prepare me for the job I'm in now; being thrown into my current job is what got me educated and ready for real-world problems. So if this guy is speaking from experience (it's obvious he also has technical knowledge), then I'm all for it and listening. A guy speaking from a book means very little to me compared to someone speaking from experience.
 
I have 13 drives in my Cooler Master server and I run XP using two Promise RAID arrays and one onboard RAID array, with a Pentium 4 2.4 GHz processor and an 8KNXP mobo. It consists of one RAID 5 and three RAID 1 arrays. I would NOT make the leap to Linux, because I bet there would be a lot of stress involved in getting a Linux box to the level where I have my Windows box. I also have a Media Center PC in my living room that I would have to think about, too. I fix PCs for a living, so I'm not about to open a Linux can of worms at home.
 
Yes, good RAID cards from LSI, 3ware, etc. cost $400+, but you can get them on eBay for $100; my $550 Newegg RAID card cost $170 on eBay, new and sealed. People also need to realize that "cheap" $150 price-point RAID cards like HighPoint offload ALL of the work to the CPU, and the onboard RAID chipsets do the same. This is why you need either a "cheap" quad-core, or a single core plus a "real" RAID card.

Someone please pay me to write an article worthy of reading!
 
RAID 5 effectively maxes out around 12TB. The unrecoverable read error (URE) rate is one per 10^14 bits read on regular SATA drives, which means that, on average, once every 100,000,000,000,000 bits you get an unrecoverable read. If you have a failed drive, it MAY be impossible to rebuild your array because of an unrecoverable read; a rough calculation is below.
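
To see where a figure like 12TB comes from, here is a back-of-the-envelope estimate, assuming the quoted one-error-per-10^14-bits spec and that a rebuild has to read every remaining bit cleanly:

[code]
# Rough odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array, assuming the consumer-SATA spec of one
# URE per 1e14 bits read and independent errors. Real drives vary widely.
import math

URE_RATE_BITS = 1e14  # bits read per unrecoverable error (typical spec)

def rebuild_failure_probability(surviving_capacity_tb):
    bits_to_read = surviving_capacity_tb * 1e12 * 8
    return 1 - math.exp(-bits_to_read / URE_RATE_BITS)

for tb in (2, 6, 12):
    p = rebuild_failure_probability(tb)
    print(f"reading {tb} TB during a rebuild -> ~{p:.0%} chance of a URE")
[/code]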
 
I've got a few "file servers."

The first is my Home Theater PC that's connected to my projector. It has 1.5TB in a hardware RAID 5.

Then I have a 2TB NAS that's RAID1 for my important stuff and music collection.

Then I have another old P4 with eight old HDDs that I felt like re-using. Those are all in various RAID 0 and JBOD arrays off the integrated SATA and IDE controllers plus add-in SATA and IDE controllers; about 1.5TB total. That server only stores rips that can be reproduced easily.

I then have two PS3s and use PS3 Media Server on my personal box to give a central access point for all of my servers' data and to do transcoding. It's a quad-core AMD with another 1TB of space in RAID 0.

The NAS supports iTunes for direct access from portable devices. The PS3s and the Media Center PC all have consolidated interfaces for everything on my network.

The NAS is so slow it only works for music streaming, though I can easily get three streams at a time off it.

With my computer transcoding, I can get three 1080p streams AT ONCE to the PS3s and the HT machine. I have dual gigabit NICs teamed to my Cisco switch, so bandwidth is no issue. :)

Using purpose-built HT NAS units is a complete waste of money unless you are completely technically incompetent.
 
On a related matter, what happens to a RAID array when the drives aren't a perfect match?
Say the original array was built with four identical drives (in RAID 5) and, five years from now, a drive fails. There is no way you'd be able to get the exact same model of drive with the same firmware. Would sticking a more modern drive in the array affect the performance of the whole array? Will it even work at all, or will I need to build an entirely new box to copy the now-unprotected data off?
By the same token, I have 3 or 4 TB data drives kicking around various machines in the office, none of which will be 'identical'. Can I mix them all together in one machine and get decent drive performance?
 
[citation][nom]BartmanNZ[/nom]On a related matter, What happens to a RAID array when the drives aren't a perfect match? Say the original array was built with 4 identical drives (in raid 5) and 5 years from now, a drive fails.[/citation]

Linux doesn't care, as long as the replacement drive is at least as big as the original drive, and you can mix and match different drives with no problems. I once bought four Maxtor 500GB drives, and within a month three of the four failed. Of course, Maxtor replaced them. Using different drives would reduce the chance of getting a bad batch of drives, as would using slightly used drives that are known good.
 
BartmanNZ, nothing HAS to match other than the amount of space required, but let me explain. Say you have 3x 120GB drives from 5 years ago, one fails, and you need a replacement. If you REALLY don't want to set up a new RAID, since the array would still be working, you can put in ANY hard drive with the same interface. Five years ago that mostly meant IDE and SCSI, so to keep the RAID going you would have to find one of those. But what I'm trying to get at is that even a 1TB IDE drive would only show up as 120GB in the RAID, i.e. the size of the smallest drive in the array; see the capacity sketch below.
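
Put another way, usable capacity is driven entirely by the smallest member. This is just the standard RAID 5 arithmetic, nothing controller-specific; the drive sizes below are made-up examples:

[code]
# Usable capacity of a RAID 5 array built from mismatched drives: every member
# contributes only as much space as the smallest drive, and one drive's worth
# of space goes to parity.
def raid5_usable_gb(drive_sizes_gb):
    smallest = min(drive_sizes_gb)
    return smallest * (len(drive_sizes_gb) - 1)

print(raid5_usable_gb([120, 120, 120]))   # 240 GB: three matched 120GB drives
print(raid5_usable_gb([120, 120, 1000]))  # still 240 GB: the 1TB drive is mostly wasted
[/code]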

As for performance, the array is only going to go as fast as the slowest drive, or worse. Really, the other concern is that the old drives could all die at any time, since you don't know how much wear and tear they have experienced so far.

Personally, unless the drives are new and identical, I would not trust RAID 0, 1, or 5; only RAID 6, since two drives can fail at once.
 
It's a shame few new motherboards today come with a PCI-X slot, or even 64-bit PCI. It seems a bit bizarre to me to see an expensive 'enthusiast' board come out with the usual 3 or 4 PCIe slots but a lowly 32-bit PCI slot alongside them.

Good SCSI RAID cards, and other PCI-X cards such as FC controllers, can be obtained second-hand very cheaply these days, yet new motherboards only seem to have 32-bit PCI at best. There are a few which do have PCI-X (mine does, an ASUS M2N32 WS Pro), but they're generally a fair bit more expensive as they're marketed as 'workstation' boards, although my ASUS board was a lot cheaper than many enthusiast/gamer boards yet offers the same overclocking facilities.

However, as someone said, it is certainly not hard to obtain a good controller card second-hand, e.g. an LSI 22320-R, and the same goes for PCIe: I won an LSI 320-2E PCIe card (Dell PERC variant) off eBay for a crazy low amount, and the results with just 4x SCSI disks are very nice.

Btw, that's also something worth mentioning: it's easy now to obtain used SCSI disks, especially 15K drives, though this is more relevant to a locally connected RAID, except when one is accessing a remote data store where access time can make a big difference when dealing with lots of small files.

Ian.

 
I run a Synology CS-407e CubeStation. I love it and would never go back to anything else. It's been running about a year without a hiccup, takes very little power, and I keep all shares in read-only mode unless I plan to put something else on the filer.
 
I have a home server. I'm currently running Windows Server 2008 RC2 on an ASRock X48TurboTwins motherboard with 2 GB of DDR2 memory and 2x 1TB Samsung SATA2 drives in RAID 0. With this configuration I can push up to 210MB/s burst and 90-120MB/s sustained transfer speeds (without jumbo frames) from my SSD machine to my server. And considering all the other computers in the house use a single rotary hard drive and manage 80MB/s sustained (if not less), what difference in transfer speed would jumbo frames make for me?
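
For context, a rough estimate of what jumbo frames buy on gigabit Ethernet, using the standard framing and TCP/IP overhead figures rather than measurements of this particular setup:

[code]
# Approximate TCP payload throughput on gigabit Ethernet with and without
# jumbo frames, counting Ethernet framing (preamble, SFD, header, FCS,
# interframe gap) plus IPv4/TCP headers. Ignores ACKs and driver overhead.
LINE_RATE_BYTES_PER_S = 125_000_000            # 1 Gbit/s on the wire

def tcp_goodput_mb_s(mtu):
    per_frame_overhead = 7 + 1 + 14 + 4 + 12   # preamble, SFD, Ethernet hdr, FCS, gap
    ip_tcp_headers = 20 + 20
    payload = mtu - ip_tcp_headers
    frame_on_wire = mtu + per_frame_overhead
    return LINE_RATE_BYTES_PER_S * payload / frame_on_wire / 1e6

print(f"MTU 1500: ~{tcp_goodput_mb_s(1500):.0f} MB/s")   # roughly 119 MB/s
print(f"MTU 9000: ~{tcp_goodput_mb_s(9000):.0f} MB/s")   # roughly 124 MB/s
[/code]

On paper, then, jumbo frames are worth only a few percent on gigabit; the single rotary drives on the other machines would remain the bottleneck.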
 
I have 22 disks in my oldest fileserver: two RAID 5 sets of 12 and 9 disks, plus one boot disk. Having them on a hardware RAID card would be very expensive, and quite pointless. When the motherboard dies, it is no more complicated to get the array up again than with a true hardware RAID card; I have done it many times now. And even though some newer RAID cards support spinning the disks down, not all of them do.
This is very important (imho) in a home, where you don't need the disks spinning all the time, generating heat and noise and consuming a lot of power. My oldest fileserver consumes about 95W with all disks except one sleeping, and about 300W with all of them up.

Unless performance is really important, a single core does the job just fine, say an old Athlon 64. But if one is looking for transfer rates of many hundreds of MB/s, one has to be a little bit more careful about the setup.

With all my servers, I have had trouble with SATA cables (especially when SATA was new), motherboards, northbridges overheating, a power supply breaking down, and I think at least one hard drive dying on me. You name it. But I have never lost one bit of data. :)
 
Wow, didn't realize I was reading slashdot.org....


Jeff was doing so well until we get to the last page and are met with a load of crap.

"...enlightened manufacturers..." Gimmie a break.

"However, I don't recommend Windows for several reasons. First, it is expensive. Windows Server 2008 costs start around $999. "

Well, it's nice he did his homework... not! Since we're talking about setting up a "home server", you'd think he'd mention Windows Home Server, which is made specifically for the home market; that would be fitting, since the very first sentence of this article starts off by talking about making a “personal file server” and not an enterprise server. Yes, Windows Home Server does cost more than free, but it is way cheaper than the $999 starting price of Windows Server 2008 that Mr Deifik quotes. The Newegg price for Windows Home Server is $99. That's way expensive, dude! http://www.newegg.com/Product/Product.aspx?Item=N82E16832116550
 
[citation][nom]tbhall77[/nom]Wow, didn't realize I was reading slashdot.org....Jeff was doing so well until we get to the last page and are met with a load of crap."...enlightened manufacturers..." Gimmie a break."[/citation]

By enlightened, I meant manufacturers who wished to maximize their sales and profits, by supporting more than one operating system. If you find that unenlightened, you are entitled to your opinion.
 
[citation][nom]tbhall77[/nom]Since we're talking about setting up a "home server" you'd think he'd mention Windows Home Server which is made specifically for the home market which would be fitting since the very first sentence of this article starts off with talking about making a “personal file server” and not an enterprise server. While yes, Windows Home Server does cost more than free, but it is way cheaper than the $999 starting price of Windows 2008 as Mr Deifik puts it. Newegg price for windows home server is $99. That's way expensive dude! http://www.newegg.com/Product/Prod [...] 6832116550[/citation]

True, Windows Home Server is cheaper. But it doesn't support *any* software RAID. Linux and BSD support all kinds of RAID, including 5 and 6. With FreeBSD you can even get RAID-Z and RAID-Z2 with ZFS. Personally, I don't like losing data when a single drive dies. Windows Home Server doesn't even support RAID 1, based on everything I have read. If you want software RAID support and Microsoft Windows, I think the solution is Windows Server 2008.
 
I use one 4GB CF card (on IDE) for the FreeNAS OS and 2x 250GB SATA drives in an Asus barebones case. It has worked like a charm for two years now.
 
What's all this talk of "free" operating systems? It's only free if your time is worth nothing.
 
That's a pointless comment. If I don't pay money for something, it's free. Arguing about one's time being worth something is meaningless; people waste more time in the pub, watching soaps, sitting on the toilet, etc. It's not that hard to learn how to use a different OS if one is used to Windows, and you might just learn a few things along the way.

Ian.

 
[citation][nom]jeffunit[/nom]True, Windows Home Server is cheaper. But it doesn't support *any* software raid. Linux and BSD support all kinds of RAID,including 5 and 6. With FreeBSD you can even get RAID-Z and RAID-Z2 with ZFS. Personally, I don't like losing data when a single drive dies. Windows Home Server doesn't even support RAID 1, based on everything I have read. If you want software RAID support and Microsoft Windows, I think the solution is Windows Server 2008.[/citation]


That's because it doesn't have it, by design. It protects files by share: it duplicates the files in a share across two disks, and those disks don't have to be the same size either. So you can have a server with 250GB, 300GB, 500GB, and 1TB drives and still have file-level protection. On top of that, you have Volume Shadow Copy snapshots, similar to the ZFS snapshots you mentioned in the article.

This isn't even getting into the Windows Home Server Connector software you load on your desktops, which does cluster-level backups of your desktops/laptops to the home server.
 