Synology DS412+ And Thecus N4800: Two NAS Devices With Atom D2700


tokfan

Distinguished
Jul 14, 2010
1
0
18,510
Could be interesting to compare the CPU load on the DS411+II when, e.g., copying large files at the NIC's maximum speed versus the DS412+ doing the same with both of its NICs connected and LAG activated.

Of course there won't be much improvement from adding more MHz when the old CPU is already capable of pushing one NIC to the max :)
 

cknobman

Distinguished
May 2, 2006
1,127
273
19,660
IDK, but this thing is over $600 with no HDDs and has a limited set of functionality.

I find it hard to shell out that much money when I went and built my own server with an Athlon II 3.2 GHz processor, 8 GB of RAM, 4 TB of HD space with RAID, and Windows 7 Ultimate for less than $600.

I know my server is less specialized than a dedicated NAS like the one reviewed here, but it performs all the NAS functionality I need, plus it can serve as an all-purpose computer, an HTPC, etc., etc. Sure, my server sucks more juice than these NAS devices, but that is not a huge concern for me.

Is there a big positive I am not seeing to spending more on a little NAS like this compared to just building my own server?
 

torque79

Distinguished
Jun 14, 2006
440
0
18,780
I wish there was some competition for Mediasonic in affordable basic 4-bay enclosures. These RAID NAS devices are so insanely overpriced just because they have some media sharing service and a basic RAID function. Give me a simple backup enclosure without FTP and media sharing functions that handles 4 drives and that I don't have to replace in 2-3 years because it only supports up to 3TB or 4TB hard drives. That would make me so happy!
 
G

Guest

Guest
The "big positive" about these is the time spent to get one up and running. If your time doesn't cost anything - sure, build your own and you'll probably get a better deal.
I have a DS411+. Time spent:
1. 4 hours of research on the net
2. 30 min to find where to order
3. About 1.5 hours for unpacking and setting everything up
Total: 6 hours
To build my own, I would probably spend more than a day just researching hardware. Building, installing, setting up all the services - this takes time. Time I'd rather spend with my family.
 

torque79

Distinguished
Jun 14, 2006
440
0
18,780
Plus, if it's being used as a backup, copying multiple TB of data over 1 Gbit takes insanely long. Even though this is bulky and heavy with 8 drives, I'd still carry it to my HTPC to back up all the media I have on there via eSATA. I don't know what I would do with a stationary PC just for hard drive storage; copying the data would just take too long. After the initial copy, regular syncing would not be so bad, I guess, but moving over a few TB of data would probably take a week at 100 Mbit network file transfer speeds.
 

cknobman

Distinguished
May 2, 2006
1,127
273
19,660
torque79, on a 1-gigabit network, copying large amounts of data does not take as long as you might think.

Heck, I copy ripped movies from my server to my PCs over the network and it takes less than 30 seconds to copy 3 gigabytes. My network easily sustains 85-100 megabytes per second.
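Both claims here are easy to sanity-check with back-of-envelope arithmetic. A rough sketch (the sustained rates are assumptions; real transfers lose some throughput to protocol overhead and drive limits):

```python
# Back-of-envelope transfer-time estimates. The sustained rates below are
# assumptions; real-world numbers vary with protocol overhead and drives.

def transfer_hours(data_tb, rate_mb_per_s):
    """Hours needed to move data_tb terabytes at rate_mb_per_s MB/s."""
    total_mb = data_tb * 1_000_000          # decimal units: 1 TB = 10^6 MB
    return total_mb / rate_mb_per_s / 3600

print(f"3 GB @ 100 MB/s : {3000 / 100:.0f} s")              # ~30 seconds
print(f"4 TB @ 110 MB/s : {transfer_hours(4, 110):.1f} h")  # gigabit LAN
print(f"4 TB @ 11 MB/s  : {transfer_hours(4, 11):.1f} h")   # 100 Mbit LAN
```

At ~11 MB/s (a saturated 100 Mbit link), 4 TB works out to roughly 101 hours, i.e. about four days of nonstop copying, which is in the ballpark of the "take a week" estimate above; at gigabit speeds it drops to around 10 hours.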
 

billj214

Distinguished
Jan 27, 2009
253
0
18,810
Would it be possible to run SSDs (or hybrid SSDs) instead? That is, if throughput is the primary concern, the increased processor speed might be relevant. Aside from cost, it would be interesting to see whether the processor becomes a bottleneck.
 

tpi2007

Distinguished
Dec 11, 2006
475
0
18,810
I have to wonder how relevant this review is when the Atom D2700 has been discontinued. You say it's being phased out, but according to the roadmap, it has effectively been discontinued.

You said it yourselves in May:

http://www.tomshardware.com/news/intel-cedarview-atom-d2700-processor,15516.html

The rather expensive netbook processor ($52) was introduced in September 2011 and apparently does not pull enough demand anymore to justify carrying the CPU.

According to Intel, final orders will be taken June 29 and final shipments will take place on September 28 of this year.

So, it wasn't popular enough to keep making, but it's still popular with "network storage vendors", like you say? Or is this just a move to sell the remaining NAS devices with the Atom D2700? I mean, there aren't any more being made; the final orders took place more than three months ago and the final shipments have already happened. So whatever stock of Atom D2700s they have for making NAS units won't be replenished.
 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
[citation][nom]DS411+ owner[/nom]The "big positive" about these is time spent to get one up and running. If your time doesn't cost anything - sure, build your own and you probably get a better deal. I have DS411+. Time spent:1. 4 hours for research on the net2. 30 min to find where to order3. About 1,5 hours for unpacking and setting everything up Total: 6hoursTo build my own - I would probably spend more than a day for just researching hardware. Building, installing, setting up all the services - this takes time. Time I rather spend with my family.[/citation]

Actually, that's just YOUR plus. I've honestly wondered why they cost so much myself, and I agree with the others. It just doesn't make much sense.

I have a 12TB RAID 6 server in my house with eight 2TB drives. I'm getting ready to build a new one, actually (out of room). But when I do, it's literally 30 seconds to pick all of the parts except the hard drives, which you'd have to research for either route. Do I try new 4TB drives, or the new 3TB units, or just build a known reliable second 2TB setup... etc. But the rest - board/RAM/CPU/even case (the case is already here, actually)/power supply/RAID controller, etc. - is already decided. It might tweak itself depending on when the purchase happens, but anyone who's on these websites is all too familiar with the tweaking of that part of the industry.

Simply said, after the HDD research is done, the parts will be purchased within 15 minutes. When they arrive, it takes me less than an hour, even from my wheelchair, to assemble a computer from raw parts. 10 minutes to install Linux or 30 minutes to install/update Windows, and that's it. Add the shares and start populating it for whatever purpose it needs. Plus, far more potential power, should I need it to do other things (often it does, actually; I use the current one to play videos in my office - an extra HTPC).

Now if this box cost, say, $250 or less, then I would be more attracted to it. But for what it is, I just can't see spending $600 on the empty box. The core of my setup would be a lot less. I can do the core for $200 with top-shelf components, $250 for RAID 5 instead of 10 (RAID 6 would be totally pointless with 4 drives). But that's $400 cheaper than the DS412+ from the same vendor. I just don't get it. I'd rather take my family out to dinner for a week. Plus, there's simply a lot less worry about them getting their 'firmware' right to run everything, versus the good old proven OSes that work as we expect them to.
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
When will NAS devices come with support for NTFS volumes? I keep seeing ext3, ext4, HFS+, but no NTFS?!?!
Come on, is it really that difficult to have an SoC controller that will mount a GPT/MBR NTFS disk?

I don't mind paying an extra $50-$100 to have support for NTFS. I think it's time to bring flexibility and mobility (removable storage) to the NAS arena.
 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
[citation][nom]milktea[/nom]When will NAS devices come support for NTFS volumes? I keep seeing ext3, ext4, HFS+; but no NTFS?!?!Come on, is it really that difficult to have an SOC controller that will mount a GPT/MBR NTFS disk?I don't mind paying an extra $50-$100 to have support for NTFS. I think it's time to bring flexibility and mobility (removable storage) to the NAS arena.[/citation]
There's actually a very good reason that they are ext3 or 4... They use a Linux kernel for their OS to keep the footprint tiny. NTFS is not native to Linux and would require more room for the OS. But really, there's no need for it either, so... I don't see any reason why it would even be necessary, or an advantage either...
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
[citation][nom]SirGCal[/nom]there's actually a very good reason that they are ext3 or 4... They use a Linux kernel for their OS to keep the footprint tiny. NTFS is not native to Linux and would require more room for the OS. But really there it's no need for it either so... I don't see Amy reason why that would even be necessary, or an advantage either...[/citation]
One good reason is that there are buyers who run only Windows machines, not Linux. And being able to pull a drive out of the NAS and mount it on Windows is a big, big bonus.
And with regard to the footprint, take a look at how small the GoFlex Home is. It is the only NAS out there that supports NTFS, and it is super tiny compared to any other NAS. The only disadvantage is that it supports only a single drive, so no RAID setup. But I can take any NTFS-formatted drive (and buy a male-to-female SATA cable) and plug it right into the GoFlex Home NAS. So I have my own hot-swap NAS.

Sometimes I wonder whether all those NAS manufacturers are anti-Microsoft?
 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
[citation][nom]milktea[/nom]One good reason is that there are buyers who runs only Windows machines, and not Linux. And being able to pull a drive out of the NAS and mount it on Windows is a big big bonus.And with regards to the footprint, take a look how small is GoFlex Home. It is the only NAS out there that supports NTFS. And it is super tiny compare to any NAS. The only disadvantage is that it only supports single drive, so no RAID setup. But I could take any NTFS formated drive (and buy a male-to-female SATA cable) and plug right into the GoFlex Home NAS. So I have my own hot swap NAS.Sometimes I wonder are all those NAS manufactures all anti-Microsoft?[/citation]

Well... no. While technically that is a network-accessed storage device, calling it a NAS would be a stretch in any terms. It's a network-accessed single hard drive. And FYI, hot-swapping does not apply to single-drive systems. Hot-swapping refers to the ability to remove a drive from a RAID array or other server system while the system is still in operation, without ever removing power, and with the system continually accessible. It can also refer to the ability to plug a drive into, say, your eSATA port, which requires the port to be in AHCI mode to recognize new drives repeatedly. But that's not really hot-swapping; that's just the ability to recognize a new drive with the computer still turned on. IDE detection won't do that repeatedly (once per boot). So if your eSATA is set to IDE, you'd have to reboot your box (well, that's the simplest way for the layman) to get it to see a new drive if you changed it over.

I can hot-swap my HDD's in my RAID array. This is useful because the people using the array never know a drive has been changed even if 10 people are using a file from it while I'm changing out the bad drive. That's the whole point. The system is never down and the drive cluster itself is still in use. Otherwise for single-drive systems, the drives obviously are not usable when removed/replaced so hot-swapping is really pointless.

And a final FYI - the GoFlex is NOT hot-swap capable anyhow. There is a little power button on the rear. If you slam a drive into that unit while it is powered, you run a risk of damaging either one. Plus, its drive-recognition system in its firmware very likely only works at 'boot up'. And the instructions clearly state:

From: http://www.seagate.com/files/www-content/support-content/home-entertainment/goflex%20home/_shared/docs/GoFlex%20Home%20User%20Guide.pdf

3. Connect the drive to the dock, then press the Power button to the On position.
a. Align the connector on the bottom of the drive with the connector in the base.
b. Gently press down on the drive until it clicks into place.
c. Press the Power button.

That's not hot-swappable by any stretch of the term. Hot-swappable refers to the host for the drive itself, not the remote systems viewing it.

NAS systems want to offer you a SECURE storage system, meaning that if a drive fails, you do NOT lose your data. RAID (or simple drive mirroring) is the only way to try to ensure this. No single-drive system can. The point is: a drive fails, the NAS tells you, you replace the drive, and it rebuilds the array from the error-protection data. This is why we use RAID 1, 5, 6, 10, etc. (NOT RAID 0; that's a performance-only setting). Sure, we lose a drive or two (or more) to error checking. My 12TB array has eight 2TB drives in it - 16TB total - but two drives are for error checking. This means that for me to lose data, I'd have to have a triple drive failure. While that is still possible, it is highly unlikely for a low-count drive array. The odds go up with more drives, though, so you might choose a different RAID configuration with 16 or more drives, for example.
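The capacity and fault-tolerance trade-offs described above reduce to simple arithmetic. A minimal sketch, assuming equal-size drives and ignoring filesystem and metadata overhead:

```python
# Usable capacity and guaranteed fault tolerance for common RAID levels.
# Simplified: equal-size drives, no hot spares, no metadata overhead.

def raid_summary(level, drives, drive_tb):
    usable = {
        0: drives * drive_tb,        # striping only, no redundancy
        1: drive_tb,                 # every drive is a mirror copy
        5: (drives - 1) * drive_tb,  # one drive's worth of parity
        6: (drives - 2) * drive_tb,  # two drives' worth of parity
        10: drives // 2 * drive_tb,  # striped mirrors, half lost to copies
    }[level]
    tolerated = {0: 0, 1: drives - 1, 5: 1, 6: 2, 10: 1}[level]
    return usable, tolerated   # (TB usable, failures always survivable)

# The 8 x 2 TB RAID 6 array described above: 16 TB raw, 12 TB usable,
# any two drives can fail.
print(raid_summary(6, 8, 2))
```

For RAID 10 the guaranteed figure is one failure; as noted above, a second failure is survivable only if it hits a different mirror pair.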

I have plenty of single-storage drives around the house for this and that. But anything I want to keep secure (pictures, music, videos, etc.) goes on my secure NAS. That's the whole point of it. These 4-drive (or more) units are at least nice in that they offer RAID 5 (or 5 + hot spare). That means one drive can fail and it can still rebuild the array. A hot spare is a drive that sits there waiting: should a drive fail, the spare is instantly added to the array and the rebuild is started on the fly, before you've even read the warning notification to replace the drive. RAID 6 survives a two-drive failure, and so on.

So, to do all of this - run the NIC, handle the necessary client communication protocols (CIFS, NFS, etc.), and manage the RAID array, RAID configurations, rebuild speeds, etc. - ext3 and ext4 (more likely today) are by far the preferred filesystems, because these boxes use a Linux kernel to run the NAS. Adding NTFS would take more room. And then what about Mac users? Why not HFS+ as well?!? It looks like Linux distributions will be moving to Btrfs in the future too, so they might change yet again. (Which to me is a bit odd: ext4 allows for 1 exabyte, and we're not even close to that yet. Btrfs allows for 16... whee! But there are other advantages, like its mirroring simplicity. That might actually be a HUGE advantage for NAS boxes in the future. We'll see.)

But WHY do they use Linux? Very simple: it's FREE! If they used an M$-based OS, they'd have to pay a licensing fee. Plus, quite simply, that's just how flexible Linux is: it can be broken down into base parts and tweaked for specific hardware. It's very friendly in that respect and extremely powerful.

The point is, NAS boxes that are RAID-array controlled are going to use some simple, fast, easy-to-manage format for the box itself. Right now that's a Linux kernel, and the most efficient and reliable formats for Linux are ext3 (older, with some limitations most home users would never notice) and ext4 (newer).

And even if they DID offer NTFS partition capability for the arrays, to pull the drives out and stick them in your PC, you'd still need the appropriate RAID 5 (or whichever) controller to read them. And for that matter, Windows can read ext3/4 partitions easily enough with third-party tools.

Also, FYI: NTFS partitions from Windows NT 4 are different from the NTFS used in Vista or 7. So which VERSION of NTFS do you need? (Think of it as similar to the difference between ext3 and ext4, except M$ didn't change the name between revisions - they probably should have.)

So there's your answer. If you're waiting for an NTFS NAS, the question is still "why?". But the real answer is that you'll more likely be waiting quite a while; I wouldn't expect to see them become commonplace any time soon. If you REALLY just want a single-drive NTFS NAS, just build a micro-PC and do whatever you want with it. You can easily build 2 to 4 units for the $650+ these cost anyhow... Install whatever Windows, enable file sharing however you want to restrict it, and/or FTP, etc. access. Done. For someone to bundle that functionality into a box to sell, it just isn't very profitable.
 
G

Guest

Guest
I use a Synology and you can map the shares as drives right to your Windows and swap files to your heart's content over the network.
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
[citation][nom]SirGCal[/nom]Well... no. While technically that is a network accessed storage device, it would be a stretch in any terms. It's a network accessed single hard drive. And FYI, hot-swapping does not apply to single-drive systems. ...[/citation]
Thank you for your explanation. And please excuse me if I misused the term 'hot-swapping'.

But FYI, the GoFlex Home does not need to be powered down to dismount the HDD. There's a web interface to the GoFlex Home NAS which allows you to dismount the HDD, much like how you 'safely remove' a USB drive attached to a computer. This is first-hand knowledge!

And like I've said before, a 'disadvantage' (what I don't like) of the GoFlex Home is that it is a single-drive NAS. I was looking for a RAID-capable NAS that supports NTFS disks.

And I agree, like you said, that Linux is free, and M$ might charge royalties or a licensing fee. But like I've said before, I'm willing to pay 'extra' for NTFS support!

And I agree that it might not be profitable, as you said in that last sentence. But if there's enough of a user base, then I hope this will change.

So yes, I'm still waiting for an NTFS NAS. And I'm not buying any NAS out there now - holding off - just because they don't support the Windows disk format.

And yes, I do have an old computer running as a file server (Windows Vista). But its idle power is over 70 watts. The advantage of these dedicated NAS units is that I get much lower idle power, on the order of 10 watts. And by the way, the GoFlex Home's idle power is less than 7 watts.

If you want to see where I'm going with this, I was hoping for dual usage from these home NAS systems - that is, a switch that would turn the NAS into an external enclosure that can be plugged into my Windows machine over USB 3.0 or eSATA. But that is just my wishful thinking.

 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
[citation][nom]milktea[/nom]Thank you for your explanation. And please excuse me that I might have misused the term 'hot-swapping'.But FYI, the GoFlex Home does not need to be power down to dismount the HDD. There's a web interface to the GoFlex Home NAS, which allows you to dismount the HDD. Much like how you 'safely remove' a usb drive attached to the computer. This is first hand knowledge![/citation]

And that's fine. But the port is not 'hot' when the drive is swapped out. Doing it through the interface disables the connection to the drive; if it didn't, it would be very easy to corrupt the data on the drive. You have to turn off the I/O to the drive for safe removal, and that's exactly why we have the 'safely remove hardware' feature in Windows and even Linux.

[citation][nom]milktea[/nom]And like I've said before, a 'disadvantage' (what I don't like) of the GoFlex Home is that it is a single drive NAS. I was looking for a RAID capable NAS that supports NTFS disks.And I agree, like you said, that Linux is free. And M$ might charge royalties or licensing fee. But like I've said before, I'm willing to pay 'extra' for the NTFS support!And I agree that it might not be profitable, as you said in that last sentence. But if there's enough user base, then I hope this will change. So yes, I'm still waiting for an NTFS NAS.[/citation]

You'll really be waiting a long time I suspect. But best of luck. More on that in a sec.

[citation][nom]milktea[/nom]And I'm not buying any NAS out there now, holding off, just because of the fact that they don't support Windows disk format. And yes, I do have an old computer running as a file server (Windows Vista). But it's idle power is over 70watt. The advantage of these dedicated NAS is that I get much lower idle power, in the order of 10watts. And by the way, the GoFlex Home idle power is less than 7watt.[/citation]

That's something you can fix. Even my giant server's idle power is nowhere near that high. Picking the right parts for the specific purpose can make a huge difference. Granted, you won't see 10 watts, BUT you could see even less than that if you allow it to sleep and wake on LAN. Then it is only powered up when necessary. In sleep/standby, these can draw as little as 1 watt. Heck, some motherboards even allow boot or power-on by LAN, making idle power usage effectively 0... (and yes, some good NAS boxes do the same thing, as long as you don't mind the delay on the initial query). There are many ways to trigger a WOL event. If low/no power is a priority for you, it's easy enough to get what you want right now, possibly even with your current setup.
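For reference, the WOL trigger mentioned above is just a "magic packet": 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, usually broadcast over UDP. A minimal sketch (the MAC below is a placeholder, and port 9 is merely the conventional choice; the target NIC and BIOS must have WOL enabled):

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16   # 6 + 96 = 102 bytes total

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

print(len(magic_packet("00:11:22:33:44:55")))
```

Any machine on the LAN (or a scheduled task, or a NAS itself) can send this, which is what makes the sleep-until-needed setup practical.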

[citation][nom]milktea[/nom]If you want to see where I'm going with this, I was hoping for a dual usage from these home NAS systems. That is, a switch that will turn the NAS into an external enclosure that can be plugged into my windows machine using USB3.0 or ESATA. But that is just my wishful thinking.[/citation]

For a NAS to function, it has to have its own OS (obviously). However, plugging something into your USB, etc. port uses your computer's OS. So to do this, the device would have to have some sort of bridge connection. If you actually ran a direct USB cable from one computer to another, you'd fry the ports or worse. You need a bridged cable (and obviously this could be built into the device, so it could use, say, a micro-USB-to-USB cable to connect the two). The bridge device alone, though, would add cost. And to my knowledge there isn't even a USB 3 bridge yet. As for eSATA, I don't know of any way to do a crossover or bridge over eSATA, but I've never looked. Honestly, the easiest way would probably be an Ethernet bridge. It's just a vastly different function to plug a drive into a slot versus having a whole rig act as a NAS. The closest thing I can think of right now might be smartphones and their memory cards: they can work in 'drive' mode when connected over USB, but then the local device (the phone, in this case) loses the connection. The phone (Android, which is a flavor of Linux; the iPhone doesn't have memory card slots) uses memory formatted as FAT (very old-school). But again, it would be horrible (impossible, actually) for a large NAS to use FAT.

But for the sake of argument, let's assume you had a multi-drive NAS with ext4 and the box failed for whatever reason. To access the drives, you would have to have a RAID card capable of the same RAID method used in the box. That's the bare minimum. RAID 5 cards are pretty cheap, but if you go up to RAID 6, you're talking real $. That's when those $600 boxes are actually reasonable (though IMHO they also need 8 drives to be worth it for RAID 6; 6 drives would be my minimum). The point is, regardless, there is no 'plugging it into my Windows box to get the data' without other, expensive hardware. I happen to have RAID cards lying around, but I doubt most other people do, even fellow nerds. They're very expensive pieces for very specific purposes.

Anyhow, you get the RAID card, add the drives, and boot off a DVD into Linux (there are TONS of flavors out there - CentOS, Ubuntu, etc. - with boot-and-run full DVD ISOs, for free. Download one, burn it to a blank disk, and off you go. Great for troubleshooting hardware too; I always keep a few handy). And boom, there's your data. Plus, there are some tools out there for Windows to read (only, in most cases) ext partitions directly. But instead of going through all that, just get the original box replaced under warranty, or purchase a new one, throw the drives in it, and you're off and going again. That's the wonder of RAID, and also why we use a NAS to begin with.

But heck, as I said before, you can do what you want right now and have possibly 1 W or even less at idle, maybe even with the rig you already have. If that's what's important to you, by all means, it's simple to do. I have 3 boxes running 24/7 in the house for various reasons and 4 more 'on demand'. It's very doable, and you don't need someone else to design some box to do it for you. Don't wait for someone to build some (most likely) overpriced box. You can do what you want right now. :)
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
[citation][nom]SirGCal[/nom]And that's fine. ... Don't wait for someone to build some overpriced (most likely) box. You can do what you want right now. :)[/citation]
You've mentioned some important points, and they are well worth considering.

With regard to the RAID config: if you use RAID 5 or 6, then it'll be more trouble to recover the data. But if you use RAID 1 or 0+1, then it's so much easier.

SANS DIGITAL has the MobileRAID, a dual-drive external RAID enclosure. It supports both USB 3 and eSATA interfaces. And if it's configured as RAID 1, I could literally take one HDD out, connect it to my Windows machine, and retrieve the data. That means that 1) if the MobileRAID ever goes dead, or 2) SANS Digital goes bankrupt, I would still be able to recover all my data without any 'special' or 'expensive' equipment/software. All I need is a working Windows machine.

So let's think about this... GoFlex Home already has an NTFS NAS solution, and SANS Digital already has an external RAID solution. Is it so far-fetched to think a NAS+SAN combo could exist?
Obviously, it wouldn't be a simple bridge connecting a NAS device to USB 3; there would have to be a controller chip that does the switch-over. But that is up to the HW designer to implement.

I seriously doubt that one could ever put together a homemade NTFS NAS system as modular and compact as the ones you buy. What you pay for in a NAS device is the compact integration of a file server into a small-form-factor box.

I'm pretty sure that you have the ability to put together a Linux file server with a RAID option. But then why would anyone buy a Synology? There are many advantages that you could list, and so many more personal preferences that I just don't have time to list.

And just for a thought... if I twist the picture around and all the NAS devices out there supported only NTFS, and not ext3 or some other Linux file system, wouldn't that just suck? How would you feel?
 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
[citation][nom]milktea[/nom]You've mentioned some important points. And they are well worth considered.With regards to the RAID config, if you use RAID 5 or 6, then it'll be more trouble to recover the data. But if you use RAID 1 or 0+1, then it would be so much easier.SANS DIGITAL has the MobileRAID which is the Dual Drives RAID external enclosure. It supports both USB3 as well as ESATA interface. And if config in RAID 1, I could literally take one HDD out and connect it to my Windows machine and retrieve the data.[/citation]

No. RAID 1, yes - it only does a drive mirror - but RAID 10, no. RAID 10 uses both striping and mirroring. This means it requires a minimum of 4 drives (and it has to be an even number of drives). It takes a file and, say, breaks it into 6 parts. The even-numbered parts go on drives one and two, and the odd-numbered parts go on drives three and four. This means half of your drives are used for mirroring, so you lose half of your potential storage. Think of RAID 10 as a bunch of RAID 0 stripes, paired together and mirrored (hence the nickname RAID 0+1).

RAID 5 only requires 3 drives (or any number beyond that). It puts parts 1 & 4 plus parity part 3 on the first disk, parts 2 & 5 plus parity part 2 on the second disk, and parts 3 & 6 plus parity part 1 on the third disk. This allows one drive to fail and the remaining two (or more) drives to rebuild it. Plus, you only lose one drive's worth of your array. So if you had four 1TB drives, for example, you'd still have a 3TB array, whereas with RAID 10 you're down to a 2TB array and still only have single-drive-failure security (actually you could lose two; it just depends WHICH two. If you lose drives three and four from the example above, you're screwed).

Others are a bit more confusing and rarely used (RAID 2, 3, and 4, for example). RAID 6, however, is very common. I won't go into the details because it's more complicated than RAID 5, but basically, think of it as RAID 5 with two-disk redundancy - hence why I use it for my 8-drive arrays. If you lose one disk, the most dangerous time for another one to fail is during the rebuild. While rare, that is a period when the drives are getting hammered hard, and it does happen. RAID 6 offers reasonable protection against that. So in my 8-drive example, two drives are lost to parity, and with 2TB drives I have a 12TB array. It is also a tiny bit slower than RAID 5 because it has to calculate twice the parity information.
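The parity idea behind RAID 5 (and, doubled up, RAID 6) is just XOR: the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the survivors. A toy sketch of that principle only - not a real on-disk layout:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "drives"
parity = xor_blocks(data)            # the parity "drive"

# Simulate drive 1 dying, then rebuild its contents from the rest + parity:
# A ^ C ^ (A ^ B ^ C) = B, for every byte position.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # True
```

RAID 6 adds a second, independently computed syndrome on top of this, which is why it survives two failures and costs a bit more CPU per write.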

[citation][nom]milktea[/nom]That means 1) if the MobileRAID ever goes dead, and 2) SANS Digital goes bankrupt, I would still be able to recover all my data without any 'special' or 'expensive' equipment/software. All I need is a working Windows machine.[/citation]

No again. I think you still misunderstand how RAID works. ANY RAID controller can understand RAID 5 or 6 or 1 or 10, as long as it was made to handle that level. So you can take drives out of one manufacturer's RAID 5 box and put them in another, and it should recognize the array. If you had previously set up some sort of special file sharing or something, that would be an exception, but the array itself is very much a standard. For example, we had, for a very short while, a Buffalo 4-drive RAID 5 array at work. The piece of junk failed fast. But we were like, 'heck with it'. I took one of my not-horribly-important servers offline, took the drives out of the box, stuck them into this server with a standard, inexpensive 3ware RAID card, and fired it right up. All I had to do was mount the array, go to it, and start copying stuff off. And actually, the drives were fine, so I left them there, added a share to the system, and people kept going. If SANS Digital went bankrupt, it honestly wouldn't matter either way. Get another diskless system and keep on truckin'.

[citation][nom]milktea[/nom]So let's think, about this... GoFlex Home already has an NTFS NAS solution, and SANS Digital already has an External RAID solution. Is it so far fetch to think that a NAS+SAN combo is so impossible?Obviously, it wouldn't be a simple bridge to connect a NAS device to a USB3. There has to be a controller chip that does the switch over. But this is up to the HW designer to implement.I seriously doubt that one could ever put together an NTFS NAS system that is as modular and compact as the one you buy. What you pay for in a NAS device is their compact integration of file server into a small form factor box.[/citation]

That's a whole lot of $ to spend on bare INCHES of space, though... I can show you plenty of micro or ITX solutions that would hold 2-5 HDDs and be extraordinarily small. But one other thing you might not know: RAID calculations actually take quite a bit of power. Not so bad with 1 or 0 or 10, but any of the others take a CPU (usually in hardware, unless you're doing software RAID, which I don't normally recommend for anything other than 1 or 0 or 10) to do some crunching. And memory, and cache, etc. They can't put much of that in a tiny hard drive stand. At least not yet. Having a full computer behind it is a massive performance advantage. The biggest global complaint I see about these RAIDs-in-a-box is that they are slow. RAID 0 should be extremely fast. RAID 1 is basically as slow as the drives. RAID 5 is very quick, and because data is spanned across multiple drives, it gets faster with more drives. 6 slows down a bit again, but is still very peppy considering.

The reason for NAS is multi-user, high performance with added security. So a RAID 1 almost never cuts it. 10 depends on the drives but is somewhat wasteful compared to 5. 6 is more secure than 5 for large drive-count systems.

[citation][nom]milktea[/nom]I'm pretty sure that you have the ability to put together a Linux file server that has RAID option. But then why would you buy a Synology?[/citation]

Personally, I would never... Or perhaps you missed my first post way above?

[citation][nom]milktea[/nom]There are many advantages that you can list, and so much more personal preferences that I just don't have time to list.And just for a thought... if I twist the picture around and... all the NAS devices out there only supports NTFS, and not ext3 or some other linux file system, wouldn't that just suck? How would you feel?[/citation]

Again, I really wouldn't give a hoot one way or the other. The RAID still functions, and that's the point. And no matter what the container is, I can stick it in my gaming rig with a RAID card and get the data back off. If it's NTFS, boot right up; if it's EXT#, use my Ubuntu or CentOS LiveDVDs. You make it sound like it's difficult. It really isn't. YOU can do any of this, easily. I know you can download a file. And I'm pretty sure you know how to burn a file to a disk. That's it. A fully legal, ready-to-go, fully functional OS on a DVD or even CD. Cost: $1 tops for the blank media (and that's expensive media too).

It just doesn't matter what the container of the drive system is. It just doesn't. And they use EXT flavors because they're free. To use NTFS, they would have to pay a licensing fee to M$ for every unit, deal with a more complicated OS, etc. And it's not a cheap fee either. Another advantage to Linux is that it can basically have no boot time. Almost instant-on, especially in these types of situations, stripped down to hardware code.

Remember, every company out there today cares about JUST one thing... $$$$$ and how to make more. They want to make a good product people will buy, one people need, and do it the absolutely CHEAPEST way possible. (We're talking down to parts of a penny per screw per item, etc. EVERYTHING is counted.) So honestly, doing something like this for a RAID box just... doesn't make sense on the bottom line. And honestly, there are also issues with the performance of NTFS in small servers. Especially because no one is going to license the OS from M$ for these things when they can do Linux for free. (And M$ licenses are pretty steep.) And to do NTFS through Linux, you need, in layman's terms, a translator. So performance suffers. I tried to find an example, and I did find an interesting read from a web software company (most webservers out there today run a Linux system, hence the test).

http://fsi-viewer.blogspot.com/2011/10/filesystem-benchmarks-part-i.html

The tests I was originally thinking about weren't THIS different, but it gives you some of the idea. And you're not going to be anywhere near this type of thread count, but... how stuff is stored through different file systems changes and can affect things. Plus you also have OS differences on top, but still. The point I'm trying to make is: you want a box to handle NAS work for you. And that's it. EXT flavors are your best option for the best performance and inexpensive units. Should the worst happen, it's NOT hard at all to move the disks to another system or a local RAID card. You sound at least a tiny bit tech savvy, and that's all it takes. This is super-simple stuff to do. Don't let Linux scare you. In fact, it might surprise ya.

Download one of the LiveDVD flavors, burn a disk, and just reboot your rig. If you're really paranoid, unplug your hard drives first. Ubuntu is VERY popular. CentOS I like, but it's a bit more 'techy'. It isn't some evil, totally cryptic thing. Ubuntu is known for making it very intuitive, actually. Plus, as you know, it's EXTREMELY powerful and secure. Hence why it is so popular with servers. UNIX is the real 'evil' one. And their server counts are constantly shrinking, and they require very specific hardware configurations. I use quite a few of those at work too. (Sun (Solaris) and HP (HP-UX) are the most common UNIX OSs.) I use those LiveDVDs for a lot of other things. Like tweaking hardware or troubleshooting various things. I use them all the time when someone brings me an otherwise dead computer to try to fix. Or cleaning off some really nasty malware/spyware or viruses... Or just to play around with. Heck, some of them have a few entertaining time-wasters on them too.
 

2korda2

Honorable
Oct 21, 2012
1
0
10,510
Actually, that's just YOUR plus. I've honestly wondered why they cost so much myself too, and I agree with the others. It just doesn't make much sense.
Looking at your comments - you're a professional who deals with server hardware/software daily. What do you think - how much of this does the average NAS customer know? So I would say "it's just YOU" who can buy the needed hardware in 15 minutes, set it up in 2 hours, and expect it all to work.
I've been in the IT business for 15 years (on the software development side) and so far have built all my home desktops myself. It would still take a whole weekend of research for me to buy hardware that I would be comfortable with.
Same goes for stability/reliability. I'm sure you agree - for the average NAS customer a prebuilt box will be better (less to f*k up).
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
[citation][nom]SirGCal[/nom][/citation]
With regards to RAID, yes, ideally any RAID 0, 1, 2...5, 6 should work the same regardless of the manufacturer. And I agree that when one RAID box dies, you should be able to remove the drives and move them to any other RAID box. But the reality isn't always so ideal. I've heard horror stories: drives got re-initialized when first put into a new RAID storage unit just because they weren't initialized in that box beforehand. You've probably never seen this issue because you've always built your own server solution. But if you buy one of those boxed RAID solutions, then you're at the mercy of the manufacturer. So there could be format compatibility problems between manufacturers. And I would not bet on the fact that a RAID setup I have now would work on another RAID setup from a 'different' manufacturer 10 years from now.

And with regards to file system performance... well, that's a whole other topic. I think Tom's might have a good thread on that. But what matters is that a RAID 1 setup on a dual-drive external enclosure is completely sufficient for home usage. So there's no point to even look beyond that. By the way, RAID 0 with two SSDs is so fast, it just completely saturates the SATA bandwidth. So just curious (and off topic), do any of your RAID setups come close to the 2xSSD speed?

Also, I've tried running Ubuntu in VirtualBox before. I don't find it any more appealing than Windows. And I just don't have the bandwidth to learn another O/S like Linux. For a typical 'Home' user, one O/S is sufficient. And I've chosen MS Windows.
Trying to convince me to use Linux is like me trying to convince you to use Windows. Really, no point in doing that. By the way, I do respect other O/S users: Unix, Linux, Mac. We all have our own preferences. But you are probably running multiple flavors of the above O/Ses, so it probably doesn't matter to you.

When the NAS system breaks down, the 'first' thing I want is to recover my valuable data, and the 'last' thing I want is to dig through my junk to look for a 'LiveDVD' to boot into an O/S that I've never used. The point is that, for the average home user, the simpler the better. 1 desktop (Windows), 1 monitor, 1 (NTFS) NAS, 1 internet connection, 1 keyboard, 1 mouse. Anything more than that means headache. I'm sure you are capable of much, much more than that. But I hope you see my point.

Oh, BTW, I just remembered something that is a variant of the NAS+SAN combo. Take a look at the 'Patriot Gauntlet Node'. It is a 2.5" portable WiFi + USB 3.0 external enclosure. You can access the drive through WiFi, which makes it similar to a stripped-down NAS. Or you can plug it directly into a computer using a USB 3.0 cable, which simply makes it a (fast) external enclosure. And you can format it NTFS! So it isn't so far-fetched after all to think of a NAS+SAN under the same roof.

I mean, Intel/AMD have already fused a CPU with a GPU into the same die and completely removed the North Bridge. So why can't you have a NAS+SAN combo? I'm glad that at least Patriot has thought about it. I'm just hoping that they will put a Gigabit Ethernet connector on it and put a RAID controller in that thing (for at least 2 drives).
 

SirGCal

Distinguished
Apr 2, 2010
310
0
18,780
A small medical issue had me away for a while; sorry.

But @2korda2, if you've been in the computer industry that long directly and couldn't rattle off acceptable hardware in a few seconds, I wouldn't want you working for me. I'm also on the software side. But to write good software, you have to know the hardware that is involved. Unless you're just some .NET or Java programmer who just writes in a software box. But that's not the computer industry. Or IT, for that matter. I know plenty of IT professionals who couldn't tell you a CPU from a hard drive. They just work with Windows, and if it's not Windows and doesn't fall in their 'checklist', they are totally lost (and usually still so even if it is). IT just is NOT what it used to be, and it's disgusting they are allowed the same labels.

However, the bulk of the people that come to Tom's, I suspect, are more savvy. I'd bet any one of them could easily put together a basic system in a very short amount of time. Just chip/board/RAM. Just make sure the chip fits the board and the RAM fits the board. It's pretty basic, simple stuff. And for a server like this, any single-core budget rig would be more than enough. OK, they might have to take a bit to figure out which RAID controller to use or how they work, since that is likely out of their comfort zone. As for the core, it's extremely basic. If you've built and planned your own boxes in the past and tell me right now you couldn't list SOMETHING that would work, I'd have to call you out on that one. You might spend a whole weekend, but you'd be wasting your time when you know there are almost no system requirements for a setup like this except for the drive system. If you were building an enterprise-grade drive system to host thousands of users, sure. But even for one server serving a few families, any old PC is more than enough. Find the RAID card you want to use, then get a board that can handle that format (most are PCIe today) and finish it off.

[citation][nom]milktea[/nom]With regards to RAID, yes ideally any RAID 0,1,2..5,6 should work the same regardless of the manufacturer. And I agree that when one RAID box died, you should be able to remove the drives and move it to any RAID boxes. But the reality isn't always so ideal. I've heard horror stories, drives got re-initialized when first put into a new RAID storage just because it wasn't initialized in that box prior. You probably never seen this issue because you've always build your own server solution. But if you buy one of those boxed RAID solution, then it's at the mercy of the manufacturer.[/citation]

Just because 'I' don't use them doesn't mean I've not been exposed to them... and a LOT. All of the failures come to me to retrieve the data. And I've done just about every manufacturer out there. So far, no problems. People who have these nightmares generally don't have an idea what they are doing, or just get slap-happy with them and don't do any research before installing them in the new units.

IF there were something that locked the drives to the unit specifically, that is NOT a RAID standard. That's whatever they made up themselves. This might be common with real no-name brands, maybe, but... (and you want to put your data into one of these things?!?)

But MORE IMPORTANTLY, on the flip side, you assume that if you use RAID 1 and NTFS, you'll simply be able to yank out the drive, stick it in a Windows box, and get your data... THIS may not be true either. It greatly depends on the disk header, partition, block-level optimization, etc. it used. And THOSE differences are what make drives readable by some systems and not others. So you could easily get some box, do RAID 1 NTFS, and still not be able to stick a drive into a PC. The odds are just the same as the 'nightmares' of people not being able to move RAID drives from one controller to another. And while I've used a ton of the brands out there, I'm sure there are some that don't work. Just as some will not in this situation. NTFS/EXT#, etc. are just the file systems on the drives. You also have the containers and headers. You might be familiar with MBR (Master Boot Record). Windows is also starting to use GPT (GUID Partition Table) now. But there are MANY, many others.
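To illustrate the 'containers and headers' point, here's a hypothetical Python sketch that builds a fake 512-byte MBR sector and checks its boot signature and partition type byte. The offsets used (the 0x55 0xAA signature at byte 510, the type byte at offset 4 of the first partition entry at 0x1BE) are the standard MBR layout, but the sector itself is synthetic, so it runs without touching a real disk.

```python
# Build a synthetic 512-byte MBR sector and parse the two fields that
# matter for this example: the boot signature and the partition type code.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"   # MBR boot signature at the end of the sector
entry = 0x1BE                    # offset of the first partition table entry
sector[entry + 4] = 0x83         # partition type byte: 0x83 = Linux

def parse_mbr(sec):
    """Return a human-readable partition type from a raw MBR sector."""
    if sec[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature")
    ptype = sec[0x1BE + 4]
    # 0x07 covers NTFS/exFAT, 0x83 is the classic Linux type code
    return {0x07: "NTFS/exFAT", 0x83: "Linux"}.get(ptype, hex(ptype))

print(parse_mbr(bytes(sector)))  # → Linux
```

The point being: what sits in the partition entry and header is a separate layer from the file system itself, and it's that layer that decides whether a given box will even recognize the drive.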

[citation][nom]milktea[/nom]So there could be format compatability problems between manufacturers. And I would not bet on the fact that a RAID setup I have now would work on another RAID setup from a 'different' manufacturer 10 years from now.[/citation]

RAID is an industry standard. Standards don't get changed, or they're not standards. When time dictates a change is necessary (like now), new formats are created, or old formats thought at the time senseless or pointless or 'overkill' get adopted, like RAID 50 and 60, which are becoming more popular.

[citation][nom]milktea[/nom]And with regards to the file system performances... well that's just another whole new topic. I think Tom's might have a good thread on that. But what matters is that RAID 1 setup on the Dual Drives External Enclosure is completely sufficient for Home usage. So there's no point to even look beyond that.[/citation]

You couldn't be more incorrect; it depends ENTIRELY on the home. For yourself, sure, perhaps. But (I had a long example typed out, but I'm just going to get to the point instead:) RAID 1 only makes sense for 2-drive setups, or 1 drive's worth of storage. If you need more than that, you're far better served going to another format. I.e., add one more drive (3 total) and go to RAID 5: double your storage, still with drive-failure security, and improve speeds at the same time (although 5 is becoming less popular with large drives, and even less popular as sizes go up, but that's another topic).
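The capacity math behind that advice can be sketched in a few lines of Python (illustrative only; real arrays lose a little space to metadata and formatting):

```python
# Rough usable-capacity math for common RAID levels, assuming N identical
# drives of `size_tb` TB each.

def usable_tb(level, n_drives, size_tb):
    if level == 0:
        return n_drives * size_tb        # striping, no redundancy
    if level == 1:
        return size_tb                   # full mirror: one drive's worth
    if level == 5:
        return (n_drives - 1) * size_tb  # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * size_tb  # two drives' worth of parity
    if level == 10:
        return (n_drives // 2) * size_tb # striped mirrors: half the total
    raise ValueError(f"unsupported level {level}")

# Two 2 TB drives mirrored vs. adding a third drive and going RAID 5:
print(usable_tb(1, 2, 2))   # → 2
print(usable_tb(5, 3, 2))   # → 4  (storage doubles, still one-drive fault tolerance)
```

So adding one drive and switching from RAID 1 to RAID 5 really does double usable space while keeping single-drive-failure protection, which is the trade-off being argued above.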

To me, the whole POINT of having a NAS to begin with is security (and speed) and having the items in ONE shared location. Not spanned over multiple drives or arrays (which I have done in the past with pictures by year, for example). These past sentences were more targeted at the 'do 12TB with RAID 1' stuff I had typed out before... But that's all another argument. The basic point is that single-drive storage (same as 2-drive RAID 1) is NOT enough for plenty of people for 'Home use'. Otherwise, >2-drive systems wouldn't even exist for consumer use. And there has to be a considerable number of such users for them to exist, not just a rare few. A lot of people think 2 or more TB is a lot. It really is not, depending on what you're doing. With an HD video camera, for example, you burn through space SOOO fast.

[citation][nom]milktea[/nom]By the way, RAID 0 with two SSD is so fast, just completely saturates the SATA bandwidth. So just curious (and off topic), does any of your RAID setup comes close to the 2xSSD speed?[/citation]

No, it doesn't. To do that, each drive would have to saturate its SATA port independently. Each drive is on its own SATA port (or a SAS port supporting 4 SATA 6G ports). I'd love to see the drive that could do that, but I'm not aware of any drives with sustained rates of that caliber yet. Still, the problem would be capping the network instead. Especially if you're doing this over WiFi. There is no speed in WiFi. My house is wired with Cat6E to do my home video streams in HD. WiFi can't handle that. There are a few specialized products now, but they do something different than your regular 'n' protocol and/or also change the video from its original state.
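A quick back-of-the-envelope sketch in Python makes the bottleneck point; the drive and link numbers here are assumptions for illustration, not measurements:

```python
# Even if two SSDs in RAID 0 can push ~1000 MB/s locally, a NAS client only
# sees the minimum of the array speed and the network link speed.

def effective_mbps(array_mb_s, link_gbps, overhead=0.9):
    """Sustained MB/s a client can see over the network."""
    # Convert the link from Gb/s to MB/s, with a rough protocol-overhead factor
    link_mb_s = link_gbps * 1000 / 8 * overhead
    return min(array_mb_s, link_mb_s)

ssd_raid0 = 2 * 500                     # two SSDs at an assumed ~500 MB/s each
print(effective_mbps(ssd_raid0, 1))     # → 112.5 (gigabit Ethernet is the cap)
print(effective_mbps(ssd_raid0, 10))    # → 1000  (on 10 GbE the array is the cap)
```

Which is why a blazing local RAID 0 doesn't translate into NAS speed: over gigabit Ethernet (and especially WiFi), the wire is the limit long before the drives are.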

[citation][nom]milktea[/nom]Also, I've tried running Ubuntu in a VirtualBox before. I don't find it any more appealing than Windows O/S. And I just don't have the bandwidth to learn another o/s like Linux. For a typical 'Home' user, one O/S is sufficient. And I've chosen MS Windows.Trying to convince me to use Linux is like I'm trying to convince you to use Windows. Really, no point in doing that. By the way, I do respect other O/S users, Unix, Linux, MAC. We all have our own preferences.
[/citation]

Oh, come on... Again, you TOTALLY misunderstood me and took it way out to left field. You were acting like EXT was the anti-Christ or something, and your data would explode if you couldn't get it to NTFS. Honestly, I have only one Linux box at home out of my cluster of servers and various different computers for different reasons. And that one only because I didn't want to dump another $100 on an OEM Windows license when it absolutely was NOT necessary in any way for the task.

But booting up a system that can easily read EXT formats is a joke. Ridiculously easy, to the point of being laughable. The nightmare to me would be having to deal with a failed box to begin with. (Hence building your own. MUCH less likely to fail using standard drivers, OSs, etc. than some company's 'firmware' cover-all option. Especially if they're doing non-standard stuff to an otherwise standard RAID array.) I NEVER told you to 'convert' to Linux. EVER! And frankly, I'm a bit ticked you even suggest that I did. I simply told you how simple it is to use it if you need it, and since a lot of avid computer users get a kick out of the LiveDVDs because they've never heard of such a thing (with Windows), I thought you might get a kick out of it.

Apparently, with a stupid comment like that though, you're just trying to troll me.

[citation][nom]milktea[/nom]But you are probably running multple flavors of the above O/Ses, so it probably doesn't matter to you.When the NAS system breaks down, the 'first' thing I want is to recover my valuable data, and the 'last' thing I want is to dig through my junks to look for a 'LiveDVD' to boot to an o/s that I've never used.
[/citation]

Don't have to dig, just make a new one... $1 for media. That's the whole point. An emergency situation arises; it takes just a few minutes to download and burn a new disk to do everything you need.

[citation][nom]milktea[/nom]The point is that, for an average home users, the simpler the better. 1 Desktop (Windows), 1 Monitor, 1 (NTFS) NAS, 1 internet connection, 1 keyboard, 1 mouse. Anything more than that means headache.
[/citation]

Well, I'm sorry, but with THAT hardware, I would never even suggest a NAS at all. You only have one accessing device, so just put the drives inside your computer, build a RAID 1 array, and call it a day. Most any rig made in the last decade can do on-board software RAID 1 without any more hardware than compatible drives... To spend money at all on a stand-alone NAS box for that type of configuration would just be a total waste, as well as a possible added headache, regardless of its flavor. Computer won't hold two more drives? There are tons of USB/eSATA external dual-drive boxes out there that can do the exact same thing. And better, because a lot of them let you configure one drive as the external drive while their software 'mirrors' the other one. That sidesteps the possible RAID header problems, etc. And EVEN if you had multiple devices accessing it, that's what 'shared drives' are for... And again, Wake-on-LAN, almost no power used while idle... It's pretty hard to argue FOR a NAS in that situation, to be honest. At that point, it's just people who want the new 3-letter acronym in their house but have no real practical purpose at all for it.

[citation][nom]milktea[/nom]I'm sure you are capable of much much more than that. But I hope you see my point.Oh, BTW, I just remember something that is a variant of the NAS+SAN combo. Take a look at the ' Patriot Gauntlet Node'. It is 2.5" protable WiFi + USB3.0 external enclosure. You can access the drive through WiFi, which means similar to a strip down NAS. Or you can plug that directly into a computer using USB3.0 cable, which simply means an (fast) external enclousure. And you can format it NTFS! So it isn't so far fetch after all to think of a NAS+SAN under the same roof.I mean Intel/AMD has already fused a CPU with GPU into the same die, and completely removed the North Bridge. So why can't you have a NAS+SAN combo? I'm glad that at least Patriot has thought about it. I'm just hoping that they will put a GBethernet connector to it and put a RAID controller on that thing (for at least 2 drives).[/citation]

Didn't say it couldn't be done; said it wouldn't likely be as profitable and hence wouldn't likely be done. The one you mentioned was a pretty enormous flop. You can get one now for less than half of its original price. I have one if you want it. (They never worked right, by the way, and it only accepts 2.5" drives (laptops use these drives)... WiFi on it was HORRIBLE, even sitting 5' from the router; not to mention pointlessly slow. People don't realize how slow WiFi is. The advantages are that it's easy to implement and obviously convenient.) And SAN is another topic entirely...

As for what we WANT: I want an HD camera that can format removable memory to something other than FAT (2G file limits = short time limits or file splitting)... But that hasn't happened yet either. At least not on a consumer level. Or just make them use laptop HDDs and make them REMOVABLE! Good grief... So simple. But... And the bummer is, I can NOT build one of these myself... :-/ They could format the drives to whatever they want and have a 'dock' to make them readable by computers. But no... FAT is cheaper than new development for the few thousand consumers who enjoy a true HD video recording experience.

Look, we get it. You want one. Great. You'll probably be waiting a while. I wish we both could have our wishes, but... that's not how real life tends to work. And what 'we' see as sensible and useful, companies wanting to sell millions to billions of units do not, unless they're trying to make headlines of some sort (it happens; they make something because no one has yet, regardless of the possible cost issues). But to be honest, it's more about the file systems' (EXT/NTFS) journaling capabilities, error correction, fragmentation (not NTFS's strong suit), error reporting, etc., and how to handle all of those from a tiny chip. AND how much it will cost the company in volumes of millions and more. That's all they care about.
 

milktea

Distinguished
Dec 16, 2009
599
0
18,980
[citation][nom]SirGCal[/nom]Small medical issue had me away for a while. ... [/citation]
I hope everything is fine with you SirGCal (with regards to your medical issue).
And I didn't mean to tick you off in any way. Just trying to open up a discussion and understand each other's POV from limited words. But I believe our discussion has gone way out to who knows where.

As you can tell, I'm not so much of a DIYer. I'll do some DIY, but I'd rather buy a solution.

At any rate, you mentioned that you 'would never even suggest a NAS' to the average home user? I fail to see your point there. I mean, even printers are going WiFi. Why can't external storage become network-accessible? And why would Buffalo, Seagate, D-Link, Iomega, etc. build single- and dual-HDD network attached storage? I doubt any businesses would buy these single/dual HDD NAS boxes. So they must be marketed towards home users, isn't that correct? That means there have to be some advantages to bringing these tiny NAS boxes to home users.

Are you saying that these manufacturers made the wrong marketing decision?

Oh, and sorry, I didn't mean NAS+SAN. What I meant was NAS+DAS, direct-attached.
And yes, I know the WiFi connection is horrible. That's why I'm waiting for Patriot to put a GbE port on their Gauntlet Node. But even Ethernet cannot match the speed of USB3/eSATA (DAS). That is why I'm hoping for a NAS+DAS combo. The best of both worlds in one tiny package.
 