iXsystems FreeNAS Mini Goes XL With 8-Bay Model

  • Thread starter: Guest
Status
Not open for further replies.
That is some expensive stuff. I feel like it'd be more cost-effective to just build a computer that functions as a NAS. I mean, I suppose it's a value for the people that want to just buy a thing that you plug in and it works, but I'd rather use my smarts and my luxury of time to save the money and build my own NAS.
 
So, are they stating it has hardware ZFS? (Is that even a thing?) If not, why would ZFS be listed as a RAID option in the specs, since it would be software-dependent?
 
More NAS systems need to support ZFS. In today's world, if you need that much storage and redundancy, ZFS is the only good option.

Would love to have eight 4TB drives in raidz3 on a 10Gb Ethernet network.
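Back-of-the-envelope: raidz3 spends three drives on parity, so an eight-drive pool keeps five drives' worth of data. A quick sketch using the hypothetical figures above:

```shell
# raidz3 usable space: (drives - 3 parity) * drive size.
# Figures are the hypothetical eight 4TB drives mentioned above.
drives=8
size_tb=4
parity=3
echo "usable: $(( (drives - parity) * size_tb )) TB"   # prints "usable: 20 TB"
```

That's raw usable space before filesystem overhead and TB-vs-TiB accounting, so real-world numbers come in a bit lower.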
 
I use Ubuntu 14 LTS with ZFS. It's amazing. Only running four 2TB drives in raidz1 (single parity). Performance is amazing, rock-solid stable, snapshots and scrubs are fast. Just had to invest in ECC RAM, which all my AMD setups support (thanks for nothing, Intel).
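For reference, a pool like that is only a few commands on ZFS-on-Linux. A rough sketch; the pool name and device names here are placeholders (in practice, use stable /dev/disk/by-id paths rather than sdX names):

```shell
# Hypothetical four-disk raidz1 pool; "tank" and /dev/sd[b-e] are placeholders.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# The snapshots and scrubs mentioned above:
zfs snapshot tank@weekly-$(date +%F)    # cheap point-in-time snapshot
zpool scrub tank                        # verify checksums across the whole pool
zpool status tank                       # watch scrub progress / pool health
```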
 
That is some expensive stuff. I feel like it'd be more cost-effective to just build a computer that functions as a NAS. I mean, I suppose it's a value for the people that want to just buy a thing that you plug in and it works, but I'd rather use my smarts and my luxury of time to save the money and build my own NAS.

Exactly what I did. Gigabyte B85 board, Pentium G3220, 64GB SSD for boot drive, IcyDock cage for hotswapping and 5x2TB WD drives in a 4u rackmount setup running CentOS and mdadm software RAID. Total cost (including the drives!) was under $900.00.
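A five-drive mdadm array like that can be sketched as follows. The RAID level and device names are assumptions (the post doesn't say which level was used); the config path matches CentOS:

```shell
# Hypothetical RAID-5 over five 2TB drives; device names are placeholders.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.ext4 /dev/md0                          # format the array
mdadm --detail --scan >> /etc/mdadm.conf    # persist assembly across reboots
cat /proc/mdstat                            # check initial sync progress
```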

Actually getting ready to get rid of it, changing over to a virtualized setup on my main rig after seeing something similar done. 3x4TB enterprise drives, IcyDock cage, CentOS, and one or two cores of my Xeon rig, and I can take that other system offline with the same amount of storage space.
 
my luxury of time to save the money

Everything has value and even though I can build my own PCs, I generally buy pre-made because I'd rather spend my time doing fun things :) If building and troubleshooting a setup yourself is 'fun' (and it totally can be!) then by all means have at. Some of us would rather just plug it in and use/forget it (as much as you can with a backup system!) to focus on our other interests :)
 
changing over to a virtualized setup on my main rig

But then you're tied to reboots on your main rig, no? I get the idea of virtualization, but generally you want it on a server you basically never reboot, right? Of course my 'main' rig is my box for everything, so I reboot it relatively frequently :) Much like I don't drive around in my Ferrari, the day-to-day is the beat-up pickup! (I wish!)
 
That is some expensive stuff. I feel like it'd be more cost-effective to just build a computer that functions as a NAS. I mean, I suppose it's a value for the people that want to just buy a thing that you plug in and it works, but I'd rather use my smarts and my luxury of time to save the money and build my own NAS.
Agreed; it's not that expensive and not that hard at all to setup a NAS on decent hardware these days, and you'll have all the same access to OpenZFS etc. that these use anyway.

It's not even as though they're the slimmest NAS units out there anyway; I have a PC case that's not a whole lot bigger, and with nine 5.25" optical drive bays it has plenty of room to fit hot-swappable 3.5" drive cages. Obviously a case designed for server use may be preferable, especially as it should be able to take a redundant power supply.

Also, it seems like these units can only use installed disks for ZIL/L2ARC caching, and only let you assign a full disk to that purpose. Personally I prefer systems that can take much faster PCIe SSD add-in modules for this, and if you set everything up yourself you can partition the disk and use it as both ZIL and L2ARC at the same time. Two separate devices is obviously preferable, but partitioning works well in many cases if you can only install one really high-performance SSD.
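Splitting a single SSD between the two roles looks roughly like this. Sizes, pool name, and device names are illustrative; a small ZIL (SLOG) is usually enough, with the rest given to L2ARC:

```shell
# Illustrative split of one SSD (/dev/nvme0n1, placeholder) into ZIL + L2ARC.
parted -s /dev/nvme0n1 mklabel gpt \
    mkpart zil   1MiB  16GiB \
    mkpart l2arc 16GiB 100%

zpool add tank log   /dev/nvme0n1p1     # small ZIL (SLOG) partition
zpool add tank cache /dev/nvme0n1p2     # remainder as L2ARC read cache
```

Note that losing an L2ARC device is harmless (it's just a cache), which is part of why sharing one SSD this way is an acceptable compromise.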
 
That is some expensive stuff. I feel like it'd be more cost-effective to just build a computer that functions as a NAS. I mean, I suppose it's a value for the people that want to just buy a thing that you plug in and it works, but I'd rather use my smarts and my luxury of time to save the money and build my own NAS.
I don't know about the XL version, but the older model was listed at $995 diskless back in July, and I saw a guy price out the parts in November at $773. So you're paying roughly $200 for someone to put it together, tweak the BIOS, build a custom kernel with the right drivers, and test it. Don't forget warranties, etc. How long would it take you? What's your time worth? If you can do it in 8 hours (one day) and your time is worth less than $25/hr, congratulations.

Me, I might do it for fun, but that's an awfully tempting price to just pay someone else to do it.
 



That's partially my point, I would take the time to build one myself because I enjoy doing that. If you don't find it as fun as I do, cool, you have other options. But also, this thing only has a one-year warranty. Which is pretty poor. For the same money or less, I could build a rig where the component with the shortest warranty would be at least 3 years.
 


I never reboot my main rig anyhow. And it's server/workstation class hardware with ECC RAM and a Xeon and the drives are Seagate enterprise 4TB drives - so it shouldn't have any issues unless the PSU goes.
 
Your hardware can be top of the line all it wants, it's still running Windows (assumption). Virtualizing your NAS under Windows is a terrible idea if you really think it's going to be up all the time. Even if your base OS is Linux, it's still a bad idea.

My Windows machines and servers usually need a reboot after every couple of weeks. Some need it every week. I could probably push them further, but the updates keep piling up.

My Linux server (ZFS) has been up for over 60 days without a hiccup. It performs updates, scrubs, and backups on a weekly basis. No need to reboot.
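That kind of weekly scrub/snapshot cycle is a couple of cron lines. The pool name and schedule here are assumptions (note the escaped `%`, which cron would otherwise treat as a newline):

```shell
# Hypothetical root crontab entries: weekly scrub and snapshot on pool "tank".
# m h dom mon dow  command
0 3 * * 0  /sbin/zpool scrub tank
0 4 * * 0  /sbin/zfs snapshot tank@auto-$(date +\%F)
```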
 
Hmmm...

Well, my uptimes for my machines:
webserver - CentOS 6.5: 121 days - had to shut it down for a hardware change
NAS - CentOS 6.5: 187 days
support server - Win7 running four virtualized CentOS 6.5 systems: 61 days - shut down for a hardware change
main computer at work - Win7 running a virtualized CentOS 6.5 file server: 184 days.

Don't tell me it 'has' to be shut down every week, or every month. The only reason I ever seem to reboot my systems is for hardware changes I perform myself... where obviously you need to shut down.
 