(I'll update with pictures when I can - because everybody likes homebrew SAN pornography).
It has only been 5 years and 2 months since I registered this blog and promptly did nothing with it. Why not suddenly start using it, I ask myself. Good place for personal stuff, and this is about my home SAN/NAS, so why not? I'll probably cross-post this one at Nex7.com, as well as any other ZFS-related ones, but I'll also post other non-ZFS and even non-computer stuff up here (like the plans to move to Washington state and set up a self-sufficient rural living situation - no, I'm not kidding!).
Before I answer the question posed by the title, let me further explain my requirements. I have an existing Nexenta-running SAN with 2 TB of usable space and a whole 4 GB of RAM. It's built on an old dual-Opteron SuperMicro motherboard, and just locating 2 GB of RAM to add to the 2 GB it came with cost me nearly $300! -- oh, and the hardware is dying.
Still, it served well for a few years, and at this point is indispensable: I have multiple iSCSI, CIFS, and NFS shares hanging off of it that are in live use on my other systems, including my main day-to-day desktop (which actually lives off that SAN; the only disk in it is an 80 GB SSD for the OS). So, with the box being critical, the motherboard starting to flake out, and components in it over 6 years old, it was time to upgrade!
The solution needed to be:
- Cost-effective (sub-$1000).
- Provide a minimum of 8 TB of raw disk space (required).
- Separate OS from data (required).
- Provide ECC RAM (required - makes things much more difficult).
- Provide at least 8 GB of RAM (preference for 16 GB or more - my ARC hit rate on the old box is usually over 99%, but at peak times can drift down quite a bit, and the plan is to put the wife on it as well, and a few new boxes/VM's I have planned).
- Support ZFS, either via an illumos derivative or FreeBSD (preference on supporting both).
- Be expandable (preferred but optional).
Essentially, I wanted a "pseudo-enterprise-grade" (via ZFS and ECC RAM) SAN on a home user budget, with enough space and RAM to handle the day-to-day performance requirements of a serious power user or two, plus a few home servers.
So, was I able to do it? Short answer: almost! I slipped by $110.86 ($50.87 for you, who learn from my mistakes).
The most time-consuming part of this was the ECC RAM requirement. This is pretty critical - running a ZFS storage server without ECC RAM just rubs me the wrong way. Sure, RAM bit flips are rare, but they do happen, and if I'm going to spend all this effort checksumming all my bytes on disk, I may as well checksum my RAM as well. :)
Tied to the necessity for ECC RAM was motherboard choice - I clearly wanted a good motherboard, but at the same time, a 'budget' build can't spend $450 on a mid-grade server board! Keeping my search to desktop and low-end server motherboards led to a fairly time-consuming process of elimination, trying to find one with the features I needed: a decent collection of PCI-e slots for expansion and customization, a sufficient RAM maximum with ECC support, and so on.
I quickly found that ECC RAM on any Intel-based platform adds hundreds of dollars over your average home system, since it requires a server-grade motherboard and a server-class Intel CPU. AMD to the rescue - certain AMD chipset-based motherboards, notably many from ASUS, support ECC RAM, and do so for a combined motherboard/CPU cost a third or less that of a comparable Intel solution. It took more than a little digging to find a currently available motherboard that someone else had already purchased and stated unequivocally did support ECC RAM, but I was able to find a few (the Gigabyte equivalent to the ASUS board I got is also verified to support it, per a user on [H]ardOCP).
I also spent quite a while agonizing over cases, trying to find one that provided sufficient cooling in a home environment for at least 8 disks (I knew I only needed 4 to hit my space requirements for the next few years, but I wanted at least double the slots for future-proofing). There were a couple of good options; the one listed below was the winner for me, but feel free to experiment. I have no complaints about the case, other than that one of the 4-slot HDD bays came with a screw that couldn't be removed by any reasonable means, and cannot be put back in now (but since the HDDs are secured via screws in the bottom and the slide tray locks in pretty securely without the screw, I didn't bother to RMA).
Shopping list, with links and prices on NewEgg as of 9/10/2012 when it was all ordered:
| Component | Qty. | Price (sum) | Description | Newegg Link |
|-----------|------|-------------|-------------|-------------|
| PSU | 1 | $89.99 | SeaSonic M12II 620 Bronze 620W ATX | Link |
| MBD | 1 | $154.99 | ASUS M5A99FX PRO R2.0 | Link |
| CPU | 1 | $129.99 | AMD FX-6100 Zambezi 3.3 GHz Hex-Core | Link |
| RAM | 4 | $28.99 ($115.96) | Kingston 4GB 240-Pin DDR3 ECC Unbuffered | Link |
| OS SSD | 1 | $49.99 | OCZ Vertex Plus R2 VTXPLR2-25SAT2-60GB | Link |
| Data HDD | 4 | $99.99 ($399.96) | Seagate Barracuda STBD2000101 2TB 7200 RPM | Link |
| Case | 1 | $109.99 | Fractal Design Arc Midi Black | Link |
Compatibility notes - short list:
| OS | NIC | SATA | USB | Link |
|----|-----|------|-----|------|
| NexentaStor Enterprise 3.1.3 | ✓ (gani driver) | ✓ | 2.0 only | Link |
| NexentaStor Enterprise 4.0 (not publicly avail, yet) | ✓ (re driver) | ✓ | 2.0 only | N/A |
| Illumian 1.0 | X** | ✓ | 2.0 only | Link |
* FreeBSD 9.0 - you can snag drivers either pre-compiled or build them into the kernel after install very easily. Google for 'FreeBSD 9 Realtek 8111F' and everything you need should be on the first page.
** I wasn't able to tell - by the time I finally burned a copy of illumian I already had BSD installed, so I just looked at the installer. Unlike the installer for NexentaStor, I saw no instance of a gani0 or re0 in ifconfig; it could be that the necessary support isn't there, or it could be that the illumian installer doesn't bring up a network device until later in the process than I was willing to go. I'll err on the side of caution and say no. :)
Compatibility notes - long list:
The drives above are 4K-sector drives that report as 512-byte. On FreeBSD, the following is required to make them work properly (basically: gnop them, create the pool, export the pool, destroy the gnop devices, import again, and verify ashift is still 12):
```
bsdsan# ls -lha /dev/ada*
crw-r----- 1 root operator 0, 101 Sep 19 01:00 /dev/ada0 <-- SSD
crw-r----- 1 root operator 0, 108 Sep 19 01:00 /dev/ada0p1 -
crw-r----- 1 root operator 0, 110 Sep 18 20:00 /dev/ada0p2 -
crw-r----- 1 root operator 0, 112 Sep 18 20:00 /dev/ada0p3 -
crw-r----- 1 root operator 0, 114 Sep 19 01:00 /dev/ada1 <-- data HDD
crw-r----- 1 root operator 0, 116 Sep 19 01:00 /dev/ada2 <-- data HDD
crw-r----- 1 root operator 0, 122 Sep 19 01:00 /dev/ada3 <-- data HDD
crw-r----- 1 root operator 0, 124 Sep 19 01:00 /dev/ada4 <-- data HDD
bsdsan# gnop create -S 4096 /dev/ada1
bsdsan# gnop create -S 4096 /dev/ada2
bsdsan# gnop create -S 4096 /dev/ada3
bsdsan# gnop create -S 4096 /dev/ada4
bsdsan# zpool create home-0 mirror /dev/ada1.nop /dev/ada2.nop mirror /dev/ada3.nop /dev/ada4.nop
bsdsan# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
home-0 3.62T 452K 3.62T 0% 1.00x ONLINE -
bsdsan# zdb -C home-0 | grep ashift
bsdsan# zpool export home-0
bsdsan# gnop destroy /dev/ada1.nop /dev/ada2.nop /dev/ada3.nop /dev/ada4.nop
bsdsan# zpool import home-0
bsdsan# zdb -C home-0 | grep ashift
bsdsan# zpool status
scan: none requested
NAME STATE READ WRITE CKSUM
home-0 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0
errors: No known data errors
```
To be honest, I'm not sure of the best way to deal with the 512/4K issue on the illumos derivatives. My usual advice in a professional capacity to people building corporate SANs is to steer clear of such drives altogether, so I've never dug too far into it - and I fear answers that involve custom binaries! Since the goal of this build was not only to replace an aging home storage box, but also to familiarize myself with ZFS+FreeBSD (I get plenty of Nexenta at work), I didn't bother to research it (maybe I will later), as FreeBSD made it so painless to deal with (nice).
Also, the motherboard I chose has a Realtek 8111F NIC, which FreeBSD 9.0 has no built-in support for. You need to either compile the driver in after installation or grab a pre-compiled module, or just wait for FreeBSD 9.1. It is not hard to do - Google 'FreeBSD 9 Realtek 8111F' and the first page has all you need.
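For reference, the general shape of that fix is just a kernel module drop-in. The module and file names below are a sketch - they vary depending on where you source the driver, so check the instructions that come with it:

```shell
# Sketch only: module name varies with the driver source you download.
# Copy the pre-built (or freshly compiled) Realtek module into the kernel dir:
cp if_re.ko /boot/kernel/

# Load it now...
kldload if_re

# ...and on every boot:
echo 'if_re_load="YES"' >> /boot/loader.conf

# Confirm the NIC shows up:
ifconfig re0
```

(If you instead build the driver into a custom kernel, the loader.conf line isn't needed.)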
For those paying attention, you may note that's not quite the total I quoted. No, the difference isn't from tax. There were a few extra components I took the liberty of ordering that weren't strictly necessary (and some went unused), like Arctic Cooling MX-4 to replace the terrible thermal pad on the OEM heatsink (which I did use) and a few extra cables (which ended up not being necessary).
Those really paying attention may notice the case holds 8 drives and I only put in 4 (the OS SSD fits in a small SSD slot up in the 5.25" area, leaving you with a full 4 unused 3.5" internal HDD slots). This is because the on-board motherboard SATA controller(s) only handle a total of 7 drives. That's 1 OS and 6 data. There is a way to get to a full 8, as well as put all 8 onto a solid non-motherboard SATA controller, but it involves more cost and some extra legwork. If you're interested, look into this card: http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157.
Specifically, that card and then flashing it with an LSI IT firmware, which it will take, and it will then be probably the cheapest PCI-e SATA/SAS controller you can get that's really enterprise grade and works fine in Solaris derivatives as well as FreeBSD. You'll also need 2 mini-SAS to SATA FORWARD breakout cables (don't get reverse breakout cables). That's the route I intend to take when I need to double the spindle count in this little home SAN, but for now 8 raw TB is more than sufficient for my needs, so I opted to skip it (it adds about $200 between the card and cables).
Still, add that in and it comes to about $1250 for an 8 TB raw SAN that can be expanded to 16 TB raw for another $400, so $1650 for 16 raw TB (interestingly, that comes out to something close to $100/TB). As long as you stay to one breakout cable per mini-SAS port (so 8 disks total), SATA is no problem on OpenSolaris derivatives (reportedly less of an issue on BSD), and if you go SAS, the case becomes your limiter (you could probably cram 2-3 more drives up in the 5.25" area, and possibly more if you wanted to get crazy). Obviously 3 TB disks are rapidly coming down in price, too, so soon this same build could be had at 12 TB raw on 4 disks or 24 TB raw on 8 -- and of course, 4 TB disks take that to 16/32.
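For the curious, here's the arithmetic behind those figures (the HBA and extra-disk numbers are my rough estimates, not quotes; the base is the shopping-list total above):

```shell
# Cost arithmetic for the expansion path.
#   base   = shopping-list total from the table above
#   hba    = HBA card + two forward breakout cables (rough estimate)
#   disks4 = four more 2 TB drives at roughly $100 each
awk 'BEGIN {
  base = 1050.87; hba = 200.00; disks4 = 400.00
  printf "8 TB raw:  $%.2f\n", base + hba
  printf "16 TB raw: $%.2f ($%.2f per raw TB)\n", base + hba + disks4, (base + hba + disks4) / 16
}'
```

That prints $1250.87 for the 8 TB configuration and $1650.87 ($103.18 per raw TB) for 16 TB, which is where the "about $1250" and "close to $100/TB" figures come from.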
I'm still setting it up, after testing out all the OS's above - now it's time to get serious, so we'll see if I get motivated to post what I find out about migrating from a Solaris+ZFS solution to a BSD+ZFS solution (I can already see a few gotchas, like the differences between COMSTAR and istgt).
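As a taste of one such gotcha: on Nexenta/illumos, COMSTAR targets are built up interactively with itadm/stmfadm commands, while FreeBSD 9's istgt wants everything declared up front in its config file. A minimal LogicalUnit section backed by a zvol might look something like this - a sketch with hypothetical names, and the PortalGroup/InitiatorGroup sections it references must be defined elsewhere in the same file:

```
# /usr/local/etc/istgt/istgt.conf (fragment; names are hypothetical)
[LogicalUnit1]
  TargetName   home-vol0                      # hypothetical target name
  Mapping      PortalGroup1 InitiatorGroup1   # groups defined earlier in the file
  UnitType     Disk
  LUN0 Storage /dev/zvol/home-0/vol0 Auto     # zvol created with 'zfs create -V'
```

A config-file workflow means restarting/reloading istgt to pick up new LUs, where COMSTAR changes take effect immediately - exactly the kind of operational difference I expect to write up later.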