SAN for Homelab – Part 3

Introduction

In part 1 and part 2 of this series I covered what I wanted to use a SAN for in my environment and what I actually needed. This time I'll look at the finer details of exactly what I'm buying to build it.

Hardware Selection

I have spent days going back and forth over various hardware configurations and changed my mind a dozen times. There is no right answer for exactly how to build your SAN, but I'll try to explain why I chose the components I did to help inform your own decisions.

The case ended up being an important choice and one that made me change other components several times over. I ended up selecting a 2U case with 12 3.5" hot-swap bays and room for two internal 2.5" drives for the operating system. I had previously considered a 4U case with over 20 bays but changed my mind: there was no chance I was going to be using 20 bays any time soon, and the cost of a motherboard, RAM, PSU and enough SATA/SAS ports to support 20+ drives would be significant. I worked out it would actually be cheaper to build a second 2U 12-bay server later than to have all that capacity there from day one. Additionally, I've only got a half-height rack so I don't want to waste space unnecessarily. The trade-off is that I can only use low-profile expansion cards in the server, and my PSU selection rules out some of the silent, modular units I'd been contemplating previously. This selection may yet change, as the particular cases I've been looking at are showing as out of stock, at which point it actually works out cheaper to go 3U.

Motherboard and CPU came next. Unlike my ESX host, where this was the most critical selection for getting the most capacity into a small, silent package, I was less worried here. I just needed enough capacity to support the amount of data transfer I'd be doing, and a high-end consumer motherboard and CPU would be sufficient. I wanted as many SAS/SATA ports on the motherboard as possible, room for as much RAM as possible, a CPU with a decent number of cores, the ability to be cooled reasonably quietly, and some software RAID implementation for my OpenIndiana operating system drives. I selected the Gigabyte GA-990FXA-UD3 motherboard with an AMD FX-6300 CPU.

RAM is really important for ZFS. You'll want a couple of gigabytes just as a baseline, but if you plan on using some of the heavier features you're going to need a lot more. ZFS's de-duplication offers great advantages for my VMware operating system drives – many of which are replicas of the same operating system – but it comes with an overhead. I've seen estimates of between 1GB and 5GB of RAM per terabyte of de-duplicated storage. With a couple of terabytes of SSD to de-duplicate, and some HDD that I'd like to de-duplicate too, I've decided to opt for 32GB, which is the maximum my board can take. The board doesn't take ECC so it's basic consumer-grade memory for me. If I'd gone for a server board I would have taken 64GB and ECC just to be on the safe side, for the relatively small incremental cost and the benefits to de-dup and caching.
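Before committing to de-dup you can at least get a feel for the overhead: ZFS can simulate the de-duplication table for data already sitting in a pool. A minimal sketch, assuming a pool called tank (the name is just a placeholder):

# Simulate the de-dup table for the pool and print a block histogram
# plus the expected dedup ratio (read-only, changes nothing)
zdb -S tank

The commonly quoted figure is roughly 320 bytes of in-memory de-dup table per unique block, which is where those 1GB–5GB per terabyte estimates come from depending on your average block size.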

Although I won't be needing a hardware RAID card, I will be needing additional SAS/SATA ports on my consumer board. The board I've selected only comes with 6 and I'll need 14 in total – 2 for my internal O/S disks (on the motherboard RAID) and 12 for the hot-swap bays. Having done some digging, it's imperative to select a card with a suitable chipset for ZFS – the key appears to be getting the drives passed through as standard disks, with no hardware RAID occurring whatsoever (as this removes many of the benefits of ZFS). The selection of supported RAID controllers/HBAs is limited, but a good guide is available at http://blog.zorinaq.com/?e=10 – from this and some other research I decided I wanted an 8-port internal HBA with the LSI SAS2008 controller. Of everything I've looked at, the best option appears to be the IBM ServeRAID M1015. Out of the box it's configured to provide hardware RAID to IBM servers, but it can be flashed into a standard LSI 9211-8i for significantly less money than the LSI-branded card – providing 8 internal SAS ports for under £100. This, twinned with the 6 on-board connectors, provides for my 12-bay case and 2 internal slots. Perfect.
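Once the card has been cross-flashed to IT-mode firmware it's worth confirming that OpenIndiana really does see each drive as a plain disk rather than a RAID volume. A quick, read-only check that assumes nothing about your device names:

# List every disk the OS can see – each populated bay should appear individually
format </dev/null

# Show per-device details (vendor, model, serial) to confirm straight pass-through
iostat -En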

Since I'll be using iSCSI I'm going to need a fairly decent network connection for this server, as it will be serving two ESX hosts. For my current needs I believe 4x 1Gbps connections should be sufficient, plus an additional port for management. Since the motherboard already comes with a single Ethernet socket for management purposes, my best bet is an Intel Pro/1000 quad-port card. With my case selection that means a low-profile PCI-E Intel Pro/1000 PT quad NIC should do the trick. I'll have this running on its own VLAN so that only my ESX hosts can access it. My Opteron host will also get one of these cards (but full height) since its two ports are already used for the management/PPPoE connections. My other ESX host is an Apple Xserve and I'll probably end up getting an additional four ports for that as well – but I need to dig a little deeper into compatibility there first. I use HP 1810-24G v2 switches in my network as they're fanless, which is useful when they're also in a family house. These should easily provide for my present storage needs.
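On the storage side, those connections will be served by COMSTAR on OpenIndiana. As a rough sketch of what presenting a LUN to the ESX hosts will look like – pool and volume names are placeholders, and the GUID comes from the create-lu output:

# Enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Create a ZFS volume and expose it as a SCSI logical unit
zfs create -V 500G tank/esx-lun0
stmfadm create-lu /dev/zvol/rdsk/tank/esx-lun0

# Make the LU visible – to everything here; in practice restrict it with host/target groups
stmfadm add-view <GUID-from-create-lu>

# Create the iSCSI target for the ESX hosts to discover
itadm create-target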

Graphics is a bit of a pain and not something you normally need to consider when building a SAN. Unfortunately, because I no longer have any vintage PCI graphics cards and because I'm buying a consumer rather than a server board, there is no built-in video output, meaning I'm going to have to buy a card. I was amazed at how difficult it is to find a really cheap second-hand card on eBay, but in the end I managed to find a new low-profile one for under £20, so long as I don't mind losing one of my PCI-E slots.

You may remember I bought the Noctua NH-U12DO heatsink and fan for my AMD ESX host last year. These things are brilliant. Whilst I'd never shelled out £100 on a single heatsink and fan before, it was money well spent. I ended up with two fans left over and used them to replace the drive bay coolers in that system, which has left my ESX host completely silent (unlike the Xserve, which sounds like a Rolls-Royce jet engine). Considering both the noise levels and the cooling offered by my last Noctua setup, I've decided to repeat it – buying a Noctua heatsink/fan to replace the stock cooler that comes with the chip, and Noctua fans for the drive bays. This may be money that isn't technically required, but it should be a good investment. This time I'm going for the 92mm Noctua CPU cooler and 80mm HDD fans. These are significantly cheaper than the ones I bought last year, but I'm hoping they still hold up in terms of quality.

The PSU has been another source of quite a lot of research. When I was considering a 4U case I'd decided I definitely wanted a near-silent, modular design to keep everything tidy. However, with everything else factored in and having settled on 2U, the options are severely limited. I have found no PSUs that offer everything I want (high wattage, near-silent and modular) in a 2U form factor. I've calculated (using some online tools) that I should need no more than 500W, so I've elected to go with a Seasonic Gold PSU in the hope that it's fairly quiet if nothing else. It offers enough connectors and power for my needs and it fits in the case, so that's a bonus. I decided against a redundant PSU, as everything else in my setup still has power as a single point of failure. Considering this is a home setup I can live with that, but elsewhere I'd have got a second power supply and ensured I had two different feeds coming into my rack.
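For what it's worth, a back-of-the-envelope peak budget (using assumed per-component figures rather than measured ones) comes out comfortably under that 500W:

CPU (FX-6300, 95W TDP)               ~95W
Motherboard, 32GB RAM, chipset       ~50W
4x 3.5" HDD at spin-up (~30W each)  ~120W
6x SSD (~5W each)                    ~30W
HBA, quad NIC, graphics, fans        ~60W
                                    -----
Peak                                ~355W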

Finally, the hard disks themselves. I've decided to go with 2x 60GB SSDs to run the operating system, striped (RAID 0) by the motherboard's RAID adapter. I've then decided on 4x 500GB SSDs and 4x 4TB nearline SAS hard disks to give me an OS/database datastore and a bulk-data datastore for all my VMs. To finish this off there will be two SFF-8087 SAS cables from the HBA to the hot-swap backplanes and one breakout cable to use four of the motherboard's SATA sockets.
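To give an idea of how those drives will be carved up once the box is built, the two RAID-Z1 pools would be created along these lines (pool and device names are illustrative only – yours will differ):

# SSD pool: 4x 500GB in RAID-Z1, roughly 1.5TB usable
zpool create ssdpool raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# HDD pool: 4x 4TB nearline SAS in RAID-Z1, roughly 12TB usable
zpool create hddpool raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0

# Compression is close to free; de-dup only where the table fits in RAM
zfs set compression=lz4 hddpool
zfs set dedup=on ssdpool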

Shopping List

Since I'm based in the UK I've been looking around for the best options on the above shipped to the UK. I've excluded shipping costs here, but in most cases they're free. Some items are shipped from the USA – namely from Newegg, who've always proven both cheap and able to deliver internationally in 1-2 working days. These prices are correct at the time of writing and are in GBP.

Gigabyte GA-990FXA-UD3 Motherboard 72.99 link
AMD FX-6300 Vishera 6-Core 3.5GHz CPU 66.99 link
Noctua NH-L9a CPU Cooler 30.99 link
X-Case RM 212 GEN II 2U 12-Hotswap Bay Case 171.60 link
Seasonic Gold 500 Watt 2U PSU 99.60 link
4x Noctua NF-A8 HDD Fans 49.96 link
IBM ServeRaid M1015 8 Port HBA 94.00 link
32GB RAM 150.00 eBay (Various)
GeForce N210 PCI-E Graphics Card 21.99 link
2x Intel Pro/1000 PT Low Profile NIC 79.90 link
Intel Pro/1000 Full Height Bracket 6.16 link
4x 4TB Western Digital Datacenter Nearline 7200RPM HDD 668.00 link
4x 500GB Crucial BX100 SSD 559.92 link
2x Patriot Blaze 2.5″ 60GB SATA III SSD for illumos 53.98 link
2x SFF-8087 1m Cables 20.24 link
Mini SAS to 4x SATA Breakout Cable 20.98 link

That gives a total of under £2,170 for 1.5TB of SSD-based RAID-Z1 and 12TB of HDD-based RAID-Z1. Without the data drives it comes in at about £940. This is about a third less than something like a Synology RS2414 and well under half the price of an equivalent HP or Dell product, whilst meeting the cooling and noise requirements that I want for a homelab rather than an enterprise data centre.

Next Time

I'm now going to go ahead and order these parts, although sadly the case I want is out of stock until July. Considering the other options I've looked at, unless I can find the same spec from another supplier I'm going to wait to complete the order. When the parts arrive I'll write a series of posts on building and setting up the server. Once it's up and running I'll cover off-site backup strategies and integrating it with my ESX hosts.
