SAN for Homelab – Part 2

Introduction

In part 1 I explained that I was looking at building a SAN for my homelab. Last year I built a homelab ESX host using a single drive to store the data, and I’ve been looking at moving this to something more resilient. I’ll pick up where I left off – deciding exactly what my requirements are and why I’ve chosen them.

Decision Time

The fact that I’m going to want this accessible to both my ESX hosts means that I need to expose the disk space over the network. From my perspective the best option here is iSCSI (despite the fact that I’ve never done more than play with it before). It’s cheap, as it can run over my standard CAT6 infrastructure, and it’s fast. The other option for presenting a datastore to ESX would be NFS, but looking at a selection of benchmarks this carries a significant CPU penalty on any machine presenting the storage and consequently means I’d need to spend more on equipment. In the tests I looked at there were also greater IOPS over iSCSI than NFS. Plus, I want to play with iSCSI.
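For context, the ESX side of consuming an iSCSI target is also straightforward – a rough sketch using the ESXi software iSCSI initiator, where the adapter name (vmhba33) and the SAN address (192.168.1.10) are placeholders rather than anything I’ve decided yet:

    # Enable the software iSCSI initiator on the host (one-off)
    esxcli iscsi software set --enabled=true

    # Point dynamic discovery at the SAN and rescan for new LUNs
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
    esxcli storage core adapter rescan --adapter=vmhba33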

For the capacities I need I’m going to want at least 12 bays. I’ll be looking at 512GB SSDs and 4TB Nearline SAS HDDs for this capacity. These drives sit in the sweet spot of sufficient capacity while staying just shy of insane prices. Depending on my final configuration I’m estimating 4-5 of each disk, so I may as well look at a 12-bay rackmount enclosure to provide me with a hot spare in case any of my drives fail – that saves me having to wait for more drives to arrive before rebuilding my array.

Then there is the question – what to do for RAID? I’ve always used fairly simple storage configurations: for bulk data I’ve always used RAID-5 and for databases I’ve tended to use RAID-10. I’ve found that these have met my requirements well, but it was probably the best part of 20 years ago that I first set up a RAID-5 array. Considering that was so long ago, I wondered what other options there were.

After some digging I found a solution to a problem I didn’t even know existed – bitrot. In essence the odd bit of data (literally a single bit) on a drive may flip from a 1 to a 0 (or vice versa) for various reasons, be it natural failure rates or some other issue. This can seriously corrupt data and, in the case of RAID, prove problematic for rebuilds. Rather than going into great detail here, there’s an article at http://arstechnica.com/information-technology/2014/01/15/bitrot-and-atomic-cows-inside-next-gen-filesystems/ that is worth a read – but there is plenty of reading out there for you.

The data I’m storing is important but I could probably live with a small amount of data corruption without considering it the end of the world. That, however, is not the point as I’m here to learn and play as well.

After much research I came across ZFS. It is a file system designed to deal with data corruption that also includes de-duplication support (very useful for expensive SSDs containing many similar operating system images), compression, mirroring, snapshots and software RAID. Some of the caching capabilities when paired with large amounts of RAM or fast SSDs also offer interesting options to boost performance at a small cost. It was originally introduced with OpenSolaris back in 2005 but is now supported on illumos distributions (e.g. OpenIndiana), FreeBSD, Mac OS X, NetBSD and others.
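To give a flavour of what that looks like day to day, here’s a minimal sketch of creating a pool and switching those features on – the pool, dataset and device names are placeholders for illustration, not my final layout:

    # Build a pool from four disks in a RAIDZ (single-parity) layout
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Create a dataset for VM images with compression and de-duplication enabled
    zfs create tank/vmimages
    zfs set compression=on tank/vmimages
    zfs set dedup=on tank/vmimages

    # Take a point-in-time snapshot of that dataset
    zfs snapshot tank/vmimages@before-upgrade

The nice part is that the RAID level is just an argument to zpool create (mirror, raidz, raidz2 and so on) rather than something baked into a controller.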

ZFS seems to tick a lot of the boxes for my needs. Rather than using hardware RAID, the whole lot is handled by the file system. This also removes any tie between a specific RAID controller and the ability to recover data in a worst-case scenario. The ZFS community is probably second only to Apple’s for belief in their technology, but cutting through that it still feels to me like a great option, even if it is solving problems I may never encounter. Most importantly, ZFS is a system I haven’t used before and would get to experiment with. That is good enough for me, as part of the point of anything I do is to have fun and learn something new.
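The self-healing side is also pleasantly simple to operate: you periodically ask the pool to read every block back and verify it against its checksum, and anything that doesn’t match gets reported and, where redundancy exists, repaired from a good copy. A quick sketch, again assuming a pool called tank:

    # Walk every block in the pool and verify it against its checksum;
    # where redundancy is available, bad blocks are rewritten from a good copy
    zpool scrub tank

    # Check scrub progress and any checksum errors found along the way
    zpool status -v tank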

Having made the decision to use ZFS, the next question became a little moot – to build or to buy? Whilst I may be able to find an off-the-shelf system that does exactly what I want, part of the benefit of all of this would be hand-crafting and selecting exactly the right components. As such this was going to be a self-build project, the same as my ESX host.

The easiest choice of operating system for my ZFS box would have been Mac OS X, except that licensing means I couldn’t use my own custom hardware and there’s no suitable rack-mount equipment from Apple powerful enough to do what I want. Of the remaining choices I have a fair degree of experience with FreeBSD, but it has been 14 years since I last used Solaris in any way, so I decided to look at a Solaris variant. If you want an off-the-shelf and easy-to-use storage system built on top of ZFS then FreeNAS seems to be one of the most popular choices – but I wanted to get low-level and didn’t want to be restricted by a GUI. Furthermore, the majority of the features FreeNAS offers I wouldn’t be using, since all I need to present datastores to my ESX hosts is iSCSI and ZFS – so the command line will work for me. Another option to consider is napp-it, which is an illumos-based GUI for managing your ZFS needs.
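To show why I’m comfortable with the command line for this, here’s roughly what presenting a ZFS volume to ESX over iSCSI looks like on an illumos system using COMSTAR – a sketch only, with placeholder pool, volume and size values, and the real configuration to follow once I actually build it:

    # Create a 2TB ZFS volume (zvol) to back an ESX datastore
    zfs create -V 2T tank/esx-datastore1

    # Enable the COMSTAR framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # Register the zvol as a SCSI logical unit and make it visible to initiators
    stmfadm create-lu /dev/zvol/rdsk/tank/esx-datastore1
    stmfadm add-view <LU-GUID-returned-by-create-lu>

    # Create an iSCSI target for the ESX hosts to log in to
    itadm create-target

A handful of commands rather than a web interface, which suits what I’m trying to learn here.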

Since Oracle killed the OpenSolaris project, illumos seemed a good place to start. After looking at the available options, NexentaStor and OpenIndiana were the two best contenders. Not wanting to select a part-commercial platform only to decide in six months that I need to pay licensing fees, I’m going to go with the fairly simple OpenIndiana.

So this gives me the following decisions to base my SAN on:

  • iSCSI protocol to expose the storage
  • 4-5x 512GB SSDs
  • 4-5x 4TB Nearline SAS HDDs
  • ZFS file system and no hardware RAID (apart from the OS drives themselves – I’ll use RAID-10 for those)
  • OpenIndiana operating system

With this in mind I am ready to decide exactly which components are best suited to my homelab SAN.

Next Time

In Part 3 I’ll be using what I’ve decided here to construct a shopping list for my SAN with a detailed breakdown.
