SAN for Homelab – Part 4 – Starting the Build

Introduction

In part 1, part 2 and part 3 of this series I looked at my reasons for setting up a fairly meaty SAN in my home lab for my ESX infrastructure and went through part selection. Over the last couple of weeks most of the parts have arrived and I’ve started the build. Whilst it isn’t finished yet, I thought I’d give a few updates on the build and some of the lessons learned along the way.

Preparing for the Build

When preparing for the build I decided to take the opportunity to fix a few other things in my business’s home lab – namely some items I never sorted out first time around that have caused ongoing frustration. In addition to the SAN itself I decided to:

  • Replace/repair my broken KVM (see this post for repair tips)
  • Replace my very messy and slightly broken HP rackmount monitor with a much older but immaculate Compaq 2U monitor, keyboard and trackball combination (again see this post for some mounting issues)
  • Add an Apple Xserve RAID to my setup (very cheap storage, and since I do a lot of work on Macs it’s a useful bit of equipment to be able to support, as well as adding handy capacity to my SAN)
  • Re-cable my entire rack – it’s become somewhat “organic” in its nature since I first acquired it
  • Add in better power management support including UPS and PDU for the equipment

In addition to the items that I described for the SAN itself last time I’ve also purchased the following – all from eBay:

  • Brother P-Touch 1000 labeller – £11.25
  • Additional label tape – £2.70
  • IEC C20 to UK socket adapter – £12.69
  • Xserve RAID with 14x 400GB disks and rails – £225.00
  • Xserve RAID full spares kit (one each of controller, PSU, fan, disk caddy and disk), originally sealed – £100.00
  • APC 2200VA 2U UPS with rails – £139.99
  • Batteries for UPS – £76.50
  • C19 mains cable for UPS with standard 13A UK plug – £4.99
  • UPS network management interface – £38.98
  • Various CAT5 cables – £0.99 each
  • Dual faceplate CAT5 socket – £3.49
  • RJ11 connectors – £1.99
  • 200x Velcro cable ties – £19.00
  • Energy meter – £10.79
  • 6x 1U rack blanking plates – £20.94
  • 24-port CAT5e patch panel – £10.80
  • Pack of M6 rack screws and bolts – £7.99
  • HP KVM interface cables – £7.48
  • Selection of C13-C14 power cables for PDU – £32.40
  • APC AP7920 PDU – £109.99 each (2 purchased)
  • APC serial cable – £3.20

Altogether this came to under £1,000 for absolutely everything needed to finish off my home lab, with the exception of some VoIP phones and upgraded wireless kit which I’ll finally sort out later in the year. A fair bit of money, but considering the volume of kit purchased it’s an absolute steal.

One key item in that list is the RJ11 connectors. I decided to move the VDSL modem that provides my main internet connection into the rack and run the VDSL line itself over my CAT5 run. To do this I needed an RJ45 to RJ11 cable, which I had to hand-make since I wanted it in the correct “telecoms” colour for my cable layout and couldn’t find anything off the shelf that did quite what I wanted.
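
If you’re making a cable like this yourself, it’s worth writing the pin mapping down before crimping anything. The sketch below is just a hypothetical way of recording it – the pins shown follow the common structured-cabling convention of carrying the line pair on the RJ45 centre (blue) pair, and are not necessarily the pins 1-4 layout in my photo further down – so double-check against your own modem and faceplate first.

```python
# Hypothetical mapping sketch only - these pin numbers follow the common
# convention (line pair on the RJ45 centre/blue pair) and may well differ
# from the wiring shown in my own photos. Verify before crimping.
RJ11_TO_RJ45 = {
    3: 4,  # RJ11 centre pin -> RJ45 pin 4 (blue)
    4: 5,  # RJ11 centre pin -> RJ45 pin 5 (white/blue)
}

for rj11_pin, rj45_pin in sorted(RJ11_TO_RJ45.items()):
    print(f"RJ11 pin {rj11_pin} -> RJ45 pin {rj45_pin}")
```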

From the list of equipment I included last time I decided to not purchase the SSDs for boot drives in the SAN for now but went ahead with most of the other components. The only changes were:

  • The case I wanted was out of stock, but the guys at XCase managed to find a similar one they had sitting around and sold me that with some rails
  • The Noctua case fans didn’t get purchased in the end; I’ve settled for the default case fans for now. I may revisit this in the future, but considering the overall noise from my rack with an Xserve and Xserve RAID is already significant, this new server isn’t adding any noticeable noise at the moment
  • I made slightly different choices on the SAS cables and 60GB SSDs, swapping to ones with better reviews

Overall, due to price changes, I ended up saving about £300 against my previous estimates – so around £2,500 in total including all the kit listed above. For what I’ve got, I’m happy with that.

Current Build Status

I spent a large chunk of last weekend, whilst suffering from the dreaded man-flu, getting a lot of this started. Some parts hadn’t arrived – such as the low-profile faceplates and rails coming from the USA, and the quad-port NICs I ordered – so I got cracking with as much as I could.

There are various pictures from the build progress below. The thing I’m happiest about so far is the cabling in my rack, which has gone from being a monstrosity to a thing of beauty in my eyes. This is the first time I’ve ever allocated the time to actually create a tidy rack. I was working on a client site last year and the rack cabling I was involved in ended up looking like this:

A major multi-national client of mine’s attempt at rack cabling

Whilst this spaghetti is definitely not my fault, I certainly piled in additional cables of my own. I’m very happy with my own cabling though, and the Velcro cable ties have been excellent. There’s a defined colour scheme, which at its core is:

  • Red – management interfaces for devices
  • Pink – iSCSI traffic
  • Green – telecoms, be it POTS, VDSL (in telephone form) or VoIP handsets
  • Yellow – router, modem (VDSL PPPoE output) or switch interlinks
  • Purple – workstations
  • Orange – miscellaneous CAT5 network devices such as printers
  • Orange – power cables from PDU or UPS
  • Black – power cables from mains to UPS
  • Blue – KVM over CAT5

A beautiful velcro-braided set of 12 CAT5e cables to be used for iSCSI after routing in rack and also showing power distribution

New rack cable routing showing management CAT5e routed through side of rack

Picture of everything in new rack with cabling from front

At the moment I’ve got everything rack-mounted, with a few exceptions:

  • No 4-port NICs for iSCSI installed in anything yet
  • Nothing is actually cabled up (since I’m waiting for these NICs and will want to re-rack all servers anyway)
  • The main SAN box that I’m building has various issues I’ll detail below
  • The monitor/keyboard isn’t yet mounted whilst I await rails
  • KVM isn’t yet cabled whilst I’m waiting for the adapters to turn up
  • The APC UPS is sitting at the bottom of the rack but isn’t mounted yet, as the correct screws only turned up today
  • Since everything is powered off I’m currently running the house’s internet off a BT Home Hub and have no access to any of my work servers – thankfully a combination of being ill and working a lot of overtime for my current client has meant I’ve survived OK for the past few days

I’ve had a selection of problems during the build. The first was rails. There’s always a problem with rails. I’ve already posted that the monitor/keyboard tray I purchased a while ago wouldn’t fit in my rack, as it’s designed for racks that are exactly 1 metre deep and mine is only 80cm. Unfortunately, the problems didn’t end there.

The APC UPS came with rails, but unfortunately not the kind that take M6 screws and cage nuts, which are standard for “square-hole” racks. Instead the rails are pre-threaded, designed to sit behind the square holes, and require 10-32 screws. This meant I had to order a bag of these to cover the 18 screws the UPS rails require (it is very heavy).

The 2U case I ordered also had issues with the depth of my rack, which was annoying as they were meant to be compatible. The server wouldn’t push back flush due to the depth of my rack, even though the rails were adjustable. I solved this with a hammer and a pair of pliers, “flattening out” the notch on the back of the outer rails and the front of the inner rails which is designed to stop the server going too far one way or the other. I figured the front of the server will always stop it going too far anyway, and since I cannot shut the back of my rack due to overhanging devices such as my Xserve (which requires 1m of depth) I’m at no loss. If you do this, be careful not to warp the metal on your rails.

The biggest problems I’ve had, though, have been with the main SAN server build itself, and there were plenty of issues here:

  1. The 1m SAS cables I ordered were too long and have ended up taking up a lot of case space – 50cm would have done
  2. I didn’t order right-angled connectors on my SAS cables, which, given the SAS card I have, caused problems getting everything into the case
  3. The consumer Gigabyte motherboard I ordered only has PWM control on its CPU fan header, despite having a second 4-pin fan socket on the board – the cabling diagram in the manual shows the fourth pin as “reserved” and not actually used. That wouldn’t be a problem in many systems, since the system header controls fan speed by voltage instead; however my case has 4x 8cm fans for the HDD bay connected to their own fan controller. This controller takes power from 2x Molex connectors and has a separate 4-pin fan connector that only uses pins 3 and 4 – to report speed and to accept PWM control – so with no PWM signal the fans always run at full speed. I can assure you that four 8cm fans running at 9,000 RPM is very noisy. I looked at different options here (including a new motherboard or a separate fan controller) and eventually went for a dirty hack: I’ve attached the CPU fan to the system fan header with a manually defined speed that should (hopefully) cover it in all scenarios – since that header controls speed via voltage rather than PWM, the CPU fan is being controlled reasonably well. I’ve then connected my 4 HDD fans to the CPU header, again with a manually defined speed. This means my fans are no longer driven directly by the temperature sensors, so I’ll probably need to tweak this and perhaps add OS-level monitoring (see the sketch after this list) if it causes problems – but at least my server no longer sounds like a jet engine.
  4. The ServeRAID M1015 card initially went into its configuration menu but showed no drives. I did a “reset to defaults” and since then it freezes the system at start-up when it initialises its option ROM. I’m going to disable option ROMs in the BIOS (with the card removed), flash the card to the standard LSI firmware instead of IBM’s, and hope that solves it. If not, I may have somehow bricked it. The four hot-swap caddies that connect to the SAS ports via an SFF-8087 to 4x SATA breakout cable are, however, detected fine.
  5. The Fibre Channel HBA I purchased to integrate with the Xserve RAID doesn’t seem to have an option ROM – I have no idea if this is even supported, and I’m currently lacking a low-profile bracket for it anyway. I’ll investigate further once I’ve got an OS installed.
  6. The graphics card I purchased turned out to be humongous – wide even for a low-profile card – and once installed it ended up either occupying a PCI-E 16x slot or covering one up (depending on where it could physically go). Since I want both my 16x slots for HBAs (SAS and Fibre Channel) I decided to be clever and remove the heatsink. I thought this couldn’t be a terrible idea since it will never be doing any heavy work – just displaying the console. In less than 30 seconds the thing died (without the chip even feeling warm). I’ve ordered a 15-year-old PCI graphics card from eBay as a replacement for £2 (plus £8 in next-day shipping), which should be with me by the weekend and will let me fit everything in the case.
  7. My cheap and quiet PSU is very large, so there is nowhere to mount the 2x internal SSDs in the case and I’m having to use 2 hot-swap bays for the OS drives – not a big issue, but frustrating as that’s one of the reasons I picked this case
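
On the OS-level monitoring I mentioned in point 3 above, a minimal sketch of the sort of thing I have in mind is below – just polling drive temperatures with smartmontools and shouting if the fixed fan speeds aren’t keeping up. The device list, threshold and output parsing are illustrative assumptions (SAS and SATA drives report temperature differently), not something I’ve actually deployed yet.

```python
#!/usr/bin/env python3
# Rough monitoring sketch only: poll drive temperatures via smartctl and warn
# if the manually fixed fan speeds aren't keeping the HDD bay cool enough.
# Device paths and the threshold are placeholders - adjust for your own system.
import re
import subprocess
import time

DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholder device list, not my actual layout
THRESHOLD_C = 45                    # placeholder warning threshold in Celsius
POLL_SECONDS = 300

def drive_temp(dev):
    """Return the reported drive temperature in Celsius, or None if unknown."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    # SAS/SCSI drives typically report "Current Drive Temperature: 31 C";
    # SATA drives expose a "Temperature_Celsius" SMART attribute instead.
    match = (re.search(r"Current Drive Temperature:\s+(\d+)", out)
             or re.search(r"Temperature_Celsius.*?-\s+(\d+)", out))
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    while True:
        for dev in DEVICES:
            temp = drive_temp(dev)
            if temp is not None and temp > THRESHOLD_C:
                print(f"WARNING: {dev} is at {temp}C - check the HDD bay fans")
        time.sleep(POLL_SECONDS)
```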

The final issue I had was that the APC UPS came with the battery tray and internal cables, and I ordered the batteries separately; however I couldn’t find any wiring guides for how these should be connected, and the only video I found simply showed how to swap out individual cells one-by-one rather than building the pack from scratch. Since the battery unit is not meant to be user-serviceable, the APC manuals were no help. Not wanting to blow up a lot of very large batteries, I refreshed my basic electronics from 20 years ago and worked out that to give the required output voltage they had to be wired in series – the positive of each battery to the negative of the next in the chain – with the tray split into two such series strings to give the correct output. Thankfully my quick maths was correct and nothing blew up, but if you’ve got this UPS and want to install fresh batteries, remember to wire each of the two sides of the tray in series.
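
As a sanity check before connecting anything, the arithmetic is simple: series strings add voltage, parallel strings add capacity. The figures below (12 V blocks, four per string) are purely illustrative assumptions rather than the exact specification of this UPS.

```python
# Illustrative figures only - check the UPS and battery datasheets, not this sketch.
BLOCK_VOLTAGE = 12.0       # nominal volts per sealed lead-acid block (assumed)
BLOCKS_PER_STRING = 4      # blocks wired positive-to-negative in series (assumed)
STRINGS_IN_TRAY = 2        # the two sides of the battery tray

string_voltage = BLOCK_VOLTAGE * BLOCKS_PER_STRING  # series wiring adds voltages
# Parallel strings keep the same voltage but add capacity (amp-hours).
print(f"Each series string: {string_voltage:.0f} V")
print(f"Tray output: {string_voltage:.0f} V at {STRINGS_IN_TRAY}x single-string capacity")
```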

My biggest concern at the moment is the ServeRAID M1015: at £100 it’s going to cost a fair bit to replace if I’ve somehow bricked it, and since it was working on receipt I can’t really claim DOA to the seller. It could be related to the fact that I originally had it in a 4x slot with a 16x connector rather than a true 16x slot – we’ll see what a re-flash does, though.

Lessons Learned

Considering I bought everything at super-knocked-down prices this may be an ironic statement to make, but don’t scrimp on parts. Second-hand kit for your rack is one thing, but it would have made a big difference to my build had I purchased a main-brand case and rails, a full-depth rack when I originally bought it (although 18U brand new for £200 was a steal) and, most importantly, a server motherboard. A server motherboard would have negated my graphics card problems, given me enough storage ports to start with rather than needing the ServeRAID, and actually offered PWM fan control on all the headers. This would probably have cost me about £300 more for CPUs and motherboard plus an additional £200 on a case – however I’ll probably have ended up spending £100 of that on dirty hacks anyway. My case is actually great for the money – but I suspect I’d have felt less stressed overall buying a Supermicro case.

The ESX host I built last year has a server-grade board and CPUs and gave me no hassles whatsoever. If I could go back again I’d spend the extra here.

We’ll see in the next part whether I should also have skipped the ServeRAID M1015 and gone for something new and under warranty. This certainly isn’t a build to replicate a dozen times over for a large data centre, purely because of the time spent sourcing all the parts and getting everything working – but it does come in orders of magnitude cheaper than an off-the-shelf solution.

Next Time

Next time I’ll give an update on my finished solution, run through some of the wiring schemes I’ve used throughout the racks and house, and cover the main point of this series – setting up ZFS on OpenIndiana. I’ve been doing some more reading in this area, particularly around improving caching, de-duplication considerations and the key concepts for sharing the same disks across multiple ESX hosts, so I’m looking forward to putting it into practice. I’ll also be covering sharing the Apple Xserve RAID’s fibre disks back out over iSCSI. Rather than paying for a full fibre setup – a fibre switch and Fibre Channel HBAs in each machine – I am using the Xserve RAID as direct-attached storage on my new SAN box for now and simply presenting it as a JBOD, so I’ll cover the configuration and management of that as well.

I’ll hopefully get most of this finished in the next week or two depending on when parts turn up and how many more cards I blow up. Stay tuned.

Pictures

  • UPS at rear of new rack
  • Battery tray for UPS once sealed up
  • Batteries once cabled in series (positive to negative) for UPS
  • Open battery tray completely uncabled
  • Battery tray removed from UPS
  • UPS with front cover removed
  • ServeRAID M1015 freezing on its option ROM
  • New SAN server with some missing rear brackets for low-profile cards
  • Beautiful mess of SAN server cabling
  • New SAN server with case on
  • Caddies for hot-swap bays with 4TB WD RE SAS drives
  • Empty caddies for hot-swap bays
  • View of interior of new SAN server prior to drive installation – PSU now quite messy
  • Additional 2-bay SSD caddy for SAN server
  • Heatsink/fan installed into new SAN server
  • CPU installed in new SAN server
  • CPU to go into new SAN server
  • RAM installed in new SAN server
  • RAM for new SAN server
  • Motherboard in new case
  • Motherboard placement plastic for new SAN server
  • New SAN server – with backplane
  • New SAN server – empty case
  • Pins 1-4 wiring I’m using for RJ11 over CAT5e
  • Power meter showing how much I’m using: kWh, current/max/min W, current A and cost
  • Original cabling in rack looking a bit of a mess
  • Rack as it was originally, looking sparse and bland
  • New rack cable routing showing management CAT5e routed through side of rack
  • Picture of everything in new rack with cabling from front
  • Attempted night picture of everything in new rack
  • New rack after routing management and iSCSI cables through the rack with required patching
  • A beautiful velcro-braided set of 12 CAT5e cables to be used for iSCSI after routing in rack, also showing power distribution
  • A beautiful velcro-braided set of 12 CAT5e cables to be used for iSCSI before rack routing
  • New rack from rear with all cabling for power distribution
  • Start of cabling for switch in new rack
  • New rack with ESX-01 and some blanking plates added to it
  • New rack after initial installation with SAN server that doesn’t fit
  • Existing rack after it has been emptied of everything apart from patch panel and switch
  • Graphics card and ServeRAID M1015 for SAN server
  • The boot SSDs used in my SAN server
  • The CPU cooler for my SAN server
  • The consumer motherboard used in my SAN server
  • A selection of new components for my homelab / SAN

5 thoughts on “SAN for Homelab – Part 4 – Starting the Build”

  1. Great work – I’m looking at doing the same myself with very similar kit: Xserve 2008 8-core, Xserve RAID, probably a Brocade switch. Very good read and great pictures.
    One question – is the RAID as loud as everyone says? And is there any chance of replacing the fans in it?

    1. Thanks for the feedback. It’s definitely loud – more than that, it puts out a hell of a lot of heat and uses a lot of power. I got the whole setup working quite well and its performance is reasonable considering the age and price of the equipment – but you have to want Apple kit to justify it. The self-built SAN is much quieter and more energy efficient, with significantly better performance.

      The heat coming out of the back of the RAID is phenomenal – I had to get air conditioning installed in my garage for the summer months, as without it the exhaust was heating the room to lethal temperatures even with it just 30 degrees C outside. Turn off the RAID and everything was fine.

      My entire rack draws about 1kW – turning off the RAID reduces this to around 600W, and the A/C added the best part of another kilowatt during 3 months this summer. It’s certainly pretty – but that has to be the main reason you want it; as a RAID array it has a lot of limitations.

      If you do go down the Apple route, before wiping OS X off the Xserve itself I’d edit the power settings within OS X to be “always on”. Once you’ve installed ESXi and removed OS X there’s no way to toggle this setting, which is a bit frustrating unless you want to do a whole reinstall just to change it. It’s also worth keeping an OS X image for the Xserve if you’re planning on running ESXi as the main OS.

  2. Just came across your site as I’m also in the process of setting up something quite similar (own an Xserve + a couple RAID units too, even). Much gratitude for detailing the process so thoroughly; you’ve saved at least one person a countless number of hours of research / development. ; )))
