Installing Gentoo Linux on ZFS with NVME Drive (Part 2)

Ready for More?

If you missed it, my last post looked at how to get Gentoo Linux installed on a ZFS filesystem with NVMe disks. Due to the size of this tutorial I've split it into two parts, and if you're here you should definitely have read the first part for any of the rest of this to make sense.

Last time we got our disks set up, installed the core Gentoo system, chrooted into a kernel-free Gentoo system and built a bespoke kernel. Today we're going to install that kernel, set up Grub, finish configuring our system and boot it for the first time straight from ZFS.

Back to the Kernel

Last time we'd built the kernel but not done anything else with it. It's time to copy it to your /boot folder and install its modules to your system.

make modules_install
make install

Your kernel has now been copied to /boot ready for us to use later on.

We're now ready to get everything set up for Grub. We first need to edit the Portage configuration and tell it that we're building Grub for 64-bit EFI.

nano -w /etc/portage/make.conf

Add the line:

GRUB_PLATFORMS="efi-64"

Next we want to edit the list of package keywords we will accept.

nano -w /etc/portage/package.accept_keywords

We add four packages that are marked as "in testing" rather than stable. Normally testing packages would be filtered out, but we want the latest-and-greatest of these to enable ZFS support. The "~" prefix is the "in-testing" indicator, and listing a package in this file expressly accepts versions that would normally be ignored.

sys-kernel/spl ~amd64
sys-fs/zfs ~amd64
sys-fs/zfs-kmod ~amd64
sys-boot/grub ~amd64

We can then edit the default configuration for how we will build Grub:

nano -w /etc/portage/package.use/grub

And we add a line saying that we want to compile it against libzfs.

sys-boot/grub libzfs
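As an optional sanity check before building anything, a pretend emerge should now show the testing ("~amd64") versions of these packages being selected, with libzfs appearing as an enabled USE flag against sys-boot/grub:

emerge -pv sys-boot/grub sys-fs/zfs sys-fs/zfs-kmod sys-kernel/spl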

Once we’ve got those settings done we can pull down the tools to create a ZFS initramfs and Grub itself.  Grub will also pull down the ZFS kernel modules and other utilities.

emerge bliss-initramfs grub

Depending on how old your Stage 3 tarball is you may also want to update all packages already installed:

emerge -uDNav @world

Next we're ready to install Grub. First run some grub-probe commands to ensure that "ZFS" is returned. If it is not, something is wrong – do not carry on.

cd /dev/disk/by-partlabel
grub-probe /boot
grub-probe /boot/efi

Now we’re ready to install Grub.  If you get any warnings about EFIVars (or anything else for that matter) then something is wrong – do not carry on.

grub-install --efi-directory=/boot/efi /dev/nvme0n1
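As an extra check – assuming efibootmgr is available inside the chroot, which it normally is once Grub has been emerged on an EFI system – you can list your firmware boot entries and confirm that a new entry pointing at Grub's .efi file on the EFI partition has appeared:

efibootmgr -v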

We can then open up our Grub configuration file.

nano -w /boot/grub/grub.cfg

This will be empty, so we can supply a default set of values. Make sure to change the kernel version number here from 4.9.16-gentoo-FC.01 to whatever version you have. You can confirm this by looking in /lib/modules (a quick check is shown after the menu entry below).

 set timeout=1
 set default=0
 insmod part_gpt
 insmod fat
 insmod efi_gop
 insmod efi_uga
 menuentry "Gentoo" {
   linux /@/kernels/4.9.16-gentoo-FC.01/vmlinuz root=rpool/ROOT/gentoo by=id elevator=noop quiet logo.nologo
   initrd /@/kernels/4.9.16-gentoo-FC.01/initrd
}
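As mentioned above, a quick way to confirm the exact kernel version string to use in this file is to list the installed module directories:

ls /lib/modules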

We're now ready to create an initial ramfs for ZFS that we'll modify to support NVMe. Run the bliss-initramfs tool.

bliss-initramfs

When prompted, select option "1" for ZFS, "n" as you don't want to use the suggested kernel, and then enter your kernel version (for example 4.9.16-gentoo-FC.01).

We can now create the path for our kernel and initramfs to be stored in, copy the initramfs and then move the kernel and its related files to this folder:

mkdir -p /boot/kernels/4.9.16-gentoo-FC.01
mv initrd-4.9.16-gentoo-FC.01 /boot/kernels/4.9.16-gentoo-FC.01/initrd
cd /boot
mv config-4.9.16-gentoo-FC.01 kernels/4.9.16-gentoo-FC.01/config
mv vmlinuz-4.9.16-gentoo-FC.01 kernels/4.9.16-gentoo-FC.01/vmlinuz
mv System.map-4.9.16-gentoo-FC.01 kernels/4.9.16-gentoo-FC.01/System.map
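It's worth a quick check that everything has ended up where the grub.cfg above expects it:

ls /boot/kernels/4.9.16-gentoo-FC.01

You should see config, initrd, System.map and vmlinuz listed.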

We now need to modify the initramfs. This is where it can get a little tricky. We start by going back to root's home directory and opening a new file called extract.sh.

cd /root
nano -w extract.sh

The contents of this file, as shown below, take a copy of the initrd we just created and extract it to a folder called “ird” in root’s home directory.  This allows us to edit it.

mkdir /root/ird 2>/dev/null
rm -Rf /root/ird/*
cd /root/ird
zcat /boot/kernels/4.9.16-gentoo-FC.01/initrd |cpio -idmv

We then open make.sh:

nano -w make.sh

And use the following contents to re-create our initrd and copy it back to /boot.

cd /root/ird
find . -print0 |cpio --null -ov --format=newc |gzip -9 >  /boot/kernels/4.9.16-gentoo-FC.01/initrd

Finally we make both of these scripts executable and then extract the initrd:

chmod +x extract.sh make.sh
bash ./extract.sh

The extracted files are now in /root/ird, so change into that directory to edit the initial ramdisk. We need to turn off the default ZFS-mounting behaviour and expressly mount our zpools. Open up the init file.

cd /root/ird
nano -w init

Then go to the very end of the file (CTRL+W, CTRL+X in nano) and change these three lines:

CheckAndRunTriggers
MountRoot
MountUsrIfNeeded

To these two lines:

zpool import rpool -R /mnt/root
zpool import boot -R /mnt/root

If you have created additional ZFS pools, now or in the future, you may as well add them in here too (see the example below).
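For example, if you also had a separate data pool – called "tank" here purely for illustration – you would add a matching import line alongside the other two:

zpool import tank -R /mnt/root    # "tank" is a placeholder – use your own pool name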

Once you've saved your init file, update the initrd.

bash /root/make.sh

Now you have a custom initramfs that should mount your ZFS pools even on your nvme disk.

Before we reboot we want to enable some ZFS services and disable hardware clock (since we’re using local time).

rc-update add zfs-mount boot
rc-update add zfs-share default
rc-update add zfs-zed default
rc-update delete hwclock boot
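If you want to double-check which services ended up in which runlevels, OpenRC can list them all:

rc-update show

zfs-mount should appear against boot, zfs-share and zfs-zed against default, and hwclock should no longer be attached to boot.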

Then we can set a password for root.

passwd

We finish off some of our locale settings – first by looking at the locale list, seeing which number matches our requirements and then passing this into locale set before updating our environment settings.  We can then confirm this was a success by checking the locale.

eselect locale list
eselect locale set 3
env-update && source /etc/profile
locale

It’s then time to edit your system keymap.

nano -w /etc/conf.d/keymaps

In my case I just wanted to change the first setting in this file:

keymap="uk"

Due to the changes I've made to my initramfs, and potential issues I saw arise in other testing, I have disabled parallel start-up on my system and enabled interactive boot mode so that I can stop certain services from starting. I'd recommend you do the same by editing your OpenRC config.

nano -w /etc/rc.conf

Then ensure these two lines have the following settings and are not commented out:

rc_parallel="NO"
rc_interactive="YES"

If you're using an ATI card you're going to need your GPU firmware, which can be easily installed:

emerge sys-kernel/linux-firmware

We're then ready to specify which network card we want to use in Gentoo. You can confirm the interface name by checking what is already in use:

ip addr

In my case this is enp6s0. For now I just want to tell the system to enable it and use DHCP at start-up, which is as simple as including the network card in the start-up scripts.

cd /etc/init.d
ln -s net.lo net.enp6s0
rc-update add net.enp6s0 default

There are then a couple of settings to make to specify my domain name.  I can edit the network configuration file.

nano -w /etc/conf.d/net

And specify my local domain name.

dns_domain_lo="guytp.org"
dns_domain_enp6s0="guytp.org"

I can then also edit my hosts file.

nano -w /etc/hosts

Here I want to put my computer name (GuyPc) at the end of the 127.0.0.1 and ::1 lines.

127.0.0.1    localhost GuyPc
::1          localhost GuyPc

The very last step is to modify the clock configuration.

nano -w /etc/conf.d/hwclock

Here I want to reaffirm I am using local time.

clock="local"

At this point it is time to exit the chroot and reboot.

exit
reboot

If everything goes to plan you will very shortly be at your Gentoo login prompt where you can log back in as root.

Now it's time to add a few system tools you'll want. I have added sudo (so I do not have to log in as root), htop (for tracking how everything is running), sysklogd with logrotate (as the most simple syslog setup), cronie (for running scheduled tasks), mlocate (for helping me find missing files) and a couple of network utilities I will inevitably need at some future point. I also enable SSH. I haven't run through the SSHD or sudo configuration here as that's up to your own preference.

emerge sudo htop gentoolkit app-admin/sysklogd app-admin/logrotate sys-process/cronie sys-apps/mlocate net-ftp/ftp net-misc/telnet-bsd
rc-update add sysklogd default
rc-update add cronie default
crontab /etc/crontab
rc-update add sshd default

I then create myself a local user with access to sudo and some physical devices, and set a password for it.

useradd -m -G users,wheel,audio,cdrom,cdrw,usb,video -s /bin/bash guytp
passwd guytp

The very final step is to snapshot all of this so that I have a known state I can return to at a future date.

zfs snapshot rpool/ROOT/gentoo@2017-04-11-0000-01-INSTALL
zfs snapshot rpool/HOME@2017-04-11-0000-01-INSTALL
zfs snapshot boot@2017-04-11-0000-01-INSTALL
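If you ever do need to return to this state, the rough shape of a rollback (a sketch only – note that zfs rollback targets the most recent snapshot unless you pass -r, which destroys anything newer) looks like this:

zfs rollback rpool/ROOT/gentoo@2017-04-11-0000-01-INSTALL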

We’re Done!

Quite the install. It's certainly not just an Ubuntu next, next, next, done, but it is a lot more interesting. You'll have learned a lot about how your system is set up, already discovered where most of the system configuration lives, and even hand-crafted some start-up scripts that run before your filesystem is even mounted. That's a lot more fun than a day in the land of many Linux users.

I had a tonne of problems before I wrote this guide – ranging from not booting into EFI mode to start with and then trying to recover from that (oops), to the NVMe issues we've addressed here, to my network card failing to work within Gentoo (a firmware update from Intel was required).

I’m now on a fully working Linux system (and I’m writing this up in Abiword) but these first steps with ZFS were fundamental to my understanding of the Gentoo package management and ethos and I thoroughly enjoyed two days of frustration to get here.  I’m going to have a separate video documenting my tribulations getting X working and will probably do one in the future when I move over to Wayland as well.

For now I hope this was helpful and let me know how it went for you.


4 thoughts on “Installing Gentoo Linux on ZFS with NVME Drive (Part 2)”

  1. Hi Guy, Robot,

    Hope you are well!

    So finally my questions are:

    1. Should I use gpt or mbr? – I will not be dual-booting
    2. I like the zfs backup/rescue options, can I install zfs on mbr and on an old hdd?
    3. Is it worth it since I will never run servers or raid connections?… (he says now)
    4. Can I use gpt on a bios machine? (The Dell Latitude I mentioned)

    What I know – (hope I am right)

    GPT – for UEFI (used from now on), parted-mklabel gpt

    MBR – for bios, legacy bios (old grub bootloader anyway, becoming obsolete …true?), fdisk-mklabel dos

    All this scratching is making me itch … please help!!

    Apologies for my ignorance, newbieness and being a pain.

    Have a lovely day!

    Kyri

    PS: Is it too much to ask for a run down of a Gentoo ZFS pro-audio low-latency install with both bios and uefi? I would just like to learn more as it’s been a while since I dabbled in my past IT knowledge and it’s the kind of project I would like to delve into. It’s the time constraints that kill me and Summer time is the only respite I get to really fully submerge myself. Yeah I know it probably is, sorry!


      1. So if we think of a solid state or hard drive as a book, then GPT and MBR are just like the contents page of your HDD or SSD. They announce the “chapters” that the book contains. In this case the chapters are partitions – logical divisions of your book (or disk in this case). Both GPT and MBR serve the same purpose, but GPT is newer and can address bigger “books” (MBR can address a maximum of 2TB of disk space).

      BIOS is a way to start up a computer and act as a very basic interface between hardware and software. For most of the last 30 years it’s existed solely to be skipped, but it used to be more in-the-way back in the heady 1980s. It knows nothing about modern hardware and basically just gives direct access to things the operating system may want. UEFI is a much more modern solution to the same problem and can include much more “higher level” knowledge of hardware (i.e. what a network card is) before control is handed over to an operating system.

      UEFI and BIOS can both boot from either GPT or MBR, but whether they actually will depends on the operating system and many other variables. Rule of thumb – Windows needs UEFI to boot from GPT; Linux is a happy chappy either way.

      I first used GPT/UEFI in 2006 on a MacBook Pro (which I also ran Windows on). So these are not new technologies. I most recently setup a new system with BIOS/MBR two weeks ago – so they’re far from extinct.

      There are a couple of rules – if you have a disk larger than 2TB you have to go GPT, otherwise you’ll only get access to 2TB of disk space. If you’re booting Windows from GPT then you’ll almost certainly need UEFI. Beyond that it’s up to you. There are far more tutorials from years-gone-by based on BIOS/MBR, so there’s nothing to stop you using that if you don’t need more than 2TB on your disk. Additionally, if your boot disk is under 2TB you can still use BIOS/MBR on that and GPT for secondary disks with no issue.

      If you can, an MBR/BIOS setup is probably simpler and has more examples online, but otherwise go with UEFI/GPT as it is “the latest” and there’s really no reason (other than fewer examples and some still-lacking support) not to use it. That being said, I’ve had some operating systems refuse to boot on certain motherboards with one or the other despite the fact they should – so it can be a trial-and-error, fiddle-around issue.

      Regarding your other questions on fdisk/mklabel/etc. – let’s look at the book analogy a little more. In this book each “chapter” contains something specific and could even be in a different language. A chapter in a book in this context is a partition on a disk, so if we have a disk split up into three chunks these are three partitions. Each partition has a filesystem which defines how it is structured.

      A common filesystem (very dated now) is FAT (file allocation table). FAT in effect has its own index at the start of the chapter which says where every file in that chapter is and what its name is – so it might say “myfile.txt” starts on page 5 line 3 then continues on to page 7 line 8. FAT is universally supported and woefully limited in functionality, so it tends to only get used for things that need to be cross-platform (Windows, Mac, Linux), don’t use much disk space and don’t need security (SD cards, memory sticks, etc.). Windows uses NTFS and Linux traditionally uses Ext4, whilst MacOS uses HFS+. All of these perform the same function (storing files and providing some sort of index for where they physically are) but can also offer additional functionality – security permissions for specific user access to files, directory structures, allowing different machines to access the same file simultaneously without corrupting it, restoring old versions of the data, etc.

      A basic Linux MBR system will have a single ext4 partition and a swap partition. Using UEFI tends to require an extra small partition to store UEFI information, and this must be formatted as FAT32. To set up these partitions a utility like fdisk, cfdisk or parted is used (Computer Management -> Disks in Windows). To make a particular partition use a particular filesystem you must format it (this creates a totally empty chapter in the book, ready for whatever language/structure you specify) – so formatting a partition as FAT32 means it is then usable as FAT32. When you format it, any existing contents are deleted.
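      To make that concrete, here’s a rough sketch – the device name /dev/sdX is a placeholder for your own disk, and these commands will wipe whatever is on it – of creating a GPT label, a small FAT32 EFI partition and an ext4 root partition with parted and the usual formatting tools:

      parted /dev/sdX mklabel gpt                     # new GPT "contents page" (destroys the existing partition table)
      parted /dev/sdX mkpart ESP fat32 1MiB 513MiB    # small partition for UEFI to boot from
      parted /dev/sdX set 1 boot on                   # on GPT this marks it as the EFI System Partition
      parted /dev/sdX mkpart root ext4 513MiB 100%    # the rest of the disk for Linux
      mkfs.vfat -F 32 /dev/sdX1                       # format the EFI partition as FAT32
      mkfs.ext4 /dev/sdX2                             # format the root partition as ext4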

      So to answer your questions:

      1. I’d say use MBR unless you have a 2TB or more boot disk.

      2. Yes – the Arch Linux wiki has a good tutorial for getting it up and running using MBR with ZFS. I did this recently when I was testing out various install options for this machine.

      3. ZFS is definitely worth it for many reasons other than redundancy, but it does add complexity that you may wish to avoid at the moment. It’s certainly not a pre-requisite. If you get into multiple disks and want performance later, it’s an option. Booting off ZFS in Linux is not trivial.

      4. Yes for Linux but no if you want Windows on the same disk

      Regarding doing a specific low-latency audio tutorial, unfortunately I just don’t have the knowledge to offer a good one. I may be able to get something up and working, but without knowing what “good” looks like I couldn’t comment on whether it was remotely usable for the intended purposes.


  2. Hi Guy,

    Hope you are well. Thank you for your excellent feedback, it was very helpful. I will definitely fill you in on how it all pans out. Once again thank you so much!!!

    Kyri

