Apply ZFS driver settings without reboot

Apply your modprobe.d values without rebooting (run this as root, since a plain sudo won't carry across the redirection into /sys):

grep -Ev '^#|^\s*$' zfs.conf \
| while read -r L; do
   M=($L)               # e.g. "options zfs zfs_arc_max=4294967296"
   N=${M[2]}            # third field: the param=value token
   P=(${N/=/ })         # split on "=" into name and value
   echo "${P[1]}" > /sys/module/zfs/parameters/${P[0]}
done
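To see what the loop is doing, here is a small self-contained sketch (the zfs.conf contents below are hypothetical) that extracts the parameter/value pairs the same way:

```shell
# hypothetical zfs.conf in the usual "options zfs param=value" format
conf=$(mktemp)
cat > "$conf" <<'EOF'
# cap the ARC at 4 GiB
options zfs zfs_arc_max=4294967296
options zfs zfs_prefetch_disable=1
EOF

# same parsing as above: skip comments/blanks, take the third field,
# then split it on "=" into parameter name and value
out=$(grep -Ev '^#|^\s*$' "$conf" | while read -r _ _ kv; do
   echo "${kv%%=*} ${kv#*=}"
done)
echo "$out"
rm -f "$conf"
```

Each output line is the parameter name followed by the value that would be written into /sys/module/zfs/parameters/.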

#zfs #linux #bash


ZFS Rebuild Script

I’ve rebuilt my zfs modules often enough that I’ve written a script to do a clean build, one that avoids stale kernel modules and old libraries.

#!/bin/bash
sudo find /lib/modules -depth -type d -iname "spl" -exec rm -rf {} \;
sudo find /lib/modules -depth -type d -iname "zfs" -exec rm -rf {} \;
sudo find /usr/local/src/ -type d -a \( \
   -iname "spl-*" \
   -o -iname "zfs-*" \
   \) -exec rm -rf {} \;

sudo find /usr/local/lib/ -type f -a \( \
   -iname "libzfs*" \
   -o -iname "libzpool*" \
   -o -iname "libnvpair*" \
   \) -exec rm -f {} \;

cd spl
git reset --hard HEAD
git checkout master
git pull
git tag | sort -V | tail -1 | xargs git checkout   # sort -V so the newest tag wins
./autogen.sh && ./configure && make -j13 && sudo make install
cd ../zfs
git reset --hard HEAD
git checkout master
git pull
git tag | sort -V | tail -1 | xargs git checkout   # sort -V so the newest tag wins
./autogen.sh && ./configure && make -j13 && sudo make install

sudo update-initramfs -u
sudo update-grub2

Build OpenZFS on Ubuntu 16.04 from git

I have to import a zpool with recent features from one Ubuntu workstation to a new one, the new workstation being a fresh Ubuntu 16.04 Server install. It only has ubuntu-mate-desktop and build-essential installed. Below is an aggregation of the apt install commands I performed to get things going:

sudo apt install dkms
sudo apt install automake autoconf
sudo apt install uuid-dev
sudo apt install libblkid-dev
sudo apt install -y libattr1-dev
sudo apt install libnvpair1linux

This should get you to the point where you can run these commands:

$ git clone https://github.com/zfsonlinux/spl
$ cd spl && ./autogen.sh && ./configure && make -j13 && sudo make install
$ cd ..
$ git clone https://github.com/zfsonlinux/zfs
$ cd zfs && ./autogen.sh && ./configure && make -j13 && sudo make install
$ sudo update-grub2

Notes on Updating CentOS 7 to Kernel 4.3.3

Upgrades! Sometimes they are a lot of homework.

I enabled the CentOS-Plus repo and ELRepo for recent kernels. I figured out I want to install kernel-ml, kernel-ml-headers, and kernel-ml-devel. That last one escaped me at first, but it is necessary when you do a dkms install.

So after updating that stuff, I was able to run dkms install spl/0.6.5.3 and dkms install zfs/0.6.5.3. I also made sure to modprobe spl zfs, and linked /usr/lib/systemd/system/zfs.target into /etc/systemd/system/sysinit.target.wants. Reboot, and make sure the zfs pool returns.
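The steps above, collected as commands (a sketch from memory; the version numbers are as described, but verify the paths on your system):

```shell
# build and install the modules for the running kernel via dkms
dkms install spl/0.6.5.3
dkms install zfs/0.6.5.3
modprobe spl zfs

# wire zfs.target into boot so pools come back after a reboot
ln -s /usr/lib/systemd/system/zfs.target \
   /etc/systemd/system/sysinit.target.wants/zfs.target
```

Then reboot and check that `zpool status` shows your pool.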

To get vboxdrv working I needed to make sure I ran the VirtualBox driver setup, /usr/sbin/rcvboxdrv. Easy to forget; that’s rather new. Then (as root) do a vboxmanage extpack install ~/Downloads/VirtualBox-extpack-5.0.6.extpack, which will keep your guest VMs from blocking on missing USB-UHCI drivers.

Then go back and read my previous post on updating grub. I kept booting into kernel 3.10.x and wondering why grub2-install /dev/sda wasn’t doing its job. Ubuntu did the right amount of work with the update-grub2 script, I’ll say.

Ubuntu 15.10, ZFS 0.6.5.3…Fragile

Quick update on using a more recent version of zfs:

  • very glad I set up a user whose home directory is not under /home, because if zfs doesn’t finish a scan, there is no /home
  • attempting to install zfsnap, simplesnapshot, or anything else that depends on zfsutils all messes with the kernel module
  • this was much easier back when there were fewer options and ppa:zfslinux-stable was available; I didn’t have nearly this much difficulty
  • glad that zfs-auto-snapshot and zxfer are pretty easy to install with a Makefile
  • intrigued by zfSnap and simplesnapshot as backup tools, but damned if I’m going to install them again after my .ko’s got all messed up
  • was hoping that 0.6.5.3 would have been promoted into 15.10 by now

So, here’s hoping to getting a properly booting system :-)

Ubuntu 15.10 and ZFS

Screenshot-root@cholla:~

Some quick thoughts on doing this for my workstation:

  1. I have six 2TB drives in a raid-10 zfs pool, and they would not import on 15.10 because 15.10 ships with (or tries to ship with) zfs 0.6.4.2
  2. I decided on /boot, swap, /, mdadm partitions for OS install
  3. needed to do 15.10 server cmdline install for mdadm raid setup
  4. glad to not have attempted zfs-on-root for this distro
  5. set up three extra partitions on my two 120GB SSDs, using them for
    1. tank zil
    2. tank l2arc
    3. home pool (second pool named homer :-)
  6. Do not attempt to use PPA ubuntu/zfs-stable anymore, 15.10 will not accept it and it WILL mess with your zfsutils-linux recommended install.
  7. Somehow it ended up installing zfs-fuse: while I was installing spl-dkms and zfs-dkms and uninstalling zfsutils-linux, apt-get chose it. Why?
  8. I purged zfsutils, zfs/spl-dkms and did git clones on github/zfsonlinux/{spl,zfs}
  9. All of this required starting off with build-essential, autotools, automake, auto… and libuuid and … stuff. Not difficult to chase down.
  10. ./autogen.sh, then ./configure && make -j10 && make install for spl and zfs
  11. updated /etc/rc.local to modprobe spl zfs, zpool import tank; zpool import homer; zfs mount tank ; zfs mount homer
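For reference, the /etc/rc.local additions from step 11 look something like this (the pool names tank and homer are mine; a sketch, not verbatim from my machine):

```shell
#!/bin/sh -e
# load the freshly built modules, then bring both pools back
modprobe spl zfs
zpool import tank
zpool import homer
zfs mount tank
zfs mount homer
exit 0
```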

I am able to reboot and import without pool version warnings.

Why did I move off 14.04.x? I really want to do video editing for kid videos and all the video packages for 14.04 are way ancient.

Also:

  1. get first server install working
  2. install lubuntu-desktop
  3. replace /etc/default/grub hidden -> false
  4. default/grub -> replace “splash quiet” with “nofb”
  5. once LXDE displays, then I do a “apt-get install mate-desktop-*” which seems to work just fine.
  6. Why? lubuntu-desktop flawlessly sets up Xorg dependencies and gives me a desktop the first time without messing around wondering why mate-desktop didn’t.

Merry Xmas!

Crazy Times with zxfer

I’ve started using zxfer, which @AllanJude referred me to recently. It does a nice job. My main difficulty was getting it to work efficiently over my effective DSL speed of 10Mbps.

First, I made a copy of zxfer (zmxfer) that incorporates mbuffer. This is a crude hack, but it helps me get around the mysterious hanging transmits I have previously seen sending zfs to zfs. Mbuffer seems to smooth this out well.

$LZFS send -i "$copyprev" "$copysrc" \| \
/usr/local/bin/mbuffer -q -s 128k -m 128M \
| /usr/local/bin/mbuffer -q -s 128k -m 128M \
| $RZFS receive $option_F "$copydest" \
|| { echo "Error when zfs send/receiving."; beep; exit 1; }

My off-site transfer script ssh’s to the primary backup server, queries a list of zfs filesystems to replicate, and copies that list back:

#!/bin/bash
CMDLIST=/tmp/zxfer_cmds.txt
XFPRE=/tmp/zxfer_batch_
SK=.ssh/backup_dsa
rm -f /tmp/zxfer_cmds*
if [ `ls /tmp/xfer-* 2>/dev/null | wc -l` -gt 0 ] ; then
   echo "Previous transfer in progress, bye."
   exit 1
fi
ssh -i $SK juno ./mk_fs_list.sh || \
   { echo "Crap, didn't generate file-system list, bye."; exit 1; }
scp -i $SK juno:/tmp/vol_list /tmp || \
   { echo "Crap, didn't copy file-system list, bye."; exit 1; }
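One gotcha worth noting in scripts like this: `exit` inside `( … )` only leaves the subshell, so the script keeps going after a failure; you need `{ …; }` if you want the error branch to abort the whole script. A tiny demonstration:

```shell
# "exit 1" inside ( ) terminates only the subshell, not the caller
demo() {
   false || ( echo "handler ran"; exit 1 )
   echo "still running"   # reached, because exit only left the ( )
}
out=$(demo)
echo "$out"
```

With `{ echo "handler ran"; exit 1; }` instead, the second line would never print.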

We need to turn that list of filesystems into actual transfer commands. I create a file full of the commands to execute later:

while read FS ; do
   [ -z "$FS" ] && continue;
   PFS=`dirname $FS`
   if [ "$PFS" == "." ] ; then 
      PFS=tank
   else
      PFS="tank/$PFS"
   fi
   echo "[ ! -f /tmp/stop-xfer ] && sudo zmxfer -dFPsv \
 -O \"-i .ssh/backup_dsa ctbu@juno sudo \" \
 -N tank/$FS $PFS"
done < /tmp/vol_list > $CMDLIST
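The loop turns each filesystem path into a parent/child pair for zmxfer. Here is the same path logic run over hypothetical vol_list entries, so you can see what lands in the command file:

```shell
# hypothetical vol_list entries
printf 'media/photos\nvmstore\n' > /tmp/vol_list.example

out=$(while read FS ; do
   [ -z "$FS" ] && continue
   PFS=$(dirname "$FS")        # parent filesystem on the receiving side
   if [ "$PFS" = "." ]; then   # top-level filesystems land directly in tank
      PFS=tank
   else
      PFS="tank/$PFS"
   fi
   echo "zmxfer -N tank/$FS $PFS"
done < /tmp/vol_list.example)
echo "$out"
rm -f /tmp/vol_list.example
```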

You might think, “what a lot of sudo!” It’s good practice. I have dedicated a backup user to do this instead of root. I’ve configured the necessary sudoers file entries to make this work.

TIP: disable requiretty in sudoers [S.O.]
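For the curious, the sudoers entries look something like this (the user name ctbu matches the -O option in my transfer command; the exact command paths here are illustrative assumptions, not my literal config):

```
# /etc/sudoers.d/backup  -- edit with visudo -f
# let the dedicated backup user run the zfs tooling without a password
ctbu ALL=(ALL) NOPASSWD: /sbin/zfs, /sbin/zpool
# allow sudo from non-interactive ssh sessions
Defaults:ctbu !requiretty
```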

We want to increase the parallelism of these zfs transfers as much as possible. The time it takes to transfer zero-length snapshots in serial is prohibitive.

L=`wc -l < $CMDLIST`
Q=$(( (L + 7) / 8 ))   # lines per batch: the ceiling of L/8
split -l $Q $CMDLIST $XFPRE
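A self-contained sketch of the chunking (using `(L+7)/8`, the exact ceiling of L/8, so the commands land in eight roughly equal batches):

```shell
# split a pretend 20-line command list into 8 roughly equal batches
tmp=$(mktemp -d)
seq 1 20 > "$tmp/cmds.txt"
L=$(wc -l < "$tmp/cmds.txt")
Q=$(( (L + 7) / 8 ))          # ceiling division: 3 lines per batch here
split -l "$Q" "$tmp/cmds.txt" "$tmp/batch_"
n=$(ls "$tmp"/batch_* | wc -l)
echo "$L lines -> $n batches of at most $Q lines"
rm -rf "$tmp"
```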

Now we run these in screen, partly because ssh, sudo, and mbuffer all tend to get a bit grouchy if they can’t agree on whether they really need a tty or not, and mostly because I want to keep tabs on where any transfer hangups are. This keeps the script output collated. First we test for, and fire up, a detached screen as necessary:

screen -ls xfer | fgrep -q '.xfer' || screen -dmS xfer
sleep 1

And then we fill the screen with some commands. (We need to have a .screenrc that defines eight screens.)

i=0
for x in $XFPRE* ; do
   echo "rm /tmp/xfer-$i" >> $x   # last command in the batch clears its own flag
   cmd="touch /tmp/xfer-$i"       # flag file marks this batch as in progress
   screen -S xfer -p$i -X stuff $"$cmd\n"
   screen -S xfer -p$i -X stuff $"time bash -x $x\n"
   i=$(( i + 1 ))
done

Once this pxfer-lists.sh script of mine is run, you can connect to the screen using:

screen -S xfer -x

And watch the scripts do their stuff. (That stuff command is actually a real screen directive: it stuffs the given string into window $p as if it had been typed.)

I’ve been able to get my transfer time down from 140 minutes to about 14 minutes. I also reduced the scope of many of the backups by stopping hourly snapshots on file systems that don’t require them.

Replacing a loud fan in my home ZFS NAS

The blue-LED fan in my ZFS on Linux NAS is a bit louder than I’ve been hoping for. I am going to replace it with a 600rpm fan.

image

 

You will notice that I have drilled extra ventilation into the top case panel.

image

 

Notice the clear-plastic fan. It is held in with zip ties.

image

Zip tied those SAS cables to tidy them up.

image

See the 140mm low-profile fan on the CPU? Pretty quiet. There’s also a 120mm 600rpm exhaust fan.

image

Let’s snip some zippies!

image

image

image

Getting the fan power connected is always a chore. My fingers are almost too big.

image

image

 

Fan is now attached and power cables are managed well enough.

image

Trim it up.

image

Plenty of inlet.

image

 

Now I put it up on its shelf and get it plugged in.

image

And there’s a power switch at the farthest point back there.

image

Alright. Powered up and out of the way.

image