Don’t see this too often

This was a … memory error … caused by overclocking?

[image: centos-crash]


ZFS Rebuild Script

I’ve rebuilt my ZFS modules often enough that I’ve written a script to do a clean build, one that avoids stale kernel modules and old libraries.

#!/bin/bash
# Remove any previously installed SPL/ZFS kernel modules
sudo find /lib/modules -depth -type d -iname "spl" -exec rm -rf {} \;
sudo find /lib/modules -depth -type d -iname "zfs" -exec rm -rf {} \;
# Remove old source trees under /usr/local/src
sudo find /usr/local/src/ -type d -a \( \
   -iname "spl-*" \
   -o -iname "zfs-*" \
   \) -exec rm -rf {} \;

# Remove stale userspace libraries
sudo find /usr/local/lib/ -type f -a \( \
   -iname "libzfs*" \
   -o -iname "libzpool*" \
   -o -iname "libnvpair*" \
   \) -exec rm -f {} \;

# Build and install the latest tagged release: SPL first, then ZFS.
# Note: plain "git tag" sorts lexically, so pipe through sort -V to get
# a proper version sort before picking the newest tag.
cd spl
git reset --hard HEAD
git checkout master
git pull
git tag | sort -V | tail -1 | xargs git checkout
./autogen.sh && ./configure && make -j13 && sudo make install
cd ../zfs
git reset --hard HEAD
git checkout master
git pull
git tag | sort -V | tail -1 | xargs git checkout
./autogen.sh && ./configure && make -j13 && sudo make install

# Rebuild the initramfs and grub config so the new modules are used at boot
sudo update-initramfs -u
sudo update-grub2
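
After a reboot, it’s worth a quick sanity check that the freshly built module is the one actually loaded (assuming the zfs module is loaded at that point):

$ cat /sys/module/zfs/version
$ modinfo zfs | grep '^version'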

Made it Fit

This is a Digium card, clearly intended for a 1U or ATX case. One of my goals is to reduce the number of high-speed fans in the lab, so I repurposed my Lanner chassis. A typical twist drill bit is a poor substitute for an end mill, but the cutout came out OK once I put a rotary steel brush to the aluminum plate.

[photo: 20180216_152343-asterisk1]

Soldered new cabling

[photo: 20180216_152355-asterisk2]

Heat-shrunk cable ends fit nicely

lsblk trick

Here’s a fun trick to list the serial numbers and sizes of your hard drives:

$ lsblk --nodeps -o name,serial,size
NAME SERIAL             SIZE
sda  50026B77640B3E09 223.6G
sdb  50026B77640B4B39 223.6G
sdc  YGGU3EZD           1.8T
sdd  W1E15D5G           1.8T
sde  W1E16ACY           1.8T
sdf  W1E16BJB           1.8T
sdg  W1E5W99Y           1.8T
sdh  YFGR1V3A           1.8T
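
The column list is flexible; adding the model column makes it easier to tell the vendors apart (assuming a reasonably recent util-linux):

$ lsblk --nodeps -o name,model,serial,size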

Recent Samba Tips

I’ve been having some difficulty getting old systems, brought up to recent patch levels, to share directories. Some of these settings in smb.conf have helped me out:

security = user
ntlm auth = yes
debug level = 8
min protocol = SMB2
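
For reference, a minimal share stanza to test those settings against; the share name, path, and user below are placeholders:

[testshare]
   path = /srv/testshare
   valid users = someuser
   read only = no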

File System Thots

A brief experiment in calculating a histogram of file sizes (find’s %k prints each file’s disk usage in 1 KiB blocks):

$ find -type f -size -128000c -printf "%k\n" \
| sort -n \
| perl -ne 'BEGIN{ %h=(); } 
{chomp $_; $h{$_}++;} 
END { foreach my $k (sort {$h{$a} <=> $h{$b}} keys(%h)) { 
      print "$k $h{$k}\n"; }}'
137 3
145 3
121 3
129 5
113 7
25 10
105 14
97 21
89 29
81 35
73 38
65 60
57 92
49 165
1 221
41 317
33 781
9 4220
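
In hindsight, sort and uniq can produce the same histogram without the perl; the counts just land in the first column instead of the second:

$ find . -type f -size -128000c -printf "%k\n" | sort -n | uniq -c | sort -n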

Ubuntu 14.04 Bonding is Bonkers

It took reading through this launchpad bug to find ideas on how to get a bonding interface working on Ubuntu. This is dumb and why people hate computers: could they at least have provided a more useful syntax or better warning messages?

auto eth7
allow-bond0 eth7
iface eth7 inet manual
   bond-master    bond0
   mtu            9000

auto eth8
allow-bond0 eth8
iface eth8 inet manual
   bond-master    bond0
   mtu            9000

auto bond0
iface bond0 inet static
   address        10.52.0.1
   netmask        255.255.255.0
   network        10.52.0.0
   gateway        10.52.0.2
   bond-slaves    eth7 eth8
   bond-mode      balance-rr
   bond-miimon    100
   bond-downdelay 200
   bond-updelay   200
   mtu            9000
   use-carrier    1
   pre-up (sleep 2 && ifup eth7) &
   pre-up (sleep 2 && ifup eth8) &

Make sure all of the interfaces are down first, then rmmod bonding to clear any stale state. At that point, ifup bond0 should complain a bit, but it works.
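
Roughly the sequence that worked for me, using the interface names from the config above:

$ sudo ifdown bond0 eth7 eth8
$ sudo rmmod bonding
$ sudo ifup bond0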

Challenge of Two Cases

Small and portable PCs are an attractive computing option. Unfortunately, they are at odds with much of the technical networking world. If you merely need one large graphics card or one beefy 10GbE network card, you can get away with a Mini-ITX form factor system.

Contrast that with doing WiFi and wired network testing: often you want a system that can emulate an upstream network and emulate user clients on WiFi. These days, that means two 1GbE ports to bond against a 4×4 access point. You can maybe get by with Mini-ITX if it has multiple 1GbE ports (as some AsRock Rack motherboards do), but that’s not the common request I’m hearing.

Let’s go for two 4×4 NICs, one 3×3 NIC, and one dual-port 10GbE card: four slots. First challenge: a reliable MicroATX motherboard; a SuperMicro X11SSM-F will work pretty well. Second challenge: a case. People often don’t consider a 2U rack-mount case “portable,” and the dimensions there are often 17x14x3.5in. Most home theater PC cases are actually quite close to that size, or larger; most of SilverStone’s HTPC cases are 18x15x4.5in.

Antec has an attractive case, the VSK2000-U3, at 14x13x4in. It can sit horizontal or vertical, and it appears to be the smallest MicroATX case on the market. It comes with a 92mm PWM case fan (once you strip and re-wire the plug). It requires a TFX power supply, which limits us to 350W. That is sufficient power, but such a small PSU draws very little air through the case.

Rosewill has a very small MicroATX case that might be more useful, at 15.74x14.4x7.3in. It is bigger: a mini-tower with a vertical orientation. Anything bigger might be harder to ship, though size has little bearing on the weight. We can fit an ATX power supply in this case, allowing up to 750W with ease, plus plenty of air draw through all parts of the case.

A desktop environment is the typical setting for a portable case unit. Fans are a challenge: the premium silent fans (think Noctua) just don’t produce adequate airflow for such high heat density. We’re combining an 80W processor with ~50W of network cards right next to each other, with only about 30-50 CFM of airflow through the whole case. Coolers that fit an HTPC form factor case come with 92x15mm fans, which move about 28-35 CFM and tend to barely keep the system below 73°C. That is not adequate. A 2U server does a much better job of cooling, though at the cost of noise from its 3000 RPM fans.

So which case is better? The smaller case, where you have to discard the stock fans (swap the vertical heatsink fan for a 92x25mm ~50 CFM fan, along with the case fan), at the cost of extra effort and waste? Or the bigger case, which allows a 120mm fan on a tower cooler?