FreeNAS: Installing Dovecot

Various notes on installing Dovecot on FreeNAS 11. Be aware that no Dovecot security hardening is applied here; this is a tutorial for a LAN lab environment.

    1. If the FreeNAS is a VM, make sure the virtual network adapter permits promiscuous mode. This allows jails to network.
    2. Create dataset.
    3. Adjust the Jails settings: disable DHCP
    4. Create a jail with an IP address and allow.raw_sockets=true (this allows ping). Make sure that VIMAGE is unselected.
    5. Add storage to jail. Both /usr/ports and /mnt/pool/jails/foo.
    6. Install vim:
      1. jls
      2. sudo jexec foo sh
      3. # cd /usr/ports/editors/vim
      4. # make install
    7. Update pkg metadata for jail
      1. # pkg update
    8. Install screen, dovecot from package
      1. # pkg install dovecot
      2. # pkg install screen
    9. Edit the Dovecot configuration
    10. Message from the dovecot package:
      You must create the configuration files yourself. Copy them over
      to /usr/local/etc/dovecot and edit them as desired:
      cp -R /usr/local/etc/dovecot/example-config/* \
      The default configuration includes IMAP and POP3 services, will
      authenticate users against the system's passwd file, and will use
      the default /var/mail/$USER mbox files.
      Next, enable dovecot in /etc/rc.conf:
      To avoid a risk of mailbox corruption, do not enable the
      security.bsd.see_other_uids or .see_other_gids sysctls if Dovecot
      is storing mail for multiple concurrent users (PR 218392).
      If you want to be able to search within attachments using the
      decode2text plugin, you'll need to install textproc/catdoc, and
      one of graphics/xpdf or graphics/poppler-utils.
    11. We’ll skip the IMAP search features for now
    12. Let’s create a user inside this jail
      1. adduser, nologin, use a password
    13. Verify /var/mail/kathy exists
    14. Check the Dovecot QuickConfiguration documentation
    15. in /usr/local/etc/dovecot…
      1. should be using conf.d/auth-system.conf.ext
      2. conf.d/10-auth.conf: disable_plaintext_auth = no
    16. /etc/pam.d
      1. create dovecot:
      2. auth    required    pam_unix.so
        account required    pam_unix.so
    17. conf.d/10-mail.conf : mail_location = maildir:~/Maildir
    18. 10-master.conf: comment out pop3
    19. 10-ssl.conf : ssl=no
    20. in ../dovecot.conf: remove ‘::’ as interface to listen on
    21. # service dovecot restart
    22. Now attempt to hit it with Thunderbird, using settings like this:
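Steps 14 through 20 boil down to roughly these settings. The paths are from the stock FreeBSD dovecot package; check the option spellings against the example configs you copied in:

```
# conf.d/10-auth.conf
disable_plaintext_auth = no
!include auth-system.conf.ext

# conf.d/10-mail.conf
mail_location = maildir:~/Maildir

# conf.d/10-ssl.conf
ssl = no

# dovecot.conf
listen = *        # drop '::' unless the jail has IPv6
```

Again: plaintext auth with ssl=no is only reasonable in a LAN lab.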

Next is Outlook

Follow these directions on adding an IMAP account to Outlook.

If it’s really old, Microsoft suggests this article.



Make New Human Language for Open Source

I was listening to Changelog episode 242, and when James Long described a typical yet ideal experience of posting libre code to GitHub as an unexpected burden, it jumped out at me that GitHub, Google Code, SourceForge, and a dozen other libre code hosting sites could step forward and change that burden by altering the human language around posting a new project.

An earnest developer wants to be known for creative thinking, and their thesis is a code dump. Yet the environment of coders is young, scrappy, and hungry: if the code is useful, the originator is bombarded by the obligations of maintenance and interaction with the audience.

If things that were experiments, proofs of concept, or insights, were not subject to maintenance requests, what would happen? Would the authors gain a kind of reprieve from obligation to participate? Should services like GitHub develop a code diary publishing form such that these ideas can be read in terse form but not pulled?

What form should appropriate feedback take? Comments, or a higher bar, such as patches? Or should we prohibit code in order to reduce obligation and encourage a longer form conversation: a code diary reply post?

The anxious among us immediately worry that anything synthesized, even as pseudocode, is somehow salient knowledge: anyone reading and not replying might be copying and posting an app while the original author is doing their best to maintain a healthy sleep schedule. Is this anxiety actually unfounded, though? Unlikely.

My point is really to focus on how we can better steward notions of experiments, proofs of concept, and insights in a libre coding realm. The notion of having a one-shot idea is perfectly valid for some people: they publish a proof of concept, something larger than a gist yet beneath the definition of a library or application. Maybe those notions could be automatically published only as Markdown or PDF. Or maybe they could not be pulled or cloned. Or maybe the appropriate mechanism is for the publishing platform to synthesize an animation of the usage and help the author produce a technical blog post?

What other language distinctions have I not touched on that might be pertinent? Are there other situations that could be considered? As a parent challenged with compulsive coding urges, I love this topic. Chime in!

Updated Bash CPU Meter

In August, I did a post on a pretty simple bash CPU meter. This one is still Intel-only, but it records the range of frequencies used during a report.

function get_mhz() {
   # Sample the "cpu MHz" lines of /proc/cpuinfo 15 times,
   # appending one whole-number reading per core to mhz[].
   local line m c i
   mhz=()
   for i in `seq 1 15`; do
      c=0
      while read -r line; do
         if [[ $line =~ ^cpu\ MHz ]]; then
            m=${line##*: }
            mhz[c]+="${m%.*} "
            (( c = c + 1 ))
         fi
      done < /proc/cpuinfo
      sleep 0.1s
   done
}

# main
COLWID=16       # columns reserved for the text label
MAXTURBO=3900   # assumed max turbo in MHz -- set this for your CPU
while [[ 1 = 1 ]]; do
   COLS=`tput cols`
   get_mhz
   for cpunum in "${!mhz[@]}"; do
      highest=0
      lowest=0
      for H in ${mhz[$cpunum]}; do
         (( $H > $highest )) && highest=$H
         (( $lowest == 0  )) && lowest=$H
         (( $H < $lowest  )) && lowest=$H
      done
      # pad with spaces up to the lowest reading, then draw the range
      outline=""
      bars=$(( ($lowest * ($COLS-$COLWID) )/$MAXTURBO ))
      for (( L=1; L<=$bars; L++ )); do outline+=" "; done
      bars=$(( (((1+$highest)-$lowest) * ($COLS-$COLWID))/$MAXTURBO ))
      for (( L=1; L<=$bars; L++ )); do outline+="#"; done
      echo "cpu$cpunum $lowest-$highest $outline"
   done
   echo ""
   sleep 0.1
done
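Since the /proc/cpuinfo parsing is the fiddly part, here is that extraction isolated into a tiny helper that can be tested against captured text (this helper is my addition, not part of the original post):

```shell
# Print one whole-number MHz reading per "cpu MHz" line on stdin.
mhz_values() {
   while read -r line; do
      case "$line" in
         "cpu MHz"*)
            m=${line##*: }    # text after the colon, e.g. "1200.051"
            echo "${m%.*}"    # strip the decimals -> "1200"
            ;;
      esac
   done
}

# usage: mhz_values < /proc/cpuinfo
```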

LZOP is my friend

I’ve been doing a lot of disk cloning lately, working up instructions for duplicating a “sysprep” style Fedora image for the LANforge product. Now that live CDs tend to allow live installation of packages, I can boot one, plug in a stick with the archive of my raw image, install pv and lzop, and I’m off to the races.

Of course, you first need to destroy whatever was on /dev/sda:

wipefs -af /dev/sda

Install your utilities:

dnf install -y pv lzop

Then throw on your image:

pv -pet -B $((1024*1024*1024)) /media/images/frozen-sda.raw.lzop \
| lzop -dc | dd iflag=fullblock oflag=direct bs=1M of=/dev/sda

I think this is super cool. After the image gets thrown on the drive, you need to run parted on it to correct any partition glitches, then fsck the partitions.


Beware the Hubris of FreeBSD

I hear a lot of very proud talk from various BSD zealots, mostly about how if you want better uptime/throughput/correctness, you should drop Linux like a rock crawling with centipedes and pick up the shiny golden nugget of FreeBSD. This will not work for me.

While Linux might be “only good enough” in their eyes, in my eyes it’s been better than Windows for over 20 years. I hear quiet, off-hand mumbling about the state of graphics drivers and laptop support on FreeBSD. I hear they only recently got preliminary support for UEFI booting.

Various interviews with FreeBSD proponents often start with “oh, I started with Linux in the early nineties and it was a trash fire and I loved FreeBSD 4.x and haven’t looked back.” And “I hear Linux lacks proper jail support” and “Linux has a broken security model” and “SystemD is going to be the death of Linux.” Are these actually helpful points of view?

Not everyone is suffering under Linux. I’ve been putting Linux under punishing workloads for decades now and the important bit of wisdom I want to remind you of is this:

You cannot support something you are not familiar with.

Linux still makes a great desktop, server and embedded system. It’s got great tools. It runs zillions of servers and there is not a mass exodus to FreeBSD. To give a BSD or Linux box the kind of uptime, performance or security you want takes years of experience and knowledge of the subtleties of the platform. If you’re actually considering switching, start by doing a pilot project: build an evaluation stack and see if your workflow matches up to it. Bet you a buck the first thing you find is that paths, utilities and configuration defaults are going to get in your way for months until you build up your mental tree.

FreeBSD zealots treat Linux users much the way Linux zealots treat Windows users: often haughty, rude, and dismissive. Let’s none of us be that way.

Ubuntu 15.10 and ZFS


Some quick thoughts on doing this for my workstation:

  1. I have six 2TB drives in a RAID-10-style ZFS pool, and they would not import on 15.10, because 15.10 ships with (or tries to) its own ZFS
  2. I decided on /boot, swap, /, mdadm partitions for OS install
  3. needed to do 15.10 server cmdline install for mdadm raid setup
  4. glad to not have attempted zfs-on-root for this distro
  5. setup three extra partitions on my two 120GB SSDS, using them for
    1. tank zil
    2. tank l2arc
    3. home pool (second pool named homer :-)
  6. Do not attempt to use PPA ubuntu/zfs-stable anymore, 15.10 will not accept it and it WILL mess with your zfsutils-linux recommended install.
  7. Somehow it ended up installing zfs-fuse: while I was installing spl-dkms and zfs-dkms and uninstalling zfsutils-linux, apt-get chose it. Why?
  8. I purged zfsutils, zfs/spl-dkms and did git clones on github/zfsonlinux/{spl,zfs}
  9. All of this required starting off with build-essential, autotools, automake, auto… and libuuid and … stuff. Not difficult to chase down.
  10. ./autogen.sh, ./configure && make -j10 && make install for spl and zfs
  11. updated /etc/rc.local to modprobe spl zfs, zpool import tank; zpool import homer; zfs mount tank ; zfs mount homer
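Written out, the /etc/rc.local additions from step 11 look roughly like this (pool names are from the post; modprobe is split per module since plain modprobe loads one module at a time):

```
modprobe spl
modprobe zfs
zpool import tank
zpool import homer
zfs mount tank
zfs mount homer
```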

I am able to reboot and import without pool version warnings.

Why did I move off 14.04.x? I really want to do video editing for kid videos and all the video packages for 14.04 are way ancient.


  1. get first server install working
  2. install lubuntu-desktop
  3. replace /etc/default/grub hidden -> false
  4. default/grub -> replace “splash quiet” with “nofb”
  5. once LXDE displays, then I do a “apt-get install mate-desktop-*” which seems to work just fine.
  6. Why? lubuntu-desktop flawlessly sets up Xorg dependencies and gives me a desktop the first time without messing around wondering why mate-desktop didn’t.
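Steps 3 and 4 amount to edits like these in /etc/default/grub. The key names here are my assumption from a stock install, so check your file, and run update-grub afterwards:

```
GRUB_HIDDEN_TIMEOUT_QUIET=false
# was: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX_DEFAULT="nofb"
```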

Merry Xmas!

ZFS on Linux machine

Beavertail Cactus [Wikipedia]

Here is my ZFS on Linux story, and some of you might have seen these pictures when I started this project last year: I recycled an old Athlon system from work and put a 35-watt AMD A2 processor with 8GB of 1600MHz RAM on an Asus mobo in it. I name my home systems after cacti, and after I installed Ubuntu 12.04 on this one, I named it Beavertail.

My previous experiences with storage systems involved them dying from heat. So I decided I would avoid full sized drives and stick with laptop drives and boot off an SSD. I have the SSD partitioned with a /boot, root, and two more partitions for ZIL and L2ARC. The bulk of the storage is a mix of 750GB Hitachi and 500GB Toshiba laptop hard drives, 16 total.  I have lost two drives in this system, which I would label “normal drive attrition.” Boot drive is a 128GB OCZ Vertex 2.


Half the drives are on the bottom, and half are on top. At work I have access to gobs of full-height card brackets and this is what I built drive cages out of.

To get all the drives wired up, I started with a bunch of 1x and 2x PCIe SATA expanders and used up all my mobo SATA ports, but by the time I got to about 12 drives I only had a PCI slot left, so I had to use that. Looking at my disk utilization in iostat -Nx and dstat --disk-util, it was plainly clear that I had a swath of underperforming drives, and they were all connected to the slowest controller: the PCI one.

Supermicro HBA 8-port SAS controllers

I saved up and remedied that by purchasing two SuperMicro SAS HBAs with Marvell chipsets. They are only 3Gbps SATA (equivalent), but they each control eight drives, and they do so consistently. They each take eight PCIe lanes, a great use for the two 16x PCIe slots on the mobo.

02:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev c3)
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

It took me a while to track down my network bandwidth issues. The problem was my motherboard: it has an onboard Realtek chipset. It would max out at 500Mbps download and 250Mbps upload… and very often wedge the system. I got a PCIe 1x Intel card and got a good clean 955Mbps both ways out of that with one iperf stream, and 985+Mbps with two iperf streams. To actually achieve this, I needed to put an Intel NIC in my workstation as well. (My switch is a 16-port unmanaged Zyxel.)

Eight drives on top

I am able to push close to full network capacity to Beavertail. As you can see, the results speak for themselves: the screenie below shows iftop displaying better than 880Mbps, and I saw it grab 910Mbps during this backup. Clearly part of the success is having a Samsung 840 EVO in my laptop, but having a stripe of four raidz vdevs clearly allows plenty of IO headroom.

910Mbps transfer from laptop to NAS.


Here are some other nerdy stats, mostly on how my drives are arranged:

 > zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
  scan: scrub repaired 0 in 4h43m with 0 errors on Sat Sep  6 00:13:22 2014

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9PDPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9SBBC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6GMGC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95REC  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9LH9C  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95JPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6LUDC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G5PXYC  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUOS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUNS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT11T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT17T            ONLINE       0     0     0
          raidz1-3                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT12T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT13T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT14T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT0ZT            ONLINE       0     0     0
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part5   ONLINE       0     0     0
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part6   ONLINE       0     0     0

errors: No known data errors

And to finish up, this system has withstood a series of in-place Ubuntu upgrades; it is now running 14.04. My advice on this, and on Linux kernels, is this:

  • Do not rush to install new mainline kernels; you have to wait for the spl and zfs dkms packages to sync up with mainline and for the updates to go out through the ubuntu-zfs PPA.
  • If you do a dist-upgrade and reboot and your zpool does not come back, this is easily fixed by reinstalling ubuntu-zfs: apt-get install --reinstall ubuntu-zfs. This will re-link your kernel modules and you should be good to go.

Like I said, this has been working for four releases of Ubuntu for me, along with replacing controllers and drives. My only complaint is that doing sequences of small file operations on it tends to bring the speed down a lot (or did; I have not reproduced it on 14.04 yet). But for streaming large files, I get massive throughput… which is great for my large photo collection!

Happy Penguins at LinuxFest Northwest 2014 Party

I felt like I had a lot of camera issues at the start (the first being the need to go back home to retrieve all my SD cards, pppt). Indoor pictures are always difficult, so to make these look less like snapshots, I squirted a lot of Sriracha on them. Here are your spicy penguin pals: