Updated Bash CPU Meter

In August, I did a post on a pretty simple bash cpu meter. This one is still Intel-only, but it records the range of frequencies used during each report.

#!/bin/bash
COLWID=17     # columns reserved for the per-CPU label at the left of each bar
MAXTURBO=4200 # maximum turbo frequency in MHz; bar widths are scaled against this
function find_lines {
   local i=0
   local j=0
   while read line; do
      if [[ $line =~ cpu\ MHz ]]; then
         cpu[i]=$j
         ((i++))
      fi
      ((j++))
   done < /proc/cpuinfo
}
function get_mhz() {
   mhz=()
   local cpulines=()
   local line hunks m L i c
   for i in `seq 1 15`; do
      c=0
      readarray cpulines < /proc/cpuinfo
      for L in "${cpu[@]}"; do
         line="${cpulines[$L]}"
         hunks=($line)
         m=${hunks[3]}
         mhz[c]+="${m%.*} "
         ((c++))
      done
      sleep 0.1s
   done
}
# main
find_lines
while [[ 1 = 1 ]]; do
   COLS=`tput cols`
   mhz=()
   get_mhz
   cpunum=0
   for A in ${cpu[@]}; do
      lowest=0
      highest=0
      for H in ${mhz[$cpunum]}; do
         (( $H > $highest )) && highest=$H
         (( $lowest == 0  )) && lowest=$H
         (( $H < $lowest  )) && lowest=$H
      done
      outline=""
      bars=$(( ($lowest * ($COLS-$COLWID)) / $MAXTURBO ))
      for (( L=1; L<=$bars; L++ )); do
         outline=$outline"-"
      done
      bars=$(( (((1+$highest)-$lowest) * ($COLS-$COLWID)) / $MAXTURBO ))
      for (( L=1; L<=$bars; L++ )); do
         outline=$outline"="
      done
      d=$(($cpunum+9900))
      echo "${d##99} $lowest-$highest $outline"
      ((cpunum++))
   done
   echo ""
   sleep 0.1
done
#
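
Each report prints one line per logical CPU: a zero-padded CPU number, the lowest and highest MHz seen over the roughly 1.5-second sample window, dashes out to the low mark, and equals signs spanning the observed range. On a hypothetical four-core box the output looks something like this:

00 1598-3401 -----------------------===========================
01 1598-1598 -----------------------
02 1598-2201 -----------------------=========
03 2903-3401 -------------------------------------------=======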

Emulating Web Browsers

Here’s a tedious task for you to consider. When you log in to your favorite website these days, you’re creating a JSON document and posting it to the service you’re logging into. Long gone are the days when you just posted simple form parameters in a plain form POST. I work on an emulation platform: one of our features is to use Perl to emulate hundreds of users logging into a captive portal. This requires an economy of memory and time, and creating hundreds of “firefox -p ~/.cache/firefox/xzf30d.userprofile” profile sessions is clearly not:

  1. memory efficient
  2. bound to specific network interfaces
  3. time efficient
  4. scriptable

So we use Perl. That means reading through the F12 -> Networking tab of your browser’s debugging window and emulating the AJAX post used to log in. Fun once. Wouldn’t want to live there.
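
For illustration only, here is the shape of the login request you end up reconstructing, shown with curl rather than the Perl we actually use; the URL, JSON fields, and interface name are made up, and the real ones come straight out of that Networking tab:

# Hypothetical captive-portal login post -- substitute the URL, JSON body,
# and interface with whatever your browser's Networking tab actually shows.
curl --interface eth1 \
     -H 'Content-Type: application/json' \
     -X POST 'https://portal.example.com/api/v1/login' \
     -d '{"username":"user0001","password":"secret","accept_tos":true}'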

Thoughts on Linux Video Editing

The other night I ran thru what might be the gamut of video editors because kdenlive was crashing on me. I’m going to run down this list.

Openshot: I found it doesn’t import clips and align tracks to a base audio track comfortably. Did not like having to right-click -> Properties for every fade-in and fade-out.

Pitivi: gave up on almost two years ago. Was not stable, was not useful for aligning base audio and video clips.

Blender: that looks like a lot to wade thru…too much so just to learn about the NLE features I’d use. Would love to…someday.

Avidemux: I guess it’s OK for processing clips or stitching a few together, but what I was interested in was its ability to batch-straighten and noise-remove (visual noise) video clips before importing them. That required spending about a day digging into why their example JavaScript on the wiki did not work, and I had to abandon it. Useful if you have one to three clips a day, but I can’t justify that much time to convert 20 clips once a year. Did not get any clips converted. I’m not a shy programmer, just skeptical.

Cinelerra: Oh my. Was this thing created in Borland Turbo Pascal in 1997? It does not support clip alignment except by typing in time offsets. That took two hours to determine…and I gave up.

LiVES: installed easily. Then importing clips and aligning audio became very confusing. Gave up in 45 minutes.

Lightworks: complained about not having Nvidia hardware. Gave up.

Flowblade: they polished this nicely. I really tried to align some video clips to audio, but moving things and adding spacers was so unintuitive I had to abandon ship.

Vivia: not in repo, looks abandoned.

Kdenlive: went back to the old trick of doing a FULL KDE install on 15.10 and created a new kde user. It finally stopped crashing when I made sure to chown -R kde:kde /home/video. Still wasn’t perfectly stable. Very disappointed by the lack of error messages: KDE software loves to spam the terminal console you start it from, yet things like “cannot move clip” or “cannot find clip to move” come with no root causes. Eventually produced a video with 15 clips, one base audio track, and 45 stills.

Kdenlive is probably the easiest to use NLE for Linux I’ve found.

Crazy Times with zxfer

I’ve started using zxfer, which @AllanJude referred me to recently. It does a nice job. My main difficulty was getting it to work efficiently over the 10 Mbps that is my effective DSL speed.

First, I made a copy of zxfer (zmxfer) that incorporates mbuffer. This is a crude hack, but it helps me get around the mysterious hanging transmits I have previously seen when sending zfs to zfs. Mbuffer seems to smooth this out well.

$LZFS send -i "$copyprev" "$copysrc" \| \
/usr/local/bin/mbuffer -q -s 128k -m 128M \
| /usr/local/bin/mbuffer -q -s 128k -m 128M \
| $RZFS receive $option_F "$copydest" \
|| { echo "Error when zfs send/receiving."; beep; exit 1; }

My off-site transfer script ssh’s to the primary backup server, generates a list of zfs filesystems to replicate, and copies that list back:

#!/bin/bash
CMDLIST=/tmp/zxfer_cmds.txt
XFPRE=/tmp/zxfer_batch_
SK=.ssh/backup_dsa
rm -f /tmp/zxfer_cmds*
if [ `ls /tmp/xfer-* 2>/dev/null | wc -l` -gt 0 ] ; then
   echo "Previous transfer in progress, bye."
   exit 1
fi
ssh -i $SK juno ./mk_fs_list.sh || \
   { echo "Crap, didn’t generate file-system list, bye."; exit 1; }
scp -i $SK juno:/tmp/vol_list /tmp || \
   { echo "Crap, didn’t copy file-system list, bye."; exit 1; }

We need to turn that list of filesystems into actual transfer commands. I create a file full of the commands to execute later:

while read FS ; do
   [ -z "$FS" ] && continue;
   PFS=`dirname $FS`
   if [ "$PFS" == "." ] ; then 
      PFS=tank
   else
      PFS="tank/$PFS"
   fi
   echo "[ ! -f /tmp/stop-xfer ] && sudo zmxfer -dFPsv \
 -O \"-i .ssh/backup_dsa ctbu@juno sudo \" \
 -N tank/$FS $PFS"
done < /tmp/vol_list > $CMDLIST
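
So for a hypothetical vol_list entry of home/alice, the generated line comes out roughly as:

[ ! -f /tmp/stop-xfer ] && sudo zmxfer -dFPsv -O "-i .ssh/backup_dsa ctbu@juno sudo " -N tank/home/alice tank/home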

You might think, “what a lot of sudo!” It’s good practice. I have dedicated a backup user to do this instead of root. I’ve configured the necessary sudoers file entries to make this work.

TIP: disable requiretty in sudoers [S.O.]
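
Something like the following in sudoers covers it; adjust the user name and command paths for your own boxes, since these are illustrative:

Defaults:ctbu !requiretty
ctbu ALL=(root) NOPASSWD: /sbin/zfs, /usr/local/sbin/zmxfer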

We want to increase the parallelism of these zfs transfers as much as possible. The time it takes to transfer zero-length snapshots in serial is prohibitive.

L=`wc -l < $CMDLIST`
Q=$[ $[ $L + 8 ] / 8 ]
split -l $Q $CMDLIST $XFPRE
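
With, say, 120 lines in $CMDLIST, Q works out to 16 and split produces eight chunk files of at most 16 commands each, one per screen window.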

Now we run these in screen, partly because ssh and sudo and mbuffer all tend to get a bit grouchy if they can’t agree on whether they really need a tty or not…and mostly because I want to keep tabs on where any transfer hangups are. This keeps script output collated. First we test for and fire up a detached screen as necessary:

screen -ls xfer | fgrep -q '.xfer' || screen -dmS xfer
sleep 1

And then we fill the screen with some commands. (We need to have a .screenrc that defines eight screens.)

i=0
for x in $XFPRE* ; do
   echo "rm /tmp/xfer-$i" >> $x
   cmd="touch /tmp/xfer-$i"
   screen -S xfer -p$i -X stuff $"$cmd\n"
   screen -S xfer -p$i -X stuff $"time bash -x $x\n"
   i=$[ $i + 1 ]
done
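
(The .screenrc mentioned above only has to pre-create the eight windows those -p$i flags point at; a minimal version might be nothing more than this, with arbitrary window titles:)

# ~/.screenrc for the xfer session -- pre-create windows 0 through 7
screen -t xfer0 0
screen -t xfer1 1
screen -t xfer2 2
screen -t xfer3 3
screen -t xfer4 4
screen -t xfer5 5
screen -t xfer6 6
screen -t xfer7 7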

Once this pxfer-lists.sh script of mine is run, you can connect to the screen using:

screen -S xfer -x

And watch the scripts do their stuff. (That stuff command is actually a real screen directive: it stuffs the given string into window $p as if you had typed it.)

I’ve been able to get my transfer time down from 140 minutes to about 14 minutes. I also reduced the scope of many of the backups by stopping hourly snapshots on file systems that don’t require them.