ZFS on Linux machine

Beavertail Cactus [Wikipedia]

Here is my ZFS on Linux story, and some of you might have seen these pictures when I started this project last year: I recycled an old Athlon system from work and put a 35-watt AMD A2 processor with 8 GB of 1600 MHz RAM on an Asus mobo in it. I name my home systems after cacti, and after I installed Ubuntu 12.04 on it, I named this one Beavertail.bitratchet.net.

My previous experiences with storage systems involved them dying from heat, so I decided to avoid full-sized drives, stick with laptop drives, and boot off an SSD. The SSD is partitioned with /boot, root, and two more partitions for the ZIL and L2ARC. The bulk of the storage is a mix of 750GB Hitachi and 500GB Toshiba laptop hard drives, 16 in total. I have lost two drives in this system, which I would label “normal drive attrition.” The boot drive is a 128GB OCZ Vertex 2.
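
For reference, here is a sketch of how a pool of this shape gets created in one shot. The diskN and ssd names are placeholders of my own; the real pool uses the /dev/disk/by-id paths shown in the zpool status output further down:

    # Four 4-drive raidz1 vdevs striped together, with two SSD partitions
    # serving as the SLOG (ZIL) and L2ARC. Device names are hypothetical.
    zpool create tank \
        raidz1 disk1  disk2  disk3  disk4  \
        raidz1 disk5  disk6  disk7  disk8  \
        raidz1 disk9  disk10 disk11 disk12 \
        raidz1 disk13 disk14 disk15 disk16 \
        log    ssd-part5 \
        cache  ssd-part6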

Half the drives are on the bottom, and half are on top. At work I have access to gobs of full-height card brackets, which is what I built the drive cages out of.

To get all the drives wired up, I started with a bunch of 1x and 2x PCIe SATA expanders and used up all my mobo SATA ports, but by the time I got to about 12 drives, only a PCI slot was left, so I had to use that. Looking at disk utilization in iostat -Nx and dstat --disk-util, it was plain that I had a swath of underperforming drives, and they were all connected to the slow PCI controller.
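
If you want to run the same check, it is just these two commands (the five-second interval is my habit, not a requirement):

    # Extended per-device statistics every 5 seconds; a slow controller
    # shows up as drives pinned at high %util with poor throughput
    iostat -Nx 5

    # Compact per-disk utilization percentages, one column per drive
    dstat --disk-util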

Supermicro HBA 8-port SAS controllers

I saved up and remedied that by purchasing two SuperMicro SAS HBAs with Marvell chipsets. They are only 3G SATA (equivalent), but they each control eight drives, and they do so consistently. They each take eight PCIe lanes, which is a great use for the two 16x PCIe slots on the mobo. The relevant lspci lines:

02:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev c3)
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

It took me a while to track down my network bandwidth issues. The problem was my motherboard: it has an onboard Realtek chipset that would max out at 500Mbps download and 250Mbps upload…and very often wedge the system. I got a PCIe 1x Intel card, and out of that I got a good clean 955Mbps both ways with one iperf stream, and 985+Mbps with two iperf streams. To actually achieve this, I needed to put an Intel NIC in my workstation as well. (My switch is a 16-port unmanaged Zyxel.)
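
The iperf numbers above came from plain old iperf runs; a sketch, assuming the NAS resolves as beavertail:

    # On the NAS: run the server side
    iperf -s

    # On the workstation: one stream, then two parallel streams
    iperf -c beavertail
    iperf -c beavertail -P 2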

picture of drives

Eight drives on top

I am able to push close to full network capacity to Beavertail, and the results speak for themselves: the screenie below shows iftop displaying better than 880Mbps, and I saw it grab 910Mbps during this backup. Part of the success is having a Samsung 840 EVO in my laptop, but having a stripe of four raidz1 vdevs clearly leaves plenty of IO headroom.

screen capture of iftop

910Mbps transfer from laptop to NAS.
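
For the record, that view is nothing fancier than iftop running on the NAS; the interface name here is an assumption:

    # Live per-connection bandwidth on the gigabit interface
    sudo iftop -i eth0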

Here are some other nerdy stats, mostly on how my drives are arranged:

 > zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 4h43m with 0 errors on Sat Sep  6 00:13:22 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9PDPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9SBBC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6GMGC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95REC  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9LH9C  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95JPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6LUDC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G5PXYC  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUOS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUNS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT11T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT17T            ONLINE       0     0     0
          raidz1-3                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT12T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT13T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT14T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT0ZT            ONLINE       0     0     0
        logs
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part5   ONLINE       0     0     0
        cache
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part6   ONLINE       0     0     0

errors: No known data errors
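
Those two CKSUM counts on the Toshibas are what the status banner is on about. Following its suggested action is a one-liner, with an optional scrub afterwards to confirm everything still reads clean:

    # Clear the logged checksum errors on the pool
    zpool clear tank

    # Optionally re-verify every block afterwards
    zpool scrub tank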

And to finish up, this system has withstood a series of in-place Ubuntu upgrades and is now running 14.04. My advice on this, and on Linux kernels, is:

  • Do not rush to install new mainline kernels; you have to wait for the dkms and spl packages to sync up with mainline and for updates to come out through the ubuntu-zfs PPA.
  • If you do a dist-upgrade and reboot, and your zpool does not come back, it is easily fixed by reinstalling ubuntu-zfs: apt-get install --reinstall ubuntu-zfs. This will re-link your kernel modules and you should be good to go (the full sequence is sketched below).
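
The full recovery sequence is roughly the following; the modprobe and import steps are only needed if the pool still has not appeared after the reinstall:

    # Rebuild the spl/zfs kernel modules against the new kernel via DKMS
    sudo apt-get install --reinstall ubuntu-zfs

    # If the pool is still missing, load the module and re-import by hand
    sudo modprobe zfs
    sudo zpool import tank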

Like I said, this has been working for me across four releases of Ubuntu, along with replacing controllers and drives. My only complaint is that sequences of small file operations tend to bring the speed down a lot (or did; I have not reproduced it on 14.04 yet). But for streaming large files I get massive throughput…which is great for my large photo collection!

Pedal cleats

These cleats have been with me for over a year; they got worn smooth from walking on them. The previous pair I left in for two years, and I had to drill one of them out. Advice for cleats: use some white lith grease on the bolts when you install them, and use a long-handled hex wrench or ratchet to tighten them. When removing them, drip on some light oil like TriFlow to work into the seams, and wait at least ten minutes for it to soak in. Take something sharp like an awl, a pocket knife, or the tip of a new drywall screw to dig all the crap out of the bolt head. Even after that prep, you might not be able to fit your hex bit in. Next, try a Torx bit of the same size. The wear might have chewed up the inside of the bolt head, but if you can mallet a Torx bit in there, it should grip long enough to back the bolt out with a ratchet. Otherwise you will want to go to the screw-extractor bit in your drill.

Lesson: use that white lith grease first when applying new bolts!

Net Neutrality | Electronic Frontier Foundation

FTA:

The FCC has a poor track record of getting net neutrality right. In January 2014, a federal court rejected the bulk of the FCC’s 2010 Open Internet order. The rules that the court threw out, however, were deeply flawed. Protecting net neutrality is a hard problem, with no easy solutions. … [W]e are asking folks to contact both the FCC and Congress and send a clear message: It’s our internet, we won’t let you damage it, and we won’t let you help others damage it.

via Net Neutrality | Electronic Frontier Foundation.

When will we get true civilian Internet? — “Internet’s Own Boy” Briefly Knocked Off YouTube With Bogus DMCA Claim – Slashdot

This is why we need copyright reform, and we need to invest in truly civilian Internet spaces. Things like YouTube are entirely taken for granted, but they live at nothing more than the whim of corporations, with no actual rights of free speech on them.

The whole Google “right to be forgotten” drama is another case where civilian government is unable to form the basis for a search engine ruled by civilian law. We act as if Google is a utility…and while it might be a de-facto utility, it is not; it is ephemeral, only as permanent as its stock price.

“Internet’s Own Boy” Briefly Knocked Off YouTube With Bogus DMCA Claim – Slashdot.

Losing Net Neutrality Is The Symptom, Not The Problem: Now Is The Time To Focus On Real Competition | Techdirt

This is a really great point, because national broadband providers act not only as utility monopolies (Internet service) but also as content providers, which should be considered a conflict of interest. A logical role for municipalities would be to become local broadband providers by buying out the POTS copper lines that the telcos are selling off and ripping out as they install FiOS (optical fiber to the curb). This is a logical way to increase competition and bolster local economies.

Losing Net Neutrality Is The Symptom, Not The Problem: Now Is The Time To Focus On Real Competition | Techdirt.

Naomi Klein: How science is telling us all to revolt

This is a good fresh take on why our planet is about to boil: capitalism. How do you change capitalism? Social resistance: uprising.

So it stands to reason that, “if we’re thinking about the future of the earth, and the future of our coupling to the environment, we have to include resistance as part of that dynamics”. And that, Werner argued, is not a matter of opinion, but “really a geophysics problem”.

Fancy words for: people who are telling us to “be reasonable” are making money ruining the earth.

He isn’t saying that his research drove him to take action to stop a particular policy; he is saying that his research shows that our entire economic paradigm is a threat to ecological stability. And indeed that challenging this economic paradigm – through mass-movement counter-pressure – is humanity’s best shot at avoiding catastrophe.

via Naomi Klein: How science is telling us all to revolt.

Too Many Linux Distros? And Does Progress Justify Injustice?

Are there too many Linux distros? Michael Dominick, in Episode 23 of Coder Radio, clearly says that there are. This is not a fresh dilemma, and I’ve written about it in the past. It comes down to a basic point: in any community where proficiency is valued and lumber is free, you will never find two carpenters who build the same chair. The thousands of Linux distributions we find today are an evolutionary explosion, submitting solutions to the needs of the few in the long, long tail of possible requirements. And there will never be a tangible way to restrict this growth.

Commercial vendors have always had trouble with what they view as a slithering mass of mostly academic- and hobbyist-level distributions. Clearly, Red Hat, SuSE, and Debian/Canonical have been the most stable presences in this realm. Software availability is more ubiquitous, more consumer-focused, and more competitive than ever. The Linux Standard Base has always been a watered-down standard that never led the distros, but merely trailed them. And now, when OS vendors tend to proffer “blessed” software development paths, the LSB has yet to address this, or even think ahead to application development life-cycle standards. This is a failure of stewardship by Red Hat, Novell, and Canonical in general.

It has long been clear that each large Linux vendor has its own software architectural style. Seeing how much effort software developers put into competing on iOS and Android, desktop software for Linux is only going to be more neglected if these vendors don’t provide a unified approach to desktop application development. Just having a pretty desktop is insufficient. Just having an app store is insufficient.

Providing a reference-distribution application development life-cycle and a stable reference desktop application API is crucial for Linux to be competitive in the network-enabled software market. Sun took a stab at this vision, and Trolltech, RealBasic, and Bryan Lunduke have all taken their own stabs at it, but it seems roundly ignored by the large distribution providers. I think Bryan Lunduke’s Illumination Software Creator was an earnest answer to this problem, as were Qt and Java at various points in time. Michael Dominick articulates these issues very clearly on Coder Radio.

What kind of compromises are necessary for this to happen? Will distros need to focus less on architectural evolution and more on community economic development? Would the most indignant and proud developers have to get a) offended, b) dismissed, and c) ignored in favor of the vanilla approach?

And is this an unjust route? Michael Dominick and Chris Fisher were discussing Alan Cox’s upset over the Nvidia kernel code submissions, and it reveals a core tenet of the Free Software and Open Source origins of Linux: progress made in spite of the licensing of that progress is fundamentally damaging to the rights and mores of the project and its license. The wholly opposite side, but on the same axis of licensing and rights, is the realm of DRM. If Nvidia’s patches are unacceptable and have to live on only as tainted modules or binary blobs, is progress actually lost? Which is the greater loss, if by accepting proprietary property into a code-base you open the door to other corporate exceptions to a community effort? I can see both sides, and clearly there must be some compromise possible.

Linux, Free Software, and Open Source have deep academic roots as well. The merits of the “more correct language” and the “more refined approach” have always held sway within many of the developer communities behind the packages present in all Linux distros. What happens to this varied set of rarefied projects, hardly even a community by many standards, if Canonical and Red Hat unexpectedly decide that desktop applications need to standardize on Qt…and that other languages may not be available through the app stores on both their distros? Many would likewise call this unjust. Would it actually be progress? Would focusing on enrolling developers by restricting choice raise Linux adoption?

I think that there will always be a libre Linux ecosystem on the internet. But the benefits of providing a competitive, commerce-oriented (if not commercial, and certainly not proprietary) desktop/mobile software platform on Linux will never be known if it is never attempted. Isn’t it possible that the LSB could define a fully featured language and a development and deployment life-cycle that could enroll, or even entice, the developers and shops presently producing titles for iOS, Windows, and Android?