I’ve used my Larry v. Harry Bullitt with my Bicycle Revolutions EcoShopper trailer a surprising number of times. Just today I delivered about 200lbs of computer recycling to Haywire Computer in Bellingham. One of the conveniences is that I can yank the cotter pins on the trailer wheels and pack the wheels into the trailer and load the trailer into my bakfiets.
Here is my ZFS on Linux story, and some of you might have seen these pictures when I started this project last year: I recycled an old Athlon system from work and put in a 35-watt AMD A2 processor with 8GB of 1600MHz RAM on an Asus mobo. I name my home systems after cacti, and after I installed Ubuntu 12.04 on it, I named this one Beavertail.bitratchet.net.
My previous experiences with storage systems involved them dying from heat, so I decided to avoid full-sized drives, stick with laptop drives, and boot off an SSD. The SSD is partitioned with /boot, root, and two more partitions for the ZIL and L2ARC. The bulk of the storage is a mix of 750GB Hitachi and 500GB Toshiba laptop hard drives, 16 in total. I have lost two drives in this system, which I would label “normal drive attrition.” The boot drive is a 128GB OCZ Vertex 2.
Half the drives are on the bottom, and half are on top. At work I have access to gobs of full-height card brackets and this is what I built drive cages out of.
To get all the drives wired up, I started with a bunch of 1x and 2x PCIe SATA expanders and used up all my mobo SATA ports, but by the time I got to about 12 drives, I only had a PCI slot left, so I had to use that. Looking at disk utilization in iostat -Nx and dstat --disk-util, it was plain that I had a swath of underperforming drives, and they were all connected to the slow PCI controller.
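For anyone wanting to do the same kind of hunting, these are the two commands I mean (iostat comes from the sysstat package); a controller bottleneck shows up as one group of disks pinned at high utilization while moving less data than their siblings:

```shell
# Extended per-device stats, refreshed every 5 seconds; compare the
# throughput and %util columns across drives on different controllers.
iostat -Nx 5

# Per-disk utilization percentages side by side, one row per interval.
dstat --disk-util
```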
I saved up and remedied that by purchasing two SuperMicro SAS HBAs with Marvell chipsets. They are only 3G SATA (equivalent), but they each control eight drives, and they do so consistently. They take 8x PCIe lanes, which is a great use for the two 16x PCIe slots on the mobo.
02:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev c3)
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
It took me a while to track down my network bandwidth issues. The problem was my motherboard: it has an onboard Realtek chipset. It would max out at 500Mbps download and 250Mbps upload…and very often wedge the system. I got a PCIe 1x Intel card and got a good clean 955Mbps both ways out of that with one iperf stream, and 985+Mbps with two iperf streams. To actually achieve this, I needed to put an Intel NIC in my workstation as well. (My switch is a 16-port unmanaged Zyxel.)
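If you want to reproduce this kind of test, it is just classic iperf (v2 here) between the server and the workstation; the hostname is mine, so substitute your own:

```shell
# On the server (the storage box):
iperf -s

# On the client; a single TCP stream first, then -P 2 for two parallel
# streams, which is what pushed my link past 985 Mbps.
iperf -c beavertail.bitratchet.net
iperf -c beavertail.bitratchet.net -P 2
```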
I am able to push close to full network capacity to Beavertail, and the results speak for themselves: the screenie below shows iftop displaying better than 880Mbps, and I saw it grab 910Mbps during this backup. Part of the success is having a Samsung 840 EVO in my laptop, but a stripe of four raidz1 vdevs clearly allows plenty of IO headroom.
Here are some other nerdy stats, mostly on how my drives are arranged:
> zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 4h43m with 0 errors on Sat Sep  6 00:13:22 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9PDPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9SBBC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6GMGC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95REC  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G9LH9C  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G95JPC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G6LUDC  ONLINE       0     0     0
            ata-Hitachi_HTS547575A9E384_J2190059G5PXYC  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUOS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_X3EJSVUNS            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT11T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT17T            ONLINE       0     0     0
          raidz1-3                                      ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT12T            ONLINE       0     0     0
            ata-TOSHIBA_MQ01ABD050_933PTT13T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT14T            ONLINE       0     0     2
            ata-TOSHIBA_MQ01ABD050_933PTT0ZT            ONLINE       0     0     0
        logs
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part5   ONLINE       0     0     0
        cache
          ata-OCZ-AGILITY4_OCZ-77Z13FI634825PNW-part6   ONLINE       0     0     0

errors: No known data errors
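For reference, a pool shaped like this — four 4-disk raidz1 vdevs striped together, plus log and cache partitions on the SSD — could be built along these lines. This is a sketch, not my actual creation commands, and diskN and the SSD path are placeholders for the real /dev/disk/by-id names:

```shell
# Four raidz1 vdevs of four disks each; listing several vdevs in one
# zpool create stripes writes across all of them.
zpool create tank \
  raidz1 disk0  disk1  disk2  disk3 \
  raidz1 disk4  disk5  disk6  disk7 \
  raidz1 disk8  disk9  disk10 disk11 \
  raidz1 disk12 disk13 disk14 disk15

# ZIL (intent log) and L2ARC (cache) on two SSD partitions.
zpool add tank log   /dev/disk/by-id/ata-MY-SSD-part5
zpool add tank cache /dev/disk/by-id/ata-MY-SSD-part6
```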
And to finish up, this system has withstood a series of in-place Ubuntu upgrades and is now running 14.04. My advice on upgrades and Linux kernels:
- Do not rush to install new mainline kernels; you have to wait for the spl and zfs DKMS modules to sync up with mainline and for updates to come out through the ubuntu-zfs PPA.
- If you do a dist-upgrade and your zpool does not return on reboot, this is easily fixed by reinstalling ubuntu-zfs: apt-get install --reinstall ubuntu-zfs. This will re-link your kernel modules and you should be good to go.
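The recovery sequence I mean looks roughly like this (assuming the ubuntu-zfs metapackage from the PPA; the final import is only needed if the pool still has not appeared):

```shell
# Rebuild the spl/zfs DKMS modules against the new kernel.
sudo apt-get install --reinstall ubuntu-zfs

# Load the freshly built module and bring the pool back.
sudo modprobe zfs
sudo zpool import -a    # import any pools that did not auto-import
```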
Like I said, this has been working for four releases of Ubuntu, along with replacing controllers and drives. My only complaint is that sequences of small file operations tend to bring the speed down a lot (or did; I have not recreated this on 14.04 yet). But for streaming large files, I get massive throughput…which is great for my large photo collection!
These cleats have been with me for over a year. They got worn smooth from walking on them. The previous pair I left in for two years, and I had to drill one of them out. Advice for cleats: use some white lithium grease on the bolts when you install them, and use a long-handled hex wrench or ratchet to tighten them. When removing them, drip on some light oil like TriFlow to work into the seams, and wait at least ten minutes for the oil to soak in. Take something sharp like an awl, a pocket knife, or the tip of a new drywall screw to dig all the crap out of the bolt head. Even after that prep, you might not be able to fit your hex bit in. Next, try a Torx bit of the same size. The wear on the bolt head might have chewed up its insides, but if you can mallet a Torx bit in there, it should grip long enough to back the bolt out with a ratchet. Otherwise you will want to go to the screw-extractor bit in your drill.
Lesson: use that white lithium grease first when installing new bolts!
The FCC has a poor track record of getting net neutrality right. In January 2014, a federal court rejected the bulk of the FCC’s 2010 Open Internet order. The rules that the court threw out, however, were deeply flawed. Protecting net neutrality is a hard problem, with no easy solutions. … [W]e are asking folks to contact both the FCC and Congress and send a clear message: It’s our internet, we won’t let you damage it, and we won’t let you help others damage it.
This is why we need copyright reform, and why we need to invest in truly civilian Internet spaces. Things like YouTube are entirely taken for granted, but they exist at nothing more than the whim of corporations, with no actual rights of free speech on them.
The whole Google and “right to be forgotten” drama is another case where civilian government is unable to form a basis for a search engine ruled by civilian law. We act as if Google is a utility…and while it might be a de facto utility, it is not; it is ephemeral, only as permanent as its stock price.
This is a really great point, because national broadband providers act not only as utility monopolies (Internet service) but also as content providers. This should be considered a conflict of interest. A logical role for municipalities would be to become local broadband providers by buying out the POTS copper lines that the telcos are already selling off or ripping out as they install FiOS (optical fiber to the curb). This is a logical way of increasing competition and bolstering local economies.