Here I start a series of posts about file system corruption on my home workstation.
I hear a lot of very proud talk from various BSD zealots, mostly about how if you want better uptime/throughput/correctness, you should drop Linux like a rock crawling with centipedes and pick up the shiny golden nugget of FreeBSD. This will not work for me.
While Linux might be “only good enough” in their eyes, in my eyes it’s been better than Windows for over 20 years. I hear quiet, off-hand mumbling about the state of graphics drivers and laptop support on FreeBSD. I hear they only recently got preliminary support for UEFI booting.
Various interviews with FreeBSD proponents often start with “oh, I started with Linux in the early nineties and it was a trash fire and I loved FreeBSD 4.x and haven’t looked back.” And “I hear Linux lacks proper jail support” and “Linux has a broken security model” and “SystemD is going to be the death of Linux.” Are these actually helpful points of view?
Not everyone is suffering under Linux. I’ve been putting Linux under punishing workloads for decades now and the important bit of wisdom I want to remind you of is this:
You cannot support something you are not familiar with.
Linux still makes a great desktop, server and embedded system. It’s got great tools. It runs zillions of servers and there is not a mass exodus to FreeBSD. To give a BSD or Linux box the kind of uptime, performance or security you want takes years of experience and knowledge of the subtleties of the platform. If you’re actually considering switching, start by doing a pilot project: build an evaluation stack and see if your workflow matches up to it. Bet you a buck the first thing you find is that paths, utilities and configuration defaults are going to get in your way for months until you build up your mental tree.
FreeBSD zealots treat Linux users much the way Linux zealots treat Windows users: often haughty, rude and dismissive. Let’s none of us be that way.
(Photo: before and after some fiddling in Darktable.)
Here’s a tedious task for you to consider. When you log in to your favorite website these days, you’re creating a JSON document and posting it to the service you’re logging into. Long gone are the days when you just posted simple form parameters from a plain HTML form. I work on an emulation platform: one of our features is to use Perl to emulate hundreds of users logging into a captive portal. This requires an economy of memory and time: creating hundreds of “firefox -p ~/.cache/firefox/xzf30d.userprofile” profile sessions is clearly not:
- memory efficient
- bound to specific network interfaces
- time efficient
So we use Perl. This requires reading through the F12 -> Network tab of your browser’s debugging window and emulating the AJAX login POST. Fun once. Wouldn’t want to live there.
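A minimal sketch of what that looks like in Perl, using only the core HTTP::Tiny and JSON::PP modules. The endpoint URL and the username/password field names here are hypothetical; in practice you copy them out of the actual request shown in the Network tab for your particular portal.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Tiny;  # core module since Perl 5.14
use JSON::PP;    # core module since Perl 5.14

# Hypothetical portal endpoint and credential fields -- read the real
# ones out of the browser's F12 -> Network tab for your captive portal.
my $url = 'https://portal.example.com/api/login';

# Build the same JSON document the browser sends on login.
my $payload = JSON::PP->new->canonical->encode({
    username => 'user001',
    password => 'secret',
});

# One lightweight HTTP client per emulated user beats one Firefox each.
my $http = HTTP::Tiny->new( timeout => 5 );
my $resp = $http->post( $url, {
    headers => { 'Content-Type' => 'application/json' },
    content => $payload,
} );
print "login status: $resp->{status}\n";
```

Scale this up with a loop (or forked workers) over hundreds of credential sets and you get the per-user memory cost of a hash and a socket, not a browser profile.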
There’s a decision to make when it’s time to script your backups: tar or rsync?