
Cold Storage

I celebrated World Backup Day by increasing the resiliency of data in my life.

Four encrypted 2TB hard drives, stored in a Pelican 1200, with Abloy Protec2 PL 321 padlocks as tamper-evident seals. Having everything that matters stored in git-annex makes projects like this simple: just clone the repositories, define the preferred content expressions, and watch the magic happen.
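
For a new drive, that workflow looks something like the following sketch (the mount point and repository path are hypothetical, and the preferred content expression will vary per drive):

$ git clone ~/annex /media/coldstorage/annex
$ cd /media/coldstorage/annex
$ git annex wanted here "include=*"
$ git annex sync --content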

Isolating Chrome Apps with Firejail

Despite its terse man page, Chromium provides a large number of command-line options. One of these is --app-id, which tells Chromium to directly launch a specific Chrome App. Combined with the isolation provided by Firejail, this makes using Chrome Apps a much more enjoyable experience.

For instance, I use the Signal Desktop app. When I received the beta invite, I created a new directory to act as the home directory for the sandbox that would run the app.

$ mkdir -p ~/.chromium-apps/signal

I then launched a sandboxed browser using that directory and installed the app.

$ firejail --private=~/.chromium-apps/signal /usr/bin/chromium

After the app was installed, I added an alias to my zsh configuration to launch the app directly.

alias signal="firejail --private=~/.chromium-apps/signal /usr/bin/chromium --app-id=bikioccmkafdpakkkcpdbppfkghcmihk"

To launch the application I can now simply run signal, just as if it were a normal desktop application. I don't have to worry about it accessing private information, or even care that it is actually running on Chromium underneath. I use this method daily for a number of different Chrome Apps, all in different isolated directories in ~/.chromium-apps. As someone who is not otherwise a Chromium user, I find this makes the prospect of running a Chrome App much more attractive.
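
Defining one alias per app works, but the pattern generalizes. A small zsh function along these lines (the function name is my own invention) avoids repeating the boilerplate:

# Launch a Chrome App in its own Firejail sandbox, creating
# the sandbox home directory on first use.
chrome-app() {
    mkdir -p "$HOME/.chromium-apps/$1"
    firejail --private="$HOME/.chromium-apps/$1" /usr/bin/chromium --app-id="$2"
}

With that in place, chrome-app signal bikioccmkafdpakkkcpdbppfkghcmihk launches the same sandbox as the alias above.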

Firewarden

I’ve previously mentioned the Firejail sandbox program. It’s an incredibly useful tool. I use it to jail pretty much all the things. Over the past six months, I’ve found that one of my primary use cases for Firejail is to create private, temporary sandboxes which can be destroyed upon closure. I wrote Firewarden, a simple wrapper script around Firejail, to reduce the keystrokes needed for this type of use.

Disposable Browsers

Prepend any program with firewarden and it will launch the program inside a private Firejail sandbox. I use Firewarden to launch disposable Chromium instances dozens of times per day. When the program passed to Firewarden is chromium or google-chrome, Firewarden will add the appropriate options to the browser to suppress the first-run greeting, disable the default browser check, and prevent the WebRTC IP leak. The following two commands are equivalent:

$ firejail --private chromium --no-first-run --no-default-browser-check --enforce-webrtc-ip-permission-check
$ firewarden chromium

Firewarden also provides a few options to request a more restricted Firejail sandbox. For instance, you may want to open a URL in Chromium, but also use an isolated network namespace and create a new /dev directory (which has the effect of disabling access to webcams, speakers and microphones). The following two commands are equivalent:

$ firejail --private --net=enp0s25 --netfilter --private-dev chromium --no-first-run --no-default-browser-check --enforce-webrtc-ip-permission-check https://example.org
$ firewarden -d -i chromium https://example.org

In this example, Firewarden used NetworkManager to discover that enp0s25 was the first connected device, so it used that for the network namespace.
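
One plausible way to find that first connected device with nmcli (a sketch of the idea, not necessarily Firewarden's exact implementation):

$ nmcli --terse --fields DEVICE,STATE device | grep ':connected$' | head -n 1 | cut -d: -f1
enp0s25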

Local Files

Firewarden isn't just useful for browsers. It can be used with any program, but my other major use case is safely viewing local files. File types like PDF and JPG can include malicious code and are a primary vector for malware. I use zathura as my PDF reader: it's a simple, lightweight viewer with nowhere near as many potential vulnerabilities as something like Adobe Acrobat, but I still think it prudent to take extra precautions when viewing PDF files downloaded from the internet.

If Firewarden thinks the final argument is a local file, it will create a new directory in /tmp, copy the file into it, and launch the program in a sandbox using the new temporary directory as the user home directory [1]. Firewarden will also default to creating a new /dev directory when viewing local files, as well as disabling network access (thus preventing a malicious file from phoning home). When the program has closed, Firewarden removes the temporary directory and its contents.

$ firewarden zathura notatrap.pdf

The above command is the equivalent of:

$ now=$(date --iso-8601=seconds)
$ mkdir -p /tmp/$USER/firewarden/$now
$ cp notatrap.pdf /tmp/$USER/firewarden/$now/
$ firejail --net=none --private-dev --private=/tmp/$USER/firewarden/$now zathura notatrap.pdf
$ rm -r /tmp/$USER/firewarden/$now

I use this functionality numerous times throughout the day. I also include Firewarden in my mailcap, which goes a long way to reducing the dangers of email attachments.
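
For example, a mailcap entry along these lines (a sketch; substitute your preferred viewer) hands PDF attachments to a sandboxed zathura:

application/pdf; firewarden zathura %s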

Firewarden doesn’t add any new functionality to Firejail, but it does make it easier to take advantage of some of the great features that Firejail provides. Check it out if you’re interested in reducing the keystrokes required to Jail All The Things™.

Notes

  1. This is similar to using Firejail's old --private-home option, which was removed in 0.9.38. However, that option was limited to files in the user's home directory. It couldn't be easily used with a file from a USB drive mounted at /media/usb, for instance.

Using Network Trust

Work continues on Spark, my Arch Linux provisioning system. As the project has progressed, it has created some useful tools that I’ve spun off into their own projects. One of those is nmtrust.

The idea is simple. As laptop users, we frequently connect our machines to a variety of networks. Some of those networks we trust, others we don’t. I trust my home and work networks because I administer both of them. I don’t trust networks at cafes, hotels or airports, but sometimes I still want to use them. There are certain services I want to run when connected to trusted networks: mail syncing, file syncing, online backups, instant messaging and the like. I don’t want to run these on untrusted networks, either out of concern over the potential leak of private information or simply to keep my network footprint small.

The solution is equally simple. I use NetworkManager to manage networks. NetworkManager creates a profile for every network connection. Every profile is assigned a UUID. I can decide which networks I want to trust, look up their UUIDs with nmcli conn, and put those strings into a file somewhere. I keep them in /usr/local/etc/trusted_networks.
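
The file format is simply one UUID per line. For example (these UUIDs are invented for illustration):

$ cat /usr/local/etc/trusted_networks
ee4088e0-3771-4d37-9e5c-f9e6ed3d232f
8a4a0a38-40ef-4fe3-9bbb-556bd7bbd76a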

nmtrust is a small shell script which gets the UUIDs of all the active connections from NetworkManager and compares them to those in the trusted network file. It returns a different exit code depending on what it finds: 0 if all connections are trusted, 3 if one or more connections are untrusted, and 4 if there are no active connections.
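
The core of that check is simple enough to sketch in a few lines of shell (an illustration of the idea, not the actual nmtrust source):

#!/bin/sh
TRUSTED=/usr/local/etc/trusted_networks

# UUIDs of all active connections, one per line.
active=$(nmcli --terse --fields UUID connection show --active)
[ -z "$active" ] && exit 4

# Any active connection missing from the trusted list fails the check.
for uuid in $active; do
    grep -qx "$uuid" "$TRUSTED" || exit 3
done
exit 0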

This makes it extremely easy to write a script that executes nmtrust and takes certain action based on the exit code. For example, you may have a network backup script netbackup.sh that is executed every hour by cron. However, you only want the script to run when you are connected to a network that you trust.

#!/bin/sh

# Execute backups only if nmtrust reports that all
# current connections are trusted.
if nmtrust; then
    netbackup.sh
fi

On machines running systemd, most of the things that you want to start and stop based on the network are probably described by units. ttoggle is another small shell script which uses nmtrust to start and stop these units. The units that should only be run on trusted networks are placed into another file. I keep them in /usr/local/etc/trusted_units. ttoggle executes nmtrust and starts or stops everything in the trusted unit file based on the result.

For example, I have a timer mailsync.timer that periodically sends and receives my mail. I only want to run this on trusted networks, so I place it in the trusted unit file. If ttoggle is executed when I’m connected to a trusted network, it will start the timer. If it is run when I’m on an untrusted network or offline, it will stop the timer, ensuring my machine makes no connection to my IMAP or SMTP servers.
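
The trusted unit file follows the same one-entry-per-line format (the second unit here is just a plausible example):

$ cat /usr/local/etc/trusted_units
mailsync.timer
syncthing.service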

These scripts are easy to use, but they really should be automated so that nobody has to think about them. Fortunately, NetworkManager provides a dispatcher framework that we can hook into. Once the hook is installed, the dispatcher will execute ttoggle whenever a connection is activated or deactivated.
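
A dispatcher hook is an executable script in /etc/NetworkManager/dispatcher.d/ that NetworkManager calls with the interface and action as arguments. A minimal version might look like this (the file name and ttoggle path are assumptions):

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/10-ttoggle

case "$2" in
    up|down)
        /usr/local/bin/ttoggle
        ;;
esac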

The result of all of this is that trusted units are automatically started whenever all active network connections are trusted. Any other time, the trusted units are stopped. I can connect to shady public wifi without worrying about network services that may compromise my privacy running in the background. I can connect to my normal networks without needing to remember to start mail syncing, backups, etc.

All of this is baked into Spark, but it's really just two short shell scripts and a NetworkManager dispatcher. It provides a flexible framework to help preserve privacy that is fairly easy to use. If you use NetworkManager, try it out.

Spark: Arch Linux Provisioning with Ansible

Arch has been my Linux distribution of choice for the past 5 years or so. It's a fairly simple and versatile distribution that leaves most choices up to the user, and then gets out of your way. Although I think it makes for a better end experience, the Arch Way does mean that it takes a bit more time to get a working desktop environment up and running.

At work I use Ansible to automate the provisioning of FreeBSD servers. It makes life easier by not only automating the provisioning of machines, but also by serving as reference documentation for The One True Way™. After a short time using Ansible to build servers, the idea of creating an Ansible playbook to provision my Arch desktop became attractive: I could pop a new drive into a machine, perform a basic Arch install, run the Ansible playbook and, in a very short period of time, have a fresh working environment – all without needing to worry about recalling arcane system configuration or which obscure packages I want installed. I found a few projects out there that had this same goal, but none that did things in the way I wanted them done. So I built my own.

Spark is an Ansible playbook meant to provision a personal machine running Arch Linux. It is intended to run locally on a fresh Arch install (i.e., taking the place of any post-installation), but due to Ansible's idempotent nature it may also be run on top of an already configured machine.

My machine is a Thinkpad, so Spark includes some tasks which are specific to laptops in general and others which only apply to Thinkpads. These tasks are tagged and isolated into their own roles, making it easy to use Spark to build desktops on other hardware. A community-contributed Macbook role exists to support Apple hardware. In fact, everything is tagged, and most of the user-specific stuff is accomplished with variables. The idea is that if you agree with my basic assumptions about what a desktop environment should be, you can use Spark to build your machine without editing much outside of the variables and perhaps the playbook.
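
For instance, building a desktop on non-Thinkpad hardware might look like this (the playbook file name and tag names are assumptions; check the repository for the real ones):

$ ansible-playbook playbook.yml --ask-become-pass --skip-tags "thinkpad,laptop"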

The roles gather tasks into logical groups, and the tasks themselves are fairly simple. A quick skim through the repository will provide an understanding of everything Spark will do in a matter of minutes. Basically: a simple i3 desktop environment, with GUI programs limited to web browsers and a few media and office applications (like GIMP and LibreOffice), everything else in the terminal, most network applications jailed with Firejail, and all the annoying laptop tasks like lid closure events and battery management automated away. If you're familiar with my dotfiles, there won't be any surprises.

Included in Spark is a file which describes how I install Arch. It is extremely brief, but provides everything needed to perform a basic installation – including full disk encryption with encrypted /boot – which can then be filled out with Ansible. I literally copy/paste from the doc when installing Arch. It takes about 15 minutes to complete the installation. Running Ansible after that takes about an hour, but requires no interaction after entering a passphrase for the SSH key used to clone the dotfiles. Combined with backups of the data in my home dir, this allows me to go from zero to hero in less than a couple hours without needing to really think about it.

If you use Arch, fork the repository and try it out.

Jailing the Browser

The web browser is one of our computers' primary means of interaction with the unwashed masses. Combined with the unfortunately large attack surface of modern browsers, this makes a sandbox which does not depend on the browser itself an attractive idea.

Firejail is a simple, lightweight sandbox that uses Linux namespaces to prevent programs from accessing things they do not need.

Firejail ships with default profiles for Firefox and Chromium. These profiles drop capabilities, filter syscalls, and prevent access to common directories like /sbin, ~/.gnupg and ~/.ssh. This is a good start, but I see little reason to give the browser access to much of anything in my home directory.

The --private flag instructs Firejail to mount a new user home directory in a temporary filesystem. The directory is empty and all changes are discarded when the sandbox is closed – think of it as a more effective private browsing or incognito mode that also resets your browser to factory defaults.

$ firejail --private firefox

A more useful option for normal browsing is to specify a directory that Firejail should use as the user home. This allows you to keep a consistent browser profile and downloads directory, but still prevents the browser from accessing anything else in the normal user home.

$ mkdir ~/firefox
$ mv ~/.mozilla ~/firefox/
$ firejail --private=~/firefox firefox

This is the method I default to for my browsing. I've created my own Firejail profile for Firefox at ~/.config/firejail/firefox.profile which implements this.

include /etc/firejail/disable-mgmt.inc
caps.drop all
seccomp
netfilter
noroot

# Use ~/firefox as user home
private ~/firefox

The only inconvenience I’ve discovered with this is that linking my Vimperator configuration files into the directory from my dotfiles repository creates a dangling link from the perspective of anything running within the jail. Since it cannot access my real home directory, it cannot see the link target in the ~/.dotfiles directory. I have to copy the configuration files into ~/firefox and then manually keep them in sync. I modify these files infrequently enough that for me this is worth the trade-off.

The temporary filesystem provided by --private is still useful when accessing websites that are especially sensitive (such as a financial institution) or especially shady. In my normal browser profiles, I have a number of extensions installed that block ads, disable scripts, etc. If these extensions completely break a website, and I don’t want to take the time to figure out which of the dozens of things I’m blocking are required for the website to function, I’ll just spin up a sandboxed browser with the --private flag, comfortable in the knowledge that whatever dirty scripts the site is running are limited in their ability to harm me.

I perform something like 90% of my web browsing in Firefox, but use Chromium for various tasks throughout the day. Both run in Firejail sandboxes, helping to keep me safe when surfing the information superhighway. Other programs, like torrent applications and PDF readers, also make good candidates for running within Firejail.

Optical Backups of Photo Archives

I store my photos in git-annex. A full copy of the annex exists on my laptop and on an external drive. Encrypted copies of all of my photos are stored on Amazon S3 (which I pay for) and box.com (which provides 50GB for free) via git-annex special remotes. The photos are backed up to an external drive daily with the rest of my laptop hard drive via backitup.sh and cryptshot. My entire laptop hard drive is also mirrored monthly to an external drive stored off-site.

(The majority of my photos are also on Flickr, but I don’t consider that a backup or even reliable storage.)

All of this is what I consider to be the bare minimum for any redundant data storage. Photos have special value, above the value that I assign to most other data. This value only increases with age. As such they require an additional backup method, but due to the size of my collection I want to avoid backup methods that involve paying for more online storage, such as Tarsnap.

I choose optical discs as the medium for my photo backups. This has the advantage of being read-only, which makes it more difficult for accidental deletions or corruption to propagate through the backup system. DVD-Rs have a capacity of 4.7 GB and cost around $0.25 per disc. Their life expectancy varies, but 10 years seems to be a reasonable low estimate.

Preparation

I keep all of my photos in year-based directories. At the beginning of every year, the previous year’s directory is burned to a DVD.

Certain years contain few enough photos that the entire year can fit on a single DVD. More recent years have enough photos of a high enough resolution that they require multiple DVDs.

Archive

My first step is to build a compressed archive of each year. I choose tar and bzip2 compression for this because they’re simple and reliable.

$ cd ~/pictures
$ tar cjhf ~/tmp/pictures/2012.tar.bz 2012

If the archive is larger than 3.7 GB, it needs to be split into multiple files. The resulting files will be burned to different discs. The capacity of a DVD is 4.7 GB, but I place the upper file limit at 3.7 GB so that the DVD has a minimum of 20% of its capacity available. This will be filled with parity information later on for redundancy.

$ split -d -b 3700M 2012.tar.bz 2012.tar.bz.

Encrypt

Leaving unencrypted data around is bad form. The archive (or each of the files resulting from splitting the large archive) is next encrypted and signed with GnuPG.

$ gpg -eo 2012.tar.bz.gpg 2012.tar.bz
$ gpg -bo 2012.tar.bz.gpg.sig 2012.tar.bz.gpg

Imaging

The encrypted archive and the detached signature of the encrypted archive are what will be burned to the disc. (Or, in the case of a large archive, the encrypted splits of the full archive and the associated signatures will be burned to one disc per split/signature combination.) Rather than burning them directly, an image is created first.

$ mkisofs -V "Photos: 2012 1/1" -r -o 2012.iso 2012.tar.bz.gpg 2012.tar.bz.gpg.sig

If the year has a split archive requiring multiple discs, I modify the sequence number in the volume label. For example, a year requiring 3 discs will have the label Photos: 2012 1/3.
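
For a split year, the image for the first of three discs would be built along these lines (the output file name is arbitrary; the input names follow from the split and encryption steps above):

$ mkisofs -V "Photos: 2012 1/3" -r -o 2012_1.iso 2012.tar.bz.00.gpg 2012.tar.bz.00.gpg.sig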

Parity

When I began this project I knew that I wanted some sort of parity information for each disc so that I could potentially recover data from slightly damaged media. My initial idea was to use parchive via par2cmdline. Further research led me to dvdisaster which, despite being a GUI-only program, seemed more appropriate for this use case.

Both dvdisaster and parchive use the same Reed–Solomon error correction codes. dvdisaster is aimed at optical media and has the ability to place the error correction data on the disc by augmenting the disc image, as well as storing the data separately. It can also scan media for errors and assist in judging when the media is in danger of becoming defective. This makes it an attractive option for long-term storage.

I use dvdisaster with the RS02 error correction method, which augments the image before burning. Depending on the size of the original image, this will result in the disc having anywhere from 20% to 200% redundancy: RS02 fills the disc's leftover capacity with error correction data, so an image near my 3.7 GB limit gets the 20% floor established earlier, while a smaller image gets proportionally more.

Verify

After the image has been augmented, I mount it and verify the signature of the encrypted file on the disc against the local copy of the signature. I’ve never had the signatures not match, but performing this step makes me feel better.

$ sudo mount -o loop 2012.iso /mnt/disc
$ gpg --verify 2012.tar.bz.gpg.sig /mnt/disc/2012.tar.bz.gpg
$ sudo umount /mnt/disc

Burn

The final step is to burn the augmented image. I always burn discs at low speeds to diminish the chance of errors during the process.

$ cdrecord -v speed=4 dev=/dev/sr0 2012.iso

Similar to the optical backups of my password database, I burn two copies of each disc. One copy is stored off-site. This provides a reasonable level of assurance against any loss of my photos.

I occasionally find myself editing video for work.

In the past I've used OpenShot, but I was hit with some bugs that made me search for alternatives. Now I use Kdenlive, which I've found to be very capable. OpenShot currently has a Kickstarter campaign running to fund further development. Even though I'm no longer a user, I happily backed the project.