A Mountain Loop Tour

Late Saturday morning I loaded up my bike and took off for the mountains. The plan was to head out to Granite Falls, pick up the Mountain Loop Highway, and spend the night somewhere around Barlow Pass. The following day I would continue along the highway to Darrington and head back to Everett to complete the loop.

The way out of town is an unremarkable ride along the shoulders of two state highways. It doesn’t take long to reach Granite Falls, where the Mountain Loop Highway begins.

Granite Falls

I’ve driven the highway into the mountains countless times – many of my favorite trailheads are located off of it – but this was the first time I had pedalled it. The highway itself doesn’t get too high. The highest point is probably a little under 3,000 feet, but I was starting from sea level, so there were a few hills to climb. I shifted down to my granny gear and prepared to spin my way up, past the quarries and mines and the Masonic park that always gives me a slight feeling of discomfort. The National Forest begins near the top.

Entering the Forest

After entering the forest the highway runs alongside the south fork of the Sauk, which makes for a pleasant, flat-ish ride with very little motorized traffic to interfere. Most of the established campgrounds are closed this time of year – “closed” being National Forest Service code for empty and free.

South Fork Sauk

I had initially intended to spend the night somewhere along this stretch, but it was only mid-afternoon. I decided to head a bit further and camp around Barlow Pass. The pavement ends at the pass and the highway loops down and out of the mountains along a narrow gravel road. When I reached the pass the Sheriff was there, sorting through his ring of hundreds of keys, attempting to locate the one to unlock the gate at the start of the road to Monte Cristo. Apparently a young boy was hiking out there with his family and cut himself on a piece of old mining equipment. The search and rescue team were on their way up but would need to get through the gate. It must have been some cut to warrant the response.

Sheriff at Barlow Pass

There was plenty of light left at the pass. I amended my plan once more and took off down the unpaved section of road, deciding to camp somewhere in the woods below. The road is in pretty decent condition, by Forest Service standards. There aren’t too many potholes and the gravel isn’t laid on too thick. It makes for a bumpy ride, but my skinny 700x25 tires handled it fine. I pulled over for a quick dinner at 6PM and then got back on the road, intending to ride till 7PM. That would give me about 30 minutes to make camp before it got dark.

Dinner

At 7PM I found myself near a picnic area with a short trail that led into the woods to a small clearing with a table and enough room for a tarp. I pitched camp there and went to bed not long after.

The morning was chilly. I rose with the sun. After fueling up with oatmeal I shook the frost off my tarp and packed everything back on the bike. I only had a couple of bumpy miles to pedal before reaching the pavement again.

Back on the Pavement

I was perhaps too optimistic when I packed only fingerless gloves. The cold air caused a sharp pain in my fingertips. I stoked the furnace a bit by breaking off a few hunks from a bar of dark chocolate that I kept readily accessible in my frame bag. Either the chocolate or the rising sun worked.

North Fork Sauk

At Darrington I reached the end of the Mountain Loop Highway. The next leg of the trip would be along SR 530. It was the stretch I was looking forward to most. I had never ridden it before, but each time I drove along it the road struck me as a wonderful stretch of pavement to pedal. It travels through the foothills of the Cascades, along pastoral scenes set against mountain backdrops. Horses and cows outnumber motorized traffic.

Along the Arlington-Darrington Road

The road lived up to my expectations. I cruised along the meandering highway until reaching Arlington in the late morning. There I stopped at the Shire Cafe (in the same building as the Mirkwood game store, Mordor tattoo, and Rivendell hair salon) for a breakfast burrito – enough fuel for 53 miles. A block away I picked up the Centennial Trail.

Centennial Trail

I pedalled down the trail, through the woods of the county, before cutting west out to Marysville, from where I went over the sloughs and the Snohomish River to complete my loop back in Everett. In town I detoured to the waterfront farmer’s market to conclude the trip with an apple and baguette of victory.

Apple and Victory Baguette in Everett

This is the best ride I’ve yet done entirely within Snohomish County. The route looked roughly like this, though I think my actual mileage was closer to 130 miles. The trip could probably be done in a full day, but I enjoyed it as a leisurely overnight trip. It took me 26 hours, door to door.

The swinging pendulum of computing freedom.

Jacques Mattheij discusses the history of computing as a pendulum swinging between closed, walled gardens and open, free systems.

If my observations are correct then such a swing is about to happen, and this time we had better get it right. Things that point in the direction of a swing are an increasing awareness of ordinary computer users with respect to their privacy and who actually owns all that data. The fragmenting of the smartphone and tablet markets will lead to some more openness and at some point all the bits and pieces to create true open hardware will fall into place.

Remember that there are two possible outcomes, one where the internet successfully manages to cause a swing to the edge of freedom, and another where it is successfully co-opted by big money and governments in a concerted effort to give us all a subscription to online Life-As-A-Service where you will be beholden to some party for the ability to gain access to knowledge, information, the right to communicate and so on and where the act of programming will be as tightly regulated as the export of cryptography was.

Discussion on Hacker News.

Cryptshot: Automated, Encrypted Backups with rsnapshot

Earlier this year I switched from Duplicity to rsnapshot for my local backups. Duplicity uses a full + incremental backup scheme: the first time a backup is executed, all files are copied to the backup medium. Successive backups copy only the deltas of changed objects. Over time this results in a chain of deltas that need to be replayed when restoring from a backup. If a single delta is somehow corrupted, the whole chain is broken. To minimize the chances of this happening, the common practice is to complete a new full backup every so often – I usually do a full backup every 3 or 4 weeks. Completing a full backup takes time when you’re backing up hundreds of gigabytes, even over USB 3.0. It also takes up disk space. I keep around two full backups when using Duplicity, which means I’m using a little over twice as much space on the backup medium as what I’m backing up.

The backup scheme that rsnapshot uses is different. The first time it runs, it completes a full backup. Each time after that, it completes what could be considered a “full” backup, but unchanged files are not copied over. Instead, rsnapshot simply hard-links to the previously copied file. If you modify very large files regularly, this model may be inefficient, but for me – and I think for most users – it’s great. Backups are speedy, disk space usage on the backup medium isn’t too much more than the data being backed up, and I have multiple full backups that I can restore from.
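
To illustrate, a minimal rsnapshot.conf might look something like this (the paths and retention counts here are placeholders, and note that rsnapshot requires tabs, not spaces, between fields):

snapshot_root	/mnt/backup/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/

Each run of rsnapshot daily rotates the daily.0 through daily.6 snapshots; unchanged files in the newest snapshot are hard links into the previous one, which is how every snapshot can look like a full backup without consuming full-backup space.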

The great strength of Duplicity – and the great weakness of rsnapshot – is encryption. Duplicity uses GnuPG to encrypt backups, which makes it one of the few solutions appropriate for remote backups. In contrast, rsnapshot does no encryption. That makes it completely inappropriate for remote backups, but the shortcoming can be worked around when backing up locally.

My local backups are done to an external USB hard drive. Encrypting the drive is simple with LUKS and dm-crypt. For example, to encrypt /dev/sdb:

$ cryptsetup --cipher aes-xts-plain --key-size 512 --verify-passphrase luksFormat /dev/sdb

The device can then be opened, formatted, and mounted.

$ cryptsetup luksOpen /dev/sdb backup_drive
$ mkfs.ext4 -L backup /dev/mapper/backup_drive
$ mount /dev/mapper/backup_drive /mnt/backup/

At this point, the drive will be encrypted with a passphrase. To make it easier to mount programmatically, I also add a key file full of random data generated from /dev/urandom.

$ dd if=/dev/urandom of=/root/supersecretkey bs=1024 count=8
$ chmod 0400 /root/supersecretkey
$ cryptsetup luksAddKey /dev/sdb /root/supersecretkey

There are still a few considerations to address before backups to this encrypted drive can be completed automatically with no user interaction. Since the target is a USB drive and the source is a laptop, there’s a good chance that the drive won’t be plugged in when the scheduler kicks off the backup program. If it is plugged in, the drive needs to be decrypted before calling rsnapshot to do its thing. I wrote a wrapper script called cryptshot to address these issues.

Cryptshot is configured with the UUID of the target drive and the key file used to decrypt it. When executed, the first thing it does is check whether that UUID exists. If it does, the drive is plugged in and accessible. The script then decrypts the drive with the specified key file and mounts it. Finally, rsnapshot is called to execute the backup as usual. Any argument passed to cryptshot is passed along to rsnapshot, which makes cryptshot a drop-in replacement wherever encrypted rsnapshot backups are wanted. Where I previously called rsnapshot daily, I now call cryptshot daily. Everything after that point just works, with no interaction needed from me.
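
The script itself is short enough to read on GitHub, but its core logic amounts to something like the following sketch (the UUID is a placeholder, and the mount point and mapper name are assumptions, not necessarily what the real script uses):

#!/bin/sh
# Sketch of the cryptshot logic, not the actual script.
UUID="0000-0000-placeholder"       # hypothetical: UUID of the backup drive
KEYFILE="/root/supersecretkey"
TARGET="/dev/disk/by-uuid/$UUID"

# Only proceed if the backup drive is actually plugged in.
if [ -e "$TARGET" ]; then
    cryptsetup --key-file "$KEYFILE" luksOpen "$TARGET" backup_drive
    mount /dev/mapper/backup_drive /mnt/backup
    rsnapshot "$@"
    umount /mnt/backup
    cryptsetup luksClose backup_drive
fi

Scheduled from cron, a missing drive simply means the backup skips that run.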

If you’re interested in cryptshot, you can download it directly from GitHub. The script could easily be modified to execute a backup program other than rsnapshot. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.

I spent the equinox in the Glacier Peak Wilderness.

A few days. Me and a pack and some mountains.

Goodbye summer, hello fall.

Ruck

Currently reading: Journey to the Centre of the Earth by Richard and Nicholas Crane.

If you’re at all interested in bikes, lightweight backpacking, or a combination thereof, you must read this book.

In 1986, Dick and Nick rode lightweight, steel race bikes from the Bay of Bengal across Bangladesh, up and over the Himalaya, across the Tibetan Plateau, and through the Gobi desert to the point of the earth furthest from the sea. They were sawing their toothbrushes in half and cutting extraneous buckles off of their panniers before “bikepacking” (or “ultralight backpacking”) was a thing. The appendix includes a complete gear list and relevant discussion.

A snowy night in Tibet

The book is currently out of print, but used copies can be found. A PDF version is available here.

I've long been an 8-speed man on my bikes.

More gears seem unnecessary, but the market has other ideas. I wanted to upgrade my brifters. There were practically no options, so last week I made the jump to 9-speed. Now I’m running an 11-32 9-speed cassette and a 30/42/52 triple chainring.

9 Speed

Tarsnapper: Managing Tarsnap Backups

Tarsnap bills itself as “online backups for the truly paranoid”. I began using the service last January, and it quickly became my preferred way to back up to the cloud. It stores data on Amazon S3 and costs $0.30 per GB per month for storage and $0.30 per GB for bandwidth. Those prices are higher than using Amazon S3 directly, but Tarsnap implements some impressive data de-duplication and compression that results in the service costing very little. For example, I currently have 67 different archives stored in Tarsnap from my laptop. They total 46GB in size. De-duplicated, that comes out to 1.9GB. After compression, I only pay to store 1.4GB – about $0.42 per month at those rates. Peanuts.

Of course, the primary requirement for any online backup service is encryption. Tarsnap delivers. And, most importantly, the Tarsnap client is open source, so the claims of encryption can actually be verified by the user. The majority of for-profit online backup services out there fail on this critical point.

So Tarsnap is amazing and you should use it. The client follows the Unix philosophy: “do one thing and do it well”. It’s basically like tar. It can create archives, read the contents of an archive, extract archives, and delete archives. For someone coming from an application like Duplicity, the disadvantage to the Tarsnap client is that it doesn’t include any way to automatically manage backups. You can’t tell Tarsnap how many copies of a backup you wish to keep, or how long backups should be allowed to age before deletion.
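
The tar-like interface means the basic operations look familiar. For example (the archive name is invented for illustration):

$ tarsnap -c -f docs-2012-10-04 ~/documents   # create an archive
$ tarsnap --list-archives                     # list all stored archives
$ tarsnap -t -f docs-2012-10-04               # list an archive's contents
$ tarsnap -x -f docs-2012-10-04               # extract an archive
$ tarsnap -d -f docs-2012-10-04               # delete an archive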

Thanks to the de-duplication and compression, there’s little economic incentive to delete old backups – keeping them around won’t cost you much extra. But I like to keep things clean and minimal. If I haven’t used an online backup in 4 weeks, I generally consider it stale and have no further use for it.

To manage my Tarsnap backups, I wrote a Python script called Tarsnapper. The primary intent was to create a script that would automatically delete old archives. It does this by accepting a maximum age from the user. Whenever Tarsnapper runs, it gets a list of all Tarsnap archives, parses the timestamp out of each name, and deletes any archive older than the maximum allowed age. This is seamless, and means I never need to manually intervene to clean my archives.
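
Tarsnapper itself is Python, but the pruning logic boils down to something like this shell sketch, assuming GNU date and dot-separated timestamp suffixes that date -d can parse (the real script does this parsing in Python):

MAXAGE=28  # maximum archive age, in days

tarsnap --list-archives | while read -r archive; do
    stamp="${archive##*.}"   # the timestamp suffix described below
    age=$(( ( $(date +%s) - $(date -d "$stamp" +%s) ) / 86400 ))
    if [ "$age" -gt "$MAXAGE" ]; then
        tarsnap -d -f "$archive"
    fi
done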

Tarsnapper also provides some help for creating Tarsnap archives. It allows the user to define any number of named archives and the directories that those archives should contain. On my laptop I have four different directories that I back up with Tarsnap, three of them in one archive and the last in another. Tarsnapper knows about this, so whenever I want to back up to Tarsnap I just call a single command.

Tarsnapper can also automatically add a suffix to the end of each archive name. This makes it easier to know which archive is which when you are looking at a list. By default, the suffix is the current date and time.
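
With the default suffix, an archive defined as nous-config would be stored under a name along the lines of nous-config.2012-10-04 (hypothetical formatting; the script’s source defines the exact timestamp format).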

Configuring Tarsnapper can be done either directly by changing the variables at the top of the script, or by creating a configuration file named tarsnapper.conf in your home directory. The config file on my laptop looks like this:

[Settings]
tarsnap: /usr/bin/tarsnap

[Archives]
nous-cloud: /home/pigmonkey/work /home/pigmonkey/documents /home/pigmonkey/vault/
nous-config: /home/pigmonkey/.config

There is also support for command-line arguments to specify the location of the configuration file to use, to delete old archives and exit without creating new ones, and to execute only a single named archive rather than all of those that you may have defined.

$ tarsnapper.py --help
usage: tarsnapper.py [-h] [-c CONFIG] [-a ARCHIVE] [-r]

A Python script to manage Tarsnap archives.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        Specify the configuration file to use.
  -a ARCHIVE, --archive ARCHIVE
                        Specify a named archive to execute.
  -r, --remove          Remove old archives and exit.
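
In practice, the whole system runs off a few simple invocations:

$ tarsnapper.py                  # create all defined archives, then prune stale ones
$ tarsnapper.py -a nous-config   # execute only a single named archive
$ tarsnapper.py -r               # remove old archives and exit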

Tarsnapper makes using a great service very simple: all of my backups are executed by a single call. Stale archives are deleted, saving me precious picodollars. I use this system on my laptop, as well as multiple servers. If you’re interested in it, Tarsnapper can be downloaded directly from GitHub. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.

It's better when you break things completely.

When things are only partly broken your inbox gets flooded with error messages…