I don’t want this to be a blog that is only about backpacking, or bikes, or Linux. I want to cover all of my interests – which range widely – in a way that others don’t (or can’t). I dislike it when other blogs publish long posts that only regurgitate what others wrote, or simply latch onto a popular topic without adding anything new to the discussion. I only began blogging again in September, which makes for a small selection of posts in this year’s archives to evaluate. Looking back, I’m pleased with the diversity of topics covered.
I first heard of the Defeet Duragloves when Andrew Skurka included the wool gloves in his gear list for the Alaska-Yukon Expedition back in 2010. I was impressed that he chose to carry only one pair of seemingly thin gloves for all but the coldest portions of the trip. At the time I had an old pair of Outdoor Research PL 400 Gloves that I was happy with for winter backcountry use, but I kept the wool Duragloves in the back of my head as a possible replacement when the time came.
I still have my Outdoor Research PL 400 Gloves and I am still pleased with the warmth they provide for cold weather hiking, but I’ve never been satisfied with them on the bike. Even in cold temperatures, I always manage to work up a sweat when cranking on the pedals. The PL 400 gloves are just too warm for that application. This past October I decided to purchase a pair of the wool Duragloves to see if they would be better suited to fill that niche.
The wool Duragloves are a blend of 40% merino, 40% Cordura, and 20% Lycra. Their thickness is that of your typical liner glove. The palm and fingers are covered with rubber grippy things. My medium-sized pair weighs exactly 2 oz.
The gloves are designed specifically to address the problem that I was having. They are meant as a cool weather cycling glove that doesn’t cause your hands to overheat and sweat on hard hill climbs, but still keeps you warm on the descent.
Duragloves are not made to climb Mt. Everest. They are made to climb mountains at a hard pace, descend the other side, and do it over and over until your ride is done. Thin enough to ride at your maximum effort and still give you dexterity to fiddle around in your jersey for food. Thick enough to keep frostbite off your fingers at 50mph down alps still laden with snow.
I’ve worn my pair on every commute for the past three months and can report that they do exactly what they claim to do. Temperatures this winter have been anywhere between 30° and 45° Fahrenheit, with rain more often than not. Throughout it all the Duragloves have kept me warm and comfortable, even when wet, and I’ve never felt the desire to take them off to cool down. The thinness of the glove means that very little dexterity is lost. I do not need to remove the gloves to manipulate objects with my hands. I’ve had them for too short a time to comment on durability – and, until a crash, riding a bike tends to not be very demanding of a glove’s durability – but given the blend of wool and synthetic materials, I’m confident that the gloves will fully satisfy my demands in that department.
I have tried using the gloves on backcountry trips. I still prefer the Outdoor Research gloves for that application. Walking does not consistently get my heart-rate up to the extent that riding a bike does, and the Duragloves are simply not warm enough to keep my hands comfortable during that activity. Used as a liner with an insulated mitten I’m sure they would be adequate, but I prefer to use a thicker glove with an uninsulated shell mitten.
My only complaint concerning the Duragloves is the rubber grippy things. They could be grippier. When wearing the gloves it is difficult for me to twist the bezel of my Fenix LD20 headlights to adjust their mode. This is not the case with more general-purpose gloves, such as Mechanix gloves or Kuiu Guide Gloves. Even the grippy things on my Outdoor Research PL 400 gloves, despite having been worn down for years, do a better job of adjusting the lights.
I can work around the lack of grip (adjusting the lights is the only application where this has come up) and everything else about the gloves is close to perfect. They are an affordable, American-made glove intended for aerobic activity in cool conditions. If you’re looking for a glove in that department, the Duragloves are well worth your consideration.
The blogroll is a standard feature of most blogs that is conspicuously absent from the current version of this website. In the past I’ve struggled to keep my blogroll up-to-date. In order to be useful, I think the blogroll should contain only blogs that I am currently reading. That list fluctuates frequently, and I have a poor track record of keeping my blogroll in sync with my feed reader. One of the problems is that I regularly read many blogs, but there are very few blogs out there whose every post I enjoy.
But I use microblogging to share links. So, why not just continue with that method? When I created the latest version of this blog I decided to do away with the blogroll entirely. Now, when I read a blog post that I particularly enjoy, I blog about it. Like here or here. When I come across a blog that is full of wonderful posts, I blog about it too – and, chances are, I’ll still end up blogging about individual articles on those websites. All of these microposts are assigned the blogroll tag.
To me this seems like a much more meaningful way to share links. Rather than maintaining a separate page with a list of infrequently updated links, you have the blogroll tag archive. Links are timestamped and curated, which makes them more useful to my readers. And I think that linking to specific content rather than full domains makes for a more useful and rewarding metric for the owners of the linked blogs.
(The greatest, of course, is Dune.) In 1973, the BBC recorded an 8-episode radio series of Asimov’s Foundation Trilogy. The show is now in the public domain and available for download at the Internet Archive. It’s well done. When I read Foundation I failed to continue past the original trilogy into the later work. This show has encouraged me to revisit the books.
As a long-time user of Unix-like systems, I prefer to do as much work in the command-line as possible. I store data in plain text whenever appropriate. I edit in vim and take advantage of the pipeline to manipulate the data with powerful tools like awk and grep.
Notes are one such instance where plain text makes sense. All of my notes – which are scratch-pads for ideas, reference material, logs, and whatnot – are kept as individual text files in the directory ~/documents/notes. The entire ~/documents directory is synced between my laptop and my work computer, and of course it is backed up with tarsnap.
When I want to read, edit or create a note, my habit is simply to open the file in vim.
$ vim ~/documents/notes/todo.txt
When I want to view a list of my notes, I can just ls the directory. I pass along the -t and -r flags. The first flag sorts the files by modification date, newest first. The second flag reverses the order. The result is that the most recently modified files end up at the bottom of the list, nearest the prompt. This allows me to quickly see which notes I have recently created or changed. These notes are generally active – they’re the ones I’m currently doing something with, so they’re the ones I want to see. Using ls to see which files have been most recently modified is incredibly useful, and a behaviour that I use often enough to have created an alias for it.
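That alias amounts to something along these lines (the exact definition may include other flags, but this captures the sorting behaviour described above):

# List files sorted by modification time, most recently modified last.
alias lt='ls -tr'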
The first function, n(), takes the name of the note as an argument – minus the file extension – and opens it.
function n {
    nano ~/Dropbox/Notes/$1.txt
}
I liked the idea. It would allow me to open a note from anywhere in the filesystem without specifying the full path. After changing the editor and the path, I could open the same note as before with far fewer keystrokes.
$ n todo
I needed to make a few changes to the function to increase its flexibility.
First, the extension. Most of my notes have .txt extensions. Some have a .gpg extension.1 Some have no extension. I didn’t want to force the .txt extension in the function.
If I specified a file extension, that extension should be used. If I failed to specify the extension, I wanted the function to open the file as I specified it only if it existed. Otherwise, I wanted it to look for that file with a .gpg extension and open that if it was found. As a last resort, I wanted it to open the file with a .txt extension regardless of whether it existed or not. I implemented this behaviour in a separate function, buildfile(), so that I could take advantage of it wherever I wanted.
# Take a text file and build it with an extension, if needed.
# Prefer gpg extension over txt.
buildfile() {
    # If an extension was given, use it.
    if [[ "$1" == *.* ]]; then
        echo "$1"
    # If no extension was given...
    else
        # ... try the file without any extension.
        if [ -e "$1" ]; then
            echo "$1"
        # ... try the file with a gpg extension.
        elif [ -e "$1".gpg ]; then
            echo "$1".gpg
        # ... use a txt extension.
        else
            echo "$1".txt
        fi
    fi
}
I then rewrote the original note function to take advantage of this.
# Set the note directory.
NOTEDIR=~/documents/notes

# Create or edit notes.
n() {
    # If no note was given, list the notes.
    if [ -z "$1" ]; then
        lt "$NOTEDIR"
    # If a note was given, open it.
    else
        $EDITOR $(buildfile "$NOTEDIR"/"$1")
    fi
}
Now I can edit the note ~/documents/notes/todo.txt by simply passing along the filename without an extension.
$ n todo
If I want to specify the extension, that will work too.
$ n todo.txt
If I have a note called ~/documents/notes/world-domination.gpg, I can edit that without specifying the extension.
$ n world-domination
If I have a note called ~/documents/notes/readme, I can edit that and the function will respect the lack of an extension.
$ n readme
Finally, if I want to create a note, I can just specify the name of the note and a file with that name will be created with a .txt extension.
The other change I made was to make the function print a reverse-chronologically sorted list of my notes if it was called with no arguments. This allows me to view my notes by typing a single character.
$ n
The original blog post also included a function ns() for searching notes by their title.
function ns {
    ls -c ~/Dropbox/Notes | grep $1
}
I thought this was a good idea, but I considered the behaviour to be finding a note rather than searching a note. I renamed the function to reflect its behaviour, took advantage of my ls alias, and made the search case-insensitive. I also modified it so that if no argument was given, it would simply print an ordered list of the notes, just like n().
# Find a note by title.
nf() {
    if [ -z "$1" ]; then
        lt "$NOTEDIR"
    else
        lt "$NOTEDIR" | grep -i "$1"
    fi
}
I thought it would be nice to also have a quick way to search within notes. That was accomplished with ns(), the simplest function of the trio.
# Search within notes.
ns() { cd "$NOTEDIR"; grep -rin "$1"; cd "$OLDPWD"; }
I chose to change into the note directory before searching so that the results do not have the full path prefixed to the filename. Thanks to the first function I don’t need to specify the full path of the note to open it, and this makes the output easier to read.
$ ns "take over"
world-domination.txt:1:This is my plan to take over the world.
$ n world-domination
To finish it up, I added completion to zsh so that I can tab-complete filenames when using n.
# Set autocompletion for notes.
compctl -W $NOTEDIR -f n
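Since I source these functions from bash as well as zsh, a rough bash equivalent of that completion might look something like the following sketch, using the complete and compgen builtins (the helper function name is arbitrary):

# Tab-complete note names for the n and nf functions in bash.
_note_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(ls "$NOTEDIR")" -- "$cur") )
}
complete -F _note_complete n nf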
If you’re interested in seeing some of the other ways that I personalize my working environment, all of my dotfiles are on GitHub. The note functions are in shellrc, which is my shell-agnostic configuration file that I source from both zshrc and bashrc2.
Notes
↵ I use vim with the gnupg.vim plugin for seamless editing of PGP-encrypted files.
↵ I prefer zsh, but I do still find myself in bash on some machines. I find it prudent to maintain configurations for both shells. There's a lot of cross-over between them and I like to stick to the DRY principle.
They are trying to purchase another petabox (that’s one quadrillion bytes). Donations are being matched 3-for-1 till the end of the year, so now is a good time to give them money and support digital archiving.
Statistically, 38° is the “oh-my-god-we’re-all-gonna-die slope”. An inclinometer is a useful tool to carry to help evaluate the potential of a particular slope. There’s no replacing hands-on training, but Bruce Tremper’s Staying Alive in Avalanche Terrain is an excellent resource for learning more than you want to know about avalanches. (If you live in the northwest, you should give money to the Northwest Weather and Avalanche Center. They do good work.)
Flickr has launched a new iOS application. I’ve never owned a smartphone or a tablet, so I don’t have any experience with applications in the mobile space, but I was linked to a review of the application at the New York Times which discusses the future of Flickr in light of the app. One statement in particular struck me:
The updated mobile experience now feels like a social network that focuses on photography, not a photography Web site that happens to have a social network.
I signed up for Flickr in 2007. It was my first plunge into the whole Web 2.0 thing, and the first web application that I ever paid for. As I wrote back then, what appealed to me about Flickr was that it was a social-networking site built around something. The reason I have never gotten into services like MySpace and Facebook is that they seem to me to be websites that do social networking for the sake of social networking. That has never appealed to me. They offer no services that don’t exist elsewhere.
In contrast, take a look at the social networking websites that I am active on. Flickr is a photo hosting website that happens to have social networking features. Github is a code hosting website that happens to have social networking features. I take advantage of the primary function of those websites – photo or code hosting – but I also gain extra value with the secondary social-networking features. It surprised me to see that the inverse was true for the author of the New York Times piece.
My Flickr Pro account expired last month. As has been my habit for the past couple of years, I debated if I wanted to renew it, or to use the opportunity to jump ship to a different service like OpenPhoto or MediaGoblin. In the end, I decided to renew and put off the move till next year. If Flickr is indeed changing from a photography website that happens to have a social network to a social network that happens to have photography, my move may be expedited.
In his talk from Defcon 18 (transcript available), Moxie argues that what we were preparing for was fascism and what we got was social democracy. For me it was an eye-opening explanation, and one that I think is important to understand given the ever-increasing network effect of technologies that are not only a danger to personal privacy but can also grow to threaten free thought.
When the book was first published I assumed it would be just another entry into the media hubbub around WikiLeaks. When I saw that John Young – cranky old man of the cypherpunk movement – gave it a positive review I decided that it would be worth a read. While the book does center on Assange, Greenberg does an admirable job of tracing the history of the cypherpunks and describing what in the future we will probably refer to as a sequel to the cryptowars. It is a recommended read.
Last March I bought a pair of Continental Ultra Gatorskin tires. Their flat protection proved to be excellent – I have not had a single flat with them – but I found the durability of the tread to be wanting. They are now worn down to the point where they are basically racing slicks, which, while fun, is certainly not appropriate for wet weather riding. I don’t own a cycling computer or keep track of my miles in any other way, so I’m not sure how many miles the tires have on them. I think it’s fair to say that I average about 500 miles a month, so the tires likely have just shy of 5,000 miles on them. For a pricey tire like the Gatorskins, I’d prefer to see a bit more longevity.
The various tires in the Marathon series from Schwalbe have an excellent reputation among long distance riders, both in terms of flat protection and durability. Peter White maintains a description of the various models which helped me to understand the differences between them. I decided that the Marathon Supremes would be a good fit for my needs. They are normally absurdly expensive, but I was able to find a good deal and pick up a pair of them for about the same as what the Gatorskins would cost.
I’ve been using the Marathon Supremes for a week of wet riding now and I’m very pleased with them. They certainly offer better grip than my worn-down Gatorskins. I feel more confident when aggressively cornering with them. The reflective sidewall is a welcome addition to my dark commutes. Despite the Gatorskins being considered a “race” tire and the Marathon Supremes more of a general road-riding/commuting tire, I haven’t noticed a significant difference in speed or rolling resistance. With the Marathon Supremes, I’ve gone back to a 700x28 tire in place of the skinnier 700x25 size of my Gatorskins.
The Marathon Supremes were much easier to get on my rims than the Gatorskins, which makes me feel a little better about the prospect of fixing a flat with these tires. Of course, if I get a flat anytime within the next 5,000 miles, the tires will receive a negative mark in comparison against the Gatorskins.
The real question about the Marathon Supremes is durability. I’m looking forward to seeing how they handle this winter and how long they last into 2013.
I use two Fenix LD20 lights mounted to my handlebars via Twofish Lockblocks as headlights. An old TAD-branded JETBeam Jet 1 MkII is mounted on my helmet. As a result, whenever I have my bike – which is the majority of the time – I have a light. When I’m not around my bike, I tend to be left in the dark. I do carry a Photon Freedom Micro on my keychain, which is a great little device, but no replacement for a hand-held torch. A more substantial light has not been part of my on-body EDC for a few years.
Last month I decided to change that. I wanted to find a small, unobtrusive LED light that I could carry in a pocket. My JETBeam helmet light is powered by a single AA battery. While the height on that light is about right, I felt the diameter was too large for what I had in mind. I decided to look for a light powered by a single AAA. Two options presented themselves: the Maratac AAA Rev 2 and the Foursevens Preon P1. They both have similar specifications and both seem to earn equally positive reviews. The Maratac light is less expensive and features a knurled body, as opposed to the Preon P1’s smooth body, which made me initially favor the Maratac. Unfortunately, both lights are twist-activated. One of my requirements for any light is that it can be activated with one hand. That necessitates a clicky tailcap.
A review of the Preon P1 on ITS presented a solution. The Preon P1 is compatible with pieces from the Preon P2, which is a double-AAA flashlight that does have a clicky tailcap. I could purchase a replacement Preon clicky tailcap and install that on a Preon P1 to get the light I wanted. The Preon P1 and the tailcap cost $45. That’s a lot for a small light, and quite a bit more than the $25 price of the Maratac light, but it would make for a system that fit my requirements. I chose to purchase the Preon.
I have been carrying the Preon P1 for a couple weeks now. At a height of about 3 inches and a diameter just over half an inch, I can clip it in my pocket and completely forget that it is there until I need it, which is exactly what I was looking for. All the various modes of the light can be cycled through by clicking the tailcap, although I don’t particularly care about them – I only use the light with its standard output setting.
The clicky tailcap could be improved. I’m accustomed to tailcaps being inset into the light slightly. The Preon tailcap juts out from the top of the body of the light, which I think increases the chances of it accidentally being clicked on and draining the battery. There is also a fair amount of free movement in the button: it can be depressed about halfway before it actually engages the clicky part of the mechanism, which makes it feel a bit loose. I don’t know that this affects the functionality of the tailcap, but it does make it feel cheaper.
Despite the less-than-perfect tailcap, I’m happy with the light. The runtime isn’t incredible, but that’s acceptable for my use. I intend to use the light only as a backup to those on my bike. When the battery does die, I just throw it in the charger and install another Eneloop. If you’re looking for a small EDC light, the Preon P1 is worth consideration.
They focus on General Electric, who found that moving some manufacturing back to the US has resulted in better and less expensive products.
So a funny thing happened to the GeoSpring on the way from the cheap Chinese factory to the expensive Kentucky factory: The material cost went down. The labor required to make it went down. The quality went up. Even the energy efficiency went up.
I went with the Maha PowerEx MH-C9000 that I mentioned last year. Since purchasing the charger a couple weeks ago, I’ve been geeking out about batteries. I’ve labelled all of my Eneloops and started a database where I log the purchase date, capacity, and other information. I’ve put the database in git so that I can track the performance of an individual battery over time. The database is on GitHub.
Andrew’s reviews and analysis are intelligent and refreshingly concise, particularly when some people feel the need to put out 45-minute videos in order to communicate 3 minutes of content. Unfortunately Vuurwapen Blog will no longer concern guns. Fortunately, Andrew will be writing about firearms at LuckyGunner Labs and Vuurwapen will still be around to focus on other topics.
The web has been moving more and more towards a centralized structure. Services like Twitter, Facebook, Google and Flickr are all examples of this. To me, it is a disturbing trend. It’s bad for the internet as a whole, and on a more personal level is damaging to individual liberty and freedom. Lately, I’ve been making a stronger effort to forgo these services.
During the year that I took off from blogging, I maintained a steady presence on Twitter. When I decided to relaunch my blog, I knew I wanted to integrate Twitter-like microblogging into the site somehow. Looking over my Twitter history, it was clear that I was predominantly using the service for one thing: sharing links. There’s no reason that I couldn’t do that on my blog. In fact, some of the earliest web logs were simply lists of interesting links.
I rarely participate in conversations on Twitter. I find it to be a horrible medium for that. Conversations that begin with microblog posts can be handled with any blog comment software (and I think the resulting experience is much improved over what Twitter can offer). If you use Twitter to contact people and start conversations, a blog probably won’t work for you. (But there’s a great distributed social networking platform out there that you might want to look into. It’s called email.)
Initially I considered adding a new model to vellum for microposts. When I thought about what a micropost is and what I wanted to do with them, I decided that modifying vellum was unnecessary. I’ve seen some people claim that microposts don’t have titles, but I think that’s incorrect: the primary content is the title. In addition to the title, the micropost needs a place to include the URL that is being shared. Why not just put that in the post body? The URL can simply be pasted into the field, à la Twitter, or included as an anchor tag contained within a new sentence. All that I needed was to uniquely style the microposts.
I decided to place all microposts in a new micro category. Posts in that category are then styled differently. This allows the user to quickly differentiate these short microposts from the more traditional, long-form articles. It also helps to represent the relationship of the title and the rest of the post.
With this in place, I no longer had a need for Twitter, but I still wanted to feed all of my posts into the service. I know some people use Twitter as a sort of weird feed reader, and I have no problem pumping a copy of my data into centralized services. As it turns out, there are a number of services out there that will monitor a feed and post the results to services like Twitter. I started out with FeedBurner, but this seemed like overkill as I had no intention of utilizing the other FeedBurner offerings (giving up control of the namespace of your feed is another instance of the craziness associated with the move to a centralized web). After some brief experimentation, I settled on dlvr.it.
This accomplishes everything that I was looking for. All of my blog posts, micro or not, are now on my blog (fancy that). I retain ownership and control of all my data. Everything is archived and searchable. I’m not depending on some fickle, centralized service to shorten the links that I’m trying to share. People who want to follow my updates can subscribe to my feed in their feed reader of choice. My activity can still be followed in Twitter, but I don’t have any active participation in that service.
I’ve found that not attempting to restrain myself to a character limit is like a breath of fresh air. Previously I was able to share links only. There was little-to-no space left over for commentary. Now I can include my thoughts about the link being shared, whether it be a book that I’m reading or a news article that piques my interest. This is more satisfying to me, and I think results in a more meaningful experience for those who are interested in my thoughts.
Since moving to this system, I’ve only launched my Twitter client two or three times. I’ve found that I don’t miss the stream. I never followed too many people on Twitter. Many of those whom I did follow maintain some sort of blog with a feed that I subscribe to. Some don’t. That’s unfortunate, but if your online presence exists solely within a walled garden, I’m ok with not following you.
The book is in a similar vein to Gavin de Becker’s The Gift Of Fear (a book I strongly recommend), but with more acronyms and typos. Clint Emerson focuses on external awareness more than the internal awareness discussed by de Becker. There are some good tidbits in it, but overall I would award the book a “meh” rating.
While I still maintain that their versatility and environmental appropriateness is limited, I have been coming around to their use a bit more over the past year or so. I have a pair of pants that I am quite smitten with and have been considering giving a jacket another shot. Over at Cold Thistle, Dane Burns recently completed a series comparing different soft-shell jackets and their appropriateness for climbing. Now he has published a selection of reader comments that were elicited by the reviews.
Codegroup is a program written by John Walker that encodes and decodes any file into groups of five letters. For example, take an image, run it through codegroup, and this is what you get:
The resulting code groups lend themselves to being transmitted via low-tech, resilient means, such as continuous wave radio. The ability to do this with any file is a simple but amazingly powerful concept.
I discovered codegroup around the same time that I was learning Morse code. I decided to take advantage of codegroup and put what I was learning into practice. This led to the development of morse.py.
With codegroup, I end up with a series of ASCII characters. I wanted to be able to feed those characters into a program which would convert them to Morse. The program should display the dits and dahs, but more importantly: it should beep them out.
morse.py is a simple script which does just that. It accepts ASCII input and encodes it to International Morse Code. The Morse is printed to the screen, in case you want to key it out yourself. Johnathan Nightingale’s beep.c is used to play the beeps with the terminal bell. The length of dits, dahs, and the pauses in between are configurable, but the defaults conform to International Morse. The input can be a file, but if no file is specified the script simply reads from standard input, which allows it to be piped together with codegroup.
$ morse.py --help
usage: morse.py [-h] [-b BEEP] [-s SPEED] [-f FILE] [-q]
Convert an ASCII file to International Morse Code and play it with system
beeps.
optional arguments:
-h, --help show this help message and exit
-b BEEP, --beep BEEP The location of the program that plays the beeps. This
script is intended to be used with Johnathan
Nightingale's beep: http://www.johnath.com/beep/
-s SPEED, --speed SPEED
Reduce the pauses between message characters by the
given amount.
-f FILE, --file FILE The location of the ASCII file to convert.
-q, --quiet Do not print the dots and dashes.
What is the application? Suppose your government has shut down your internet access. You want to send a map to an acquaintance. With these tools, you can encode the map with codegroup, pass the result to morse.py, hold your radio up to your speakers and key the mic. That’s it. Censorship bypassed.
On the receiving end, the Morse needs to be translated back to ASCII characters, which can then be decoded with codegroup. It’s a slow process, but resilient. To speed things up, the file being transmitted can be compressed before being passed to codegroup. (And if privacy is a concern, the file can also be encrypted, but that would be illegal unless you are doing so to protect life or property.)
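Any stream of ASCII can be piped straight into the script. As a rough sketch of the sending side, with the real codegroup output substituted for the echo here:

$ echo "ABCDE FGHIJ KLMNO" | ./morse.py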
Gibson was one of the most influential authors of my childhood. I had not kept up with him in this millennium, but have begun to rectify that by reading Pattern Recognition a while ago and now Spook Country.
My first experience with a field message pad was in 2005. I carried a Field Message Pad Cover by Canadian Peacekeeper – now CP Gear – filled with the Canadian military standard issue pad. I was introduced to the concept by one of the early episodes of Patrolling with Sean Kennedy. The idea of having a cover for one’s notepad, which not only protected the pad but also contained pens and the other tools necessary for a dead tree data dump, simply made sense. The system was a pleasure to use, but after filling my last Canadian pad in 2006, I left it in favour of more conventional notepads. The refill pads, being available only from Canada, were difficult to acquire, and all the cool kids were using Moleskines and similar products. I forgot about the old field message pad until this year.
I keep a box that holds all of my filled notepads. Last March I was digging through the box, looking for a particular note (sadly, you can’t grep dead trees), when I came across the Canadian pad and cover. I was reminded of the pleasure the system previously provided me, and the practicality of it. No more digging around for a pen – if you have your notepad, you have your pen. Need to toss your pad into the dirt to free both hands? No problem, it’s protected by the cover. I decided that I would like to revisit the system, but perhaps with a more commonly available pad, and a more mature eye brought to the market’s current offerings.
One of Sean Kennedy’s original requirements for the notepad was that the paper was gridded. I agree with that. Graph paper can be incredibly useful in certain circumstances, and the rest of the time holds no disadvantage for me. The original Canadian pads were gridded, but only on one side of the paper. That made the back of each page less useful, and seemed wasteful to me.
The size of the Canadian pad was acceptable, but a little larger than my ideal. Particularly when the cover was added, it made for just a little bit too much bulk and was sized slightly too large for some cargo pockets. I’d used plenty of the pocket-sized Moleskine notepads. Their size is also acceptable, but if I’m being picky: they’re just a tad too small. Ideally, I would like something sized in between the two.
My third requirement was that the notepad be top-bound. I prefer that over a side-bound pad, as I find top-bound pads easier to access quickly. The binding should be spiral, as that allows the pad to lie open.
They’re fairly simple criteria, but I was surprised at how difficult they were to satisfy.
The only pad I could find that met the requirements was from Rite in the Rain. Specifically, the 146 (or 946 or 946T for tacticool colors). It is gridded on both sides, is spiral bound on the top, and measures in at 4” x 6” – just slightly larger than a Moleskine, and a bit smaller than the Canadian pad. Rite in the Rain makes an excellent product. I always carry one of their pads when in wilderness environments, but I prefer not to carry them around town. Their waxy paper is unpleasant to write on. If I do not need to worry about the paper getting soaking wet, I prefer to carry a normal notepad. It makes the act of writing more pleasant, which encourages me to write more often.
The nice thing about Rite in the Rain pads, though, is that covers are readily available for them. Tactical Tailor makes a line of covers that Rite in the Rain sells. Maxpedition produces their own. The Tactical Tailor / Rite in the Rain cover for the 146 notepad is the C946. Maxpedition offers a similar cover. They looked great and encouraged me to revisit the notepad search, this time armed with a specific size: 4” x 6”.
Having the dimensions to narrow the search made all the difference. I quickly came upon the Rhodia 13500. It is gridded, 4” x 6”, and top-bound. The exact same as the Rite in the Rain 146, but with normal paper.
These findings allowed me to put together my new field message pad system: a Rite in the Rain pad, Rhodia pad, and a cover. I chose the Tactical Tailor cover simply because it is made about 60 miles from me, whereas Maxpedition claims that their covers are “imported”. With that cover and two identically sized pads, I could swap in whichever pad was more appropriate for my environment. Around town I carry the Rhodia pad. When I’m heading to the mountains I install the Rite in the Rain pad. Inside the cover I carry a pen (I’m partial to a Parker Jotter with a gel cartridge), a No. 2 pencil, a Sharpie, and a ruler. When I swap the Rhodia pad for the Rite in the Rain, I sometimes also swap the Parker pen for a Fisher Space pen, but in general I don’t like the way the ink comes out of the pressurized cartridges (and I already carry the pencil, which is field-serviceable and is able to write in inclement conditions just as well as the space pen).
I’ve been using this system since April. It is both versatile and functional, and has proved itself perfect for my needs. It is large enough to write on without feeling cramped, and small enough to place in the cargo or ass-cheek pocket of my pants when running around the woods in the middle of the night setting up dead-drops. The cover, Rite in the Rain pad, ruler, Sharpie and pencil are all made in the US. The Parker pen and Rhodia pad are made in France.
Photo comparisons between this and the Canadian system are available on Flickr.
Their prepaid Visa and American Express gift cards can be purchased with cash at any Simon mall. No identification is required. To use the card with online merchants, you will likely need to register the card with an address so that it can pass AVS checks. This can be done through Tor with fake information.
I wish someone would make a tactical messenger bag.
Plenty of companies make what they claim to be a tactical messenger bag: 5.11, Spec-Ops, ITS, TAD, and Maxpedition to name a few. None of these fit the bill. Those bags are all what I would refer to as a side bag or a man-purse. I don’t use that term in a derogatory sense – they’re fine bags, but they’re not messenger bags.
For me, the defining characteristic of a messenger bag is that it is designed to be worn on the back, not hanging down on one’s side. Timbuk2, Chrome, Seagull, and R.E.Load are examples of companies that make messenger bags. I want one of those bags, but with the excellent, so-called “tactical” features that the aforementioned companies bring to the market: appropriate use of PALS webbing1, ability to support concealed carry, a quick-access medical compartment2. (Oh, and velcro for my tacticool patches, of course.)
Try riding a bike with a tactical man-purse, and then try riding a bike with a messenger bag. It’s easy to see why anyone with “messenger” in their job title carries an actual messenger bag. The practicality of such a bag isn’t limited to just the bicycle market: try running, or performing any physical activity (particularly a violent one, as might be required by someone with “tactical” in their job title) with a bag flapping on your side. It doesn’t work. Some companies have tried to address this by adding a stabilizing strap to help lock the bag in place. That’s a fine addition, but it is no replacement for a bag that is properly designed in the first place.
I can’t recognize any advantages that the man-purse offers over the messenger bag. The messenger bag, on the other hand, offers distinct advantages over the man-purse.
Notes
↵ Don't try to reinvent the admin compartment. Everyone and their mother makes an admin panel. Just give me some PALS and allow me to mount the admin panel of my choice to the bag.
↵ We'll agree that an armed citizen is more likely to need a blow-out kit than someone unarmed. Now just think how much more likely an armed biker is to need something of that sort!
Matt from SerePick donated a set of Bogota entry tools, a diamond wire blade, a folding tool that includes a saw and razor, a small button compass, two universal handcuff keys, two handcuff shims, a small ceramic razor blade, Kevlar cord, and steel wire. [The additional items were purchased by ITS for the kits, not donated by SerePick.] TAD Gear (who also provided two students, in the form of Brett, their CEO, and Anthony, their Art Director) added to this kit their brass Survival Spark and four Tinder-Quick tabs.
TAD also developed a custom pouch to hold this kit. It’s similar to a bicycle tool roll, but on a smaller scale. The closure strap allows the pouch to be mounted to any webbing, whether it be a belt or PALS. I think it would be great to see this become a regular product, perhaps co-branded between TAD, ITS and SerePick, but for now the pouches are exclusive to alumni of the ITS Muster.
I began to read his blog last summer, when he was riding from Anchorage down along the continental divide. It’s a great blog, and his gear is clearly heavily used and carefully chosen. Recently, he has discussed his cook kit, tools, luggage, clothing, electronics, and his bike and the changes it has gone through.
Their videos on blow-out kit basics and hemostatics are worth a view. BFE Labs is not updated frequently, but the blog remains one of my favorites for practical skill and tool discussion.
Dick Griffith has pursued human-powered travel in the wilderness areas of the American West since 1946. He pioneered the use of a packraft in 1952. This book chronicles his travels.
The inaugural ITS Tactical Muster was a great multi-disciplinary training event. I had never been involved with planning something like this, and was surprised at how smoothly the whole thing went. A large part of the credit for that goes to the students, who represented a diverse set of backgrounds and energetically attacked all training blocks.
Mike, Eric and I all flew in to Dallas-Fort Worth on Tuesday. We met up with Bryan, Kelly and Matt Gambrell at the ITS offices and loaded up all the gear needed for the week.
The six of us drove out to Lake Mineral Wells State Park to begin the setup and finalize our class preparations. We had rented a building for our kitchen and indoor classroom. We also had our own private ring of camp sites, far from where any other campers would be.
The following day, Matt Fiddler of SerePick and Nathan Morrison of Morrison Industries and The Morrison System arrived and assisted with our setup. One of the projects that needed to be completed that day was the building of a POW camp – something that I had suggested as part of the FTX that the students would be surprised with on Saturday night.
On Thursday the students arrived, along with Caleb Causey, our final instructor. From there on it was a flurry of activity. I didn’t take many photos.
Classes began on Friday with Bryan teaching knot-tying. That was followed up by a land navigation course, which I instructed. It was difficult to squeeze my instruction into a 4-hour block – we had wanted to start with the absolute basics of how to read a topographic map and go all the way to shooting, plotting and following bearings, resection, and plotting UTM coordinates – but the class seemed to be a success. Everyone succeeded in the exercises I gave them during the class, as well as the navigation aspects of the FTX the following day. After the navigation class, Bryan took over again for the stove building class, in which we had everyone build a fancy feast stove. At the end of the class I interjected some of my thoughts and experiences with using alcohol stoves. Nate Morrison completed the day with a class in fire building.
Saturday began with Nate teaching a basic rappelling class. During the downtime I worked with a few of the students on the navigation skills that I didn’t have time to cover during my class. In the afternoon we transferred back to the dining hall for Matt Fiddler’s lockpicking class. This was the first time I had ever received direct instruction in this skill. It helped me immensely. Lockpicking was followed by a shelter building class, which I was initially scheduled to assist with. Our FTX was scheduled for Saturday night and I had taken point in setting up the navigation element of the exercise. I had to forgo the shelter class and run off with Eric to finalize our preparations. Nate ended up taking over the shelter discussion. As the shelter class reached its end, we let the students know that they would be having a late night. I won’t detail the FTX here as I don’t know which elements of it will be reused next year. I don’t want to ruin the experience for future attendees. I will say that we had a lot of fun planning it and the feedback we received from the students afterwards was all positive. I was a little jealous of them, myself. I would have liked to run the course.
Most of us only had a few hours of sleep that night. Sunday got off to a slow start. The day’s only class was Caleb Causey’s medical course. Afterwards we gave out awards, wrapped-up the event, and said our goodbyes.
Matt Gambrell, who is the artist responsible for the ITS patch and t-shirt designs, is also a cook. He served as chef for the Muster, waking up early to dish up three amazing meals per day. We jokingly referred to the event as an eating experience with a few classes thrown in between meals.
Despite being involved with ITS for over two years, this was my first time meeting the rest of the crew. I enjoyed meeting them all and building our relationships. The other instructors – Nate, Matt and Caleb – are all experts in their fields. I learned something in all of their classes and it was an honor to be included in their company. It was frustrating for all of us to need to compress our subjects into a 4-hour block, but everyone succeeded and the combined total of training that we were able to pull off in two-and-a-half days of instruction was amazing. The students, too, were top notch. Before the event I was anxious about what kind of people would attend. I had no reason to be. Some of them turned out to be readers of this blog (clearly they are gentlemen of good taste). We all enjoyed quality conversation.
As someone who served as both staff and instructor, I have an obvious bias here, but I think that anyone who is interested in this type of thing should strongly consider attending the next event. When pricing for the inaugural Muster was first announced I scoffed a little, thinking “How could a 3-day camping trip be worth that much money?” Now having seen what Bryan and Kelly imagined I think the ticket price is an excellent value. Between food and swag, students had a significant portion of the fee returned to them. And the breadth of skills is unmatched by any other event that I know about. Where else can you attend a class in rappelling, break for lunch, learn how to pick and bypass locks, stay up all night sneaking around the woods, and then attend a class on tactical medicine after breakfast? As Nate wrote, it sets a new standard. Not only is the experience fun, students also leave with at least an introductory understanding of important life skills. If I was not part of the staff, I would certainly pay to attend.
The inaugural event made me proud to be part of ITS. I enjoyed meeting a part of the community around the site, and it was an honor to be included among the high caliber instructors.
Earlier this year I went through a period where I had only intermittent internet access. Constant, high-speed internet access is so common these days that I had forgotten what it was like to work on a computer that was offline more often than it was online. It provided an opportunity to reevaluate some aspects of how I work with data.
E-Mail
For a period of a couple years I accessed my mail through the Gmail web interface exclusively. About a year ago, I moved back to using a local mail client. That turned out to be a lucky move for this experience. I found that I had three requirements for my mail:
I needed to be able to read mail when offline (all of it, attachments included),
compose mail when offline,
and queue messages up to be automatically sent out the next time the mail client found that it had internet access.
Those requirements are fairly basic. Most mail clients will handle them without issue. I’ve always preferred to connect to my mail server via IMAP rather than POP3. Most mail clients offer to cache messages retrieved over IMAP. They do this for performance reasons rather than to provide the ability to read mail offline, but the result is the same. For mail clients like Mutt that don’t have built-in caching, a tool like OfflineImap is great.
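For reference, a minimal ~/.offlineimaprc for that sort of setup looks something like this (the account name, host, and paths are placeholders):

[general]
accounts = Personal

[Account Personal]
localrepository = Personal-Local
remoterepository = Personal-Remote

[Repository Personal-Local]
type = Maildir
localfolders = ~/mail/personal

[Repository Personal-Remote]
type = IMAP
remotehost = mail.example.com
remoteuser = user
ssl = yes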
Wikipedia
Wikipedia is too valuable a resource to not have offline access to. There are a number of options for getting a local copy. I found Kiwix to be a simple and effective solution. It downloads a compressed copy of the Wikipedia database and provides a web-browser-like interface to read it. The English Wikipedia is just shy of 10 gigabytes (other languages are of course available). That includes all articles, but no pictures, history or talk pages. Obviously this is something you want to download before you’re depending on coffee shops and libraries for intermittent internet access. After Kiwix has downloaded the database, it needs to be indexed for proper searching. Indexing is a resource-intensive process that will take a long time, but it’s worth it. When it’s done, you’ll have a not-insignificant chunk of our species’ combined knowledge sitting on your hard-drive. (It’s the next best thing to the Guide, really.)
Arch Wiki
For folks like myself who run Arch Linux, the Arch Wiki is an indispensable resource. For people who use other distributions, it’s less important, but still holds value. I think it’s the single best repository of Linux information out there.
For us Arch users, getting a local copy is simple. The arch-wiki-docs package provides a flat HTML copy of the wiki. Even better is the arch-wiki-lite package which provides a console interface for searching and viewing the wiki.
Users of other distributions could, at a minimum, extract the contents of the arch-wiki-docs package and grep through it.
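For example, after extracting the package contents somewhere, a recursive grep turns up relevant pages (the path shown is roughly where the Arch package installs the HTML files; adjust as needed):

$ tar xf arch-wiki-docs-*.pkg.tar.xz
$ grep -ril 'wireless' usr/share/doc/arch-wiki/html/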
Tunneling
Open wireless networks are dirty places. I’m never comfortable using them without tunneling my traffic. An anonymizing proxy like Tor is overkill for a situation like this. A full-fledged VPN is the best option, but I’ve found sshuttle to make an excellent poor-man’s VPN. It builds a tunnel over SSH, while addressing some of the shortcomings of vanilla SSH port forwarding. All traffic is forced through the tunnel, including DNS queries. If you have a VPS, a shared hosting account, or simply a machine sitting at home, sshuttle makes it dead simple to protect your traffic when on unfamiliar networks.
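Assuming you have SSH access to a box somewhere (the user and host here are placeholders), forwarding all traffic, DNS queries included, through the tunnel is a single command:

$ sshuttle --dns -r user@example.com 0/0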
YouTube
YouTube is a great source of both education and entertainment. If you are only going to be online for a couple hours a week, you probably don’t want to waste those few hours streaming videos. There’s a number of browser plugins that allow you to download YouTube videos, but my favorite solution is a Python program called youtube-dl. (It’s unfortunately named, as it also supports downloading videos from other sites like Vimeo and blip.tv.) You pass it the URL of the YouTube video page and it grabs the highest-quality version available. It has a number of powerful options, but for me the killer feature is the ability to download whole playlists. Say you want to grab every episode of the great web-series Sync. Just pass the URL of the playlist to youtube-dl.
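The command is nothing more than this, with the playlist’s URL substituted in:

$ youtube-dl -t <playlist URL>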
That’s it. It goes out and grabs every video. (The -t flag tells it to use the video’s title in the file name.) If you come back a few weeks later and think there might have been a couple new videos added to the playlist, you can just run the same command again but with the -c flag, which tells it to resume. It will see that it already downloaded part of the playlist and will only get videos that it doesn’t yet have.
Even now that I’m back to having constant internet access, I still find myself using youtube-dl on a regular basis. If I find a video that I want to watch at a later time, I download it. That way I don’t have to worry about buffering, or the video disappearing due to DMCA take-down requests.
Backups
I keep backitup.sh in my network profile so that my online backups attempt to execute whenever I get online. If you’re only online once or twice a week, you probably have more important things to do than remembering to manually trigger your online backups. It’s nice to have that automated.
When I first learned about the Bitcoin currency a few years ago, it didn’t excite me. A purely digital currency tied to no material good seemed an interesting project, but I didn’t see that it could have the practical value of, say, a digital gold currency. When the media blitz occurred last year I took another look and reached the same conclusion. A few months later I realized I was looking at the currency all wrong: bitcoins are not a value-store, they’re a means of exchange.
It doesn’t matter that Bitcoins are the digital equivalent of a fiat currency, with no inherent value. It doesn’t matter if their value fluctuates in relation to other currencies. There’s no reason to store wealth in Bitcoins (unless you’re a gambler). When you need to send money, purchase some Bitcoins and send them. When you need to receive money, accept Bitcoins and exchange them immediately for another currency. The value of the bitcoins only needs to remain stable for the amount of time it takes to complete a transaction.
You don’t need to trust your service provider. You don’t need to trust your storage provider. You don’t need the law to protect you. You simply need to take a little self-responsibility and encrypt your data.
Any private data stored on hardware that you do not physically control should be encrypted (and it’s a good idea to encrypt private data on hardware that you do physically control). Problem solved. Unless you’re in the UK.
I’ve been following the development of MediaGoblin and OpenPhoto for about a year. Both offer decentralized and federated photo sharing services, and promise to be excellent solutions for when Flickr finally dies. OpenPhoto currently feels more mature, but MediaGoblin is more ambitious in scope. I hope to see both of them succeed. Today, MediaGoblin announced a crowdfunding campaign to fund development. I’ll be donating.
She quit her job as a London bike messenger and left the UK in September of 2011. Currently she is in Korea, having cycled across Eurasia. I was made aware of her blog a couple months ago and was immediately hooked. I went back to the very beginning and read the blog all the way through. There are not very many blogs out there that I can say that about.
A laptop presents some problems for reliably backing up data. Unlike a server, the laptop may not always be turned on. When it is on, it may not be connected to the backup medium. If you’re doing online backups, the laptop may be offline. If you’re backing up to an external drive, the drive may not be plugged in. To address these issues I wrote a shell script called backitup.sh.
The Problem
Let’s say you want to backup a laptop to an external USB drive once per day with cryptshot.
You could add a cron entry to call cryptshot.sh at a certain time every day. What if the laptop isn’t turned on? What if the drive isn’t connected? In either case the backup will not be completed. The machine will then wait a full 24 hours before even attempting the backup again. This could easily result in weeks passing without a successful backup.
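That naive approach would be a crontab entry along these lines (the time and path are arbitrary examples):

# Attempt a backup at 20:00 every day.
0 20 * * * /home/user/bin/cryptshot.sh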
If you’re using anacron, or one of its derivatives, things get slightly better. Instead of specifying a time to call cryptshot.sh, you set the cron interval to @daily. If the machine is turned off at whatever time anacron is set up to execute @daily scripts, all of the commands will simply be executed the next time the machine boots. But that still doesn’t solve the problem of the drive not being plugged in.
The Solution
backitup.sh attempts to perform a backup if a certain amount of time has passed. It monitors for a report of successful completion of the backup. Once configured, you no longer call the backup program directly. Instead, you call backitup.sh. It then decides whether or not to actually execute the backup.
How it works
The script is configured with the backup program that should be executed, the period for which you want to complete backups, and the location of a file that holds the timestamp of the last successful backup. It can be configured either by modifying the variables at the top of the script, or by passing in command-line arguments.
$ backitup.sh -h
Usage: backitup.sh [OPTION...]
Note that any command line arguments overwrite variables defined in the source.
Options:
-p the period for which backups should attempt to be executed
(integer seconds or 'DAILY', 'WEEKLY' or 'MONTHLY')
-b the backup command to execute; note that this should be quoted if it contains a space
-l the location of the file that holds the timestamp of the last successful backup.
-n the command to be executed if the above file does not exist
When the script executes, it reads the timestamp contained in the last-run file. This is then compared to the user-specified period. If the difference between the timestamp and the current time is greater than the period, backitup.sh calls the backup program. If the difference between the stored timestamp and the current time is less than the requested period, the script simply exits without running the backup program.
After the backup program completes, the script looks at the returned exit code. If the exit code is 0, the backup was completed successfully, and the timestamp in the last-run file is replaced with the current time. If the backup program returns a non-zero exit code, no changes are made to the last-run file. In this case, the result is that the next time backitup.sh is called it will once again attempt to execute the backup program.
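Stripped of the option parsing, the heart of the script is just a timestamp comparison and an exit-code check. A rough sketch of the idea (not the actual backitup.sh; the file location and backup command are placeholders):

#!/bin/sh
# Run the backup only if the period has elapsed, and update the
# timestamp only if the backup succeeds.
LASTRUN="$HOME/.backup.lastrun"        # placeholder last-run file
PERIOD=86400                           # seconds between backups
BACKUP="/home/user/bin/cryptshot.sh"   # placeholder backup command

now=$(date +%s)
last=$(cat "$LASTRUN" 2>/dev/null || echo 0)

if [ $((now - last)) -ge "$PERIOD" ]; then
    if $BACKUP; then
        echo "$now" > "$LASTRUN"
    fi
fi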
The period can either be specified in seconds or with the strings DAILY, WEEKLY or MONTHLY. The behaviour of DAILY differs from 86400 (24 hours in seconds). With the latter configuration, the backup program will only attempt to execute once per 24-hour period. If DAILY is specified, the backup may be completed successfully at, for example, 23:30 one day and again at 00:15 the following day.
Use
You still want to backup a laptop to an external USB drive once per day with cryptshot. Rather than calling cryptshot.sh, you call backitup.sh.
Tell the script that you wish to complete daily backups, and then use cron to call the script more frequently than the desired backup period. For my local backups, I call backitup.sh every hour.
The default period of backitup.sh is DAILY, so in this case I don’t have to provide a period of my own. But I also do weekly and monthly backups, so I need two more entries to execute cryptshot with those periods.
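My crontab entries end up looking something like this (the paths and last-run file locations are examples, not prescriptions):
# All three run hourly; backitup.sh decides which, if any, are actually due.
@hourly /home/user/bin/backitup.sh -l /home/user/.backitup/daily -b "/home/user/bin/cryptshot.sh daily"
@hourly /home/user/bin/backitup.sh -p WEEKLY -l /home/user/.backitup/weekly -b "/home/user/bin/cryptshot.sh weekly"
@hourly /home/user/bin/backitup.sh -p MONTHLY -l /home/user/.backitup/monthly -b "/home/user/bin/cryptshot.sh monthly"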
All three of these entries are executed hourly, which means that at the top of every hour, my laptop attempts to back itself up. As long as the USB drive is plugged in during one of those hours, the backup will complete. If cryptshot is executed, but fails, another attempt will be made the next hour. Daily backups will only be successfully completed, at most, once per day; weekly backups, once per week; and monthly backups, once per month. This setup works well for me, but if you want a higher assurance that your daily backups will be completed every day you could change the cron interval to */5 * * * *, which will result in cron executing backitup.sh every 5 minutes.
What if you want to perform daily online backups with Tarsnapper?
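One more entry handles it; as before, the paths are examples:
@hourly /home/user/bin/backitup.sh -l /home/user/.backitup/tarsnap -b "tarsnapper.py"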
At the top of every hour your laptop will attempt to run Tarsnap via Tarsnapper. If the laptop is offline, it will try again the following hour. If Tarsnap begins but you go offline before it can complete, the backup will be resumed the following hour.
The script can of course be called with something other than cron. Put it in your ~/.profile and have your backups attempt to execute every time you log in. Add it to your network manager and have your online backups attempt to execute every time you get online. If you’re using something like udev, have your local backups attempt to execute every time your USB drive is plugged in.
The Special Case
The final configuration option of backitup.sh represents a special case. If the script runs and it can’t find the specified file, the default behaviour is to assume that this is the first time it has ever run: it creates the file and executes the backup. That is what most users will want, but this behaviour can be changed.
When I first wrote backitup.sh it was to help manage backups of my Dropbox folder. Dropbox doesn’t support client-side encryption, which means users need to handle encryption themselves. The most common way to do this is to create an encfs file-system or two and place those within the Dropbox directory. That’s the way I use Dropbox.
I wanted to backup all the data stored in Dropbox with Tarsnap. Unlike Dropbox, Tarsnap does do client-side encryption, so when I backup my Dropbox folder, I don’t want to actually backup the encrypted contents of the folder – I want to backup the decrypted contents. That allows me to take better advantage of Tarsnap’s deduplication and it makes restoring backups much simpler. Rather than comparing inodes and restoring a file using an encrypted filename like 6,8xHZgiIGN0vbDTBGw6w3lf/1nvj1,SSuiYY0qoYh-of5YX8 I can just restore documents/todo.txt.
If my encfs filesystem mount point is ~/documents, I can configure Tarsnapper to create an archive of that directory. But if for some reason the filesystem is not mounted when Tarsnapper is called, I would be making a backup of an empty directory. That’s a waste of time. The solution is to tell backitup.sh to put the last-run file inside the encfs filesystem. If it can’t find the file, that means the filesystem isn’t mounted. If that’s the case, I tell it to call the script I use to automatically mount the encfs filesystem (which, the way I have it set up, requires no interaction from me).
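Putting that together, the entry looks roughly like this, with the mount script name and paths standing in for whatever you actually use:
# If the last-run file is missing, the encfs filesystem isn't mounted,
# so run the mount script instead of backing up an empty directory.
@hourly /home/user/bin/backitup.sh -l /home/user/documents/.lastrun -b "tarsnapper.py" -n "/home/user/bin/mount-documents"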
backitup.sh solves all of my backup scheduling problems. I only call backup programs directly if I want to make an on-demand backup. All of my automated backups go through backitup.sh. If you’re interested in the script, you can download it directly from GitHub. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
I’m no Bitcoin evangelist. I have my reservations about the currency. But one common critique that consistently angers me is that bitcoins are not secure because there have been instances of theft. This is equivalent to claiming that Federal Reserve Notes are insecure because people get mugged. Secure your shit.
Late Saturday morning I loaded up my bike and took off for the mountains. The plan was to head out to Granite Falls, pick up the Mountain Loop Highway, and spend the night somewhere around Barlow Pass. The following day I would continue along the highway to Darrington and head back to Everett to complete the loop.
The way out of town is an unremarkable ride along the shoulders of two state highways. It doesn’t take long to reach Granite Falls, where the Mountain Loop Highway begins.
I’ve driven the highway into the mountains countless times – many of my favorite trailheads are located off of it – but this was the first time I had pedalled it. The highway itself doesn’t get too high. The highest point is probably a little under 3,000 feet, but I was starting from sea level, so there were a few hills to climb. I shifted down to my granny gear and prepared to spin my way up, past the quarries and mines and the masonic park that always gives me a slight feeling of discomfort. The National Forest begins near the top.
After entering the forest the highway follows along next to the south fork of the Sauk, which makes for a pleasant, flat-ish ride, with very little motorized traffic to interfere. Most of the established campgrounds are closed this time of year – “closed” being National Forest Service code for empty and free.
I had initially intended to spend the night somewhere along this stretch, but it was only mid-afternoon. I decided to head a bit further and camp around Barlow Pass. The pavement ends at the pass and the highway loops down and out of the mountains along a narrow gravel road. When I reached the pass the Sheriff was there, sorting through his ring of hundreds of keys, attempting to locate the one to unlock the gate at the start of the road to Monte Cristo. Apparently a young boy was hiking out there with his family and cut himself on a piece of old mining equipment. The search and rescue team were on their way up but would need to get through the gate. It must have been some cut to warrant the response.
There was plenty of light left at the pass. I amended my plan once more and took off down the unpaved section of road, deciding to camp somewhere in the woods below. The road is in pretty decent condition, by Forest Service standards. There aren’t too many potholes and the gravel isn’t laid on too thick. It makes for a bumpy ride, but my skinny 700x25 tires handled it fine. I pulled over for a quick dinner at 6PM and then got back on the road, intending to ride till 7PM. That would give me about 30 minutes to make camp before it got dark.
At 7PM I found myself near a picnic area with a short trail that led into the woods to a small clearing with a table and enough room for a tarp. I pitched camp there and went to bed not long after.
The morning was chill. I rose with the sun. After fueling up with oatmeal I shook the frost off my tarp and packed everything back on the bike. I only had a couple bumpy miles to pedal before reaching the pavement again.
I was perhaps too optimistic when I packed only fingerless gloves. The cold air caused a sharp pain in my fingertips. I stoked the furnace a bit by breaking off a few hunks from a bar of dark chocolate that I kept readily accessible in my frame bag. Either the chocolate or the rising sun worked.
At Darrington I reached the end of the Mountain Loop Highway. The next leg of the trip would be along SR 530. It was the stretch I was looking forward to most. I had never ridden it before, but each time I drove along it the road struck me as a wonderful stretch of pavement to pedal. It travels through the foothills of the Cascades, along pastoral scenes set against mountain backdrops. Horses and cows outnumber motorized traffic.
The road lived up to my expectations. I cruised along the meandering highway until reaching Arlington in the late morning. There I stopped at the Shire Cafe (in the same building as the Mirkwood game store, Mordor tattoo, and Rivendell hair salon) for a breakfast burrito – enough fuel for 53 miles. A block away I picked up the Centennial Trail.
I pedalled down the trail, through the woods of the county, before cutting west out to Marysville, from where I went over the sloughs and the Snohomish river to complete my loop back in Everett. In town I detoured to the waterfront farmer’s market to conclude the trip with an apple and baguette of victory.
This is the best ride I’ve done yet entirely within Snohomish County. The route looked roughly like this. I think that my mileage was closer to 130 miles. The trip could probably be done in a full day, but I enjoyed it as a leisurely overnight trip. It took me 26 hours, door to door.
Jacques Mattheij discusses the history of computing as a pendulum swinging between closed walled gardens and open, free systems.
If my observations are correct then such a swing is about to happen, and this time we had better get it right. Things that point in the direction of a swing are an increasing awareness of ordinary computer users with respect to their privacy and who actually owns all that data. The fragmenting of the smartphone and tablet markets will lead to some more openness and at some point all the bits and pieces to create true open hardware will fall into place.
…
Remember that there are two possible outcomes, one where the internet successfully manages to cause a swing to the edge of freedom, and another where it is successfully co-opted by big money and governments in a concerted effort to give us all a subscription to online Life-As-A-Service where you will be beholden to some party for the ability to gain access to knowledge, information, the right to communicate and so on and where the act of programming will be as tightly regulated as the export of cryptography was.
Earlier this year I switched from Duplicity to rsnapshot for my local backups. Duplicity uses a full + incremental backup schema: the first time a backup is executed, all files are copied to the backup medium. Successive backups copy only the deltas of changed objects. Over time this results in a chain of deltas that need to be replayed when restoring from a backup. If a single delta is somehow corrupted, the whole chain is broken. To minimize the chances of this happening, the common practice is to complete a new full backup every so often – I usually do a full backup every 3 or 4 weeks. Completing a full backup takes time when you’re backing up hundreds of gigabytes, even over USB 3.0. It also takes up disk space. I keep around two full backups when using Duplicity, which means I’m using a little over twice as much space on the backup medium as what I’m backing up.
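In Duplicity terms, that periodic refresh looks something like this (the target URL is illustrative):
# Start a fresh chain every few weeks...
$ duplicity full /home/user file:///mnt/backup/duplicity
# ...and let the cheaper incremental backups build on top of it.
$ duplicity incremental /home/user file:///mnt/backup/duplicity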
The backup schema that rsnapshot uses is different. The first time it runs, it completes a full backup. Each time after that, it completes what could be considered a “full” backup, but unchanged files are not copied over. Instead, rsnapshot simply hard links to the previously copied file. If you modify very large files regularly, this model may be inefficient, but for me – and I think for most users – it’s great. Backups are speedy, disk space usage on the backup medium isn’t too much more than the data being backed up, and I have multiple full backups that I can restore from.
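rsnapshot handles the linking itself, but the underlying trick can be expressed with rsync’s --link-dest option; here is a sketch with made-up paths:
# Files unchanged since the previous snapshot become hard links to it,
# so each snapshot looks complete but costs almost no extra disk space.
$ rsync -a --delete --link-dest=/mnt/backup/daily.1/ /home/user/ /mnt/backup/daily.0/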
The great strength of Duplicity – and the great weakness of rsnapshot – is encryption. Duplicity uses GnuPG to encrypt backups, which makes it one of the few solutions appropriate for remote backups. In contrast, rsnapshot does no encryption. That makes it completely inappropriate for remote backups, but the shortcoming can be worked around when backing up locally.
My local backups are done to an external, USB hard drive. Encrypting the drive is simple with LUKS and dm-crypt. For example, to encrypt /dev/sdb:
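Setting it up takes only a handful of commands. A minimal sketch, assuming the whole disk is used and formatted as ext4 (this destroys any existing data, so double-check the device name):
$ cryptsetup luksFormat /dev/sdb
$ cryptsetup luksOpen /dev/sdb backup_drive
$ mkfs.ext4 /dev/mapper/backup_drive
$ cryptsetup luksClose backup_drive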
At this point, the drive will be encrypted with a passphrase. To make it easier to mount programmatically, I also add a key file full of some random data generated from /dev/urandom.
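Generating the key and adding it to the LUKS header looks something like this (where you keep the key file is up to you):
$ dd if=/dev/urandom of=/root/backup.key bs=1024 count=4
$ chmod 0400 /root/backup.key
$ cryptsetup luksAddKey /dev/sdb /root/backup.key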
There are still a few considerations to address before backups to this encrypted drive can be completed automatically with no user interaction. Since the target is a USB drive and the source is a laptop, there’s a good chance that the drive won’t be plugged in when the scheduler kicks off the backup program. If it is plugged in, the drive needs to be decrypted before calling rsnapshot to do its thing. I wrote a wrapper script called cryptshot to address these issues.
Cryptshot is configured with the UUID of the target drive and the key file used to decrypt the drive. When it is executed, the first thing it does is look to see if the UUID exists. If it does, that means the drive is plugged in and accessible. The script then decrypts the drive with the specified key file and mounts it. Finally, rsnapshot is called to execute the backup as usual. Any argument passed to cryptshot is passed along to rsnapshot. That means cryptshot becomes a drop-in replacement for encrypted rsnapshot backups. Where I previously called rsnapshot daily, I now call cryptshot daily. Everything after that point just works, with no interaction needed from me.
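In crontab terms the swap is a one-line change; the paths here are, as always, examples:
# Before: rsnapshot called directly.
# @daily /usr/bin/rsnapshot daily
# After: cryptshot decrypts and mounts the drive, then hands the argument to rsnapshot.
@daily /home/user/bin/cryptshot.sh daily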
If you’re interested in cryptshot, you can download it directly from GitHub. The script could easily be modified to execute a backup program other than rsnapshot. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
If you’re at all interested in bikes, lightweight backpacking, or a combination thereof, you must read this book.
In 1986, Dick and Nick rode lightweight, steel race bikes from the Bay of Bengal across Bangladesh, up and over the Himalaya, across the Tibetan Plateau, and through the Gobi desert to the point of the earth furthest from the sea. They were sawing their toothbrushes in half and cutting extraneous buckles off of their panniers before “bikepacking” (or “ultralight backpacking”) was a thing. The appendix includes a complete gear list and relevant discussion.
More gears seem unnecessary, but the market has other ideas. I wanted to upgrade my brifters, but there were practically no options, so last week I made the jump to 9-speed. Now I’m running an 11-32 9-speed cassette and a 30/42/52 triple chainring.
Tarsnap bills itself as “online backups for the truly paranoid”. I began using the service last January. It fast became my preferred way to backup to the cloud. It stores data on Amazon S3 and costs $0.30 per GB per month for storage and $0.30 per GB for bandwidth. Those prices are higher than just using Amazon S3 directly, but Tarsnap implements some impressive data de-duplication and compression that results in the service costing very little. For example, I currently have 67 different archives stored in Tarsnap from my laptop. They total 46GB in size. De-duplicated that comes out to 1.9GB. After compression, I only pay to store 1.4GB. Peanuts.
Of course, the primary requirement for any online backup service is encryption. Tarsnap delivers. And, most importantly, the Tarsnap client is open-source, so the claims of encryption can actually be verified by the user. The majority of for-profit, online backup services out there fail on this critical point.
So Tarsnap is amazing and you should use it. The client follows the Unix philosophy: “do one thing and do it well”. It’s basically like tar. It can create archives, read the contents of an archive, extract archives, and delete archives. For someone coming from an application like Duplicity, the disadvantage to the Tarsnap client is that it doesn’t include any way to automatically manage backups. You can’t tell Tarsnap how many copies of a backup you wish to keep, or how long backups should be allowed to age before deletion.
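The day-to-day commands mirror tar. The archive name and path below are just examples:
$ tarsnap -c -f documents-2012-09-12 /home/user/documents   # create an archive
$ tarsnap --list-archives                                   # list stored archives
$ tarsnap -t -f documents-2012-09-12                        # read an archive's contents
$ tarsnap -x -f documents-2012-09-12                        # extract an archive
$ tarsnap -d -f documents-2012-09-12                        # delete an archive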
Thanks to the de-duplication and compression, there is little economic incentive to delete old backups – keeping them around likely won’t cost you much extra. But I like to keep things clean and minimal. If I haven’t used an online backup in 4 weeks, I generally consider it stale and have no further use for it.
To manage my Tarsnap backups, I wrote a Python script called Tarsnapper. The primary intent was to create a script that would automatically delete old archives. It does this by accepting a maximum age from the user. Whenever Tarsnapper runs, it gets a list of all Tarsnap archives. The timestamp is parsed out from the list, and any archive older than the maximum allowed age is deleted. This is seamless, and means I never need to manually intervene to clean my archives.
Tarsnapper also provides some help for creating Tarsnap archives. It allows the user to define any number of named archives and the directories that those archives should contain. On my laptop I have four different directories that I backup with Tarsnap, three of them in one archive and the last in another archive. Tarsnapper knows about this, so whenever I want to backup to Tarsnap I just call a single command.
Tarsnapper can also automatically add a suffix to the end of each archive name. This makes it easier to know which archive is which when you are looking at a list. By default, the suffix is the current date and time.
Configuring Tarsnapper can be done either directly by changing the variables at the top of the script, or by creating a configuration file named tarsnapper.conf in your home directory. The config file on my laptop looks like this:
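What follows is a simplified sketch rather than a verbatim copy; the option names and paths are illustrative, and the authoritative format is documented with the script on GitHub. In spirit it is just a maximum archive age plus a few named archives mapped to directories:
# Illustrative sketch of a tarsnapper.conf; option names and paths are examples.
maximum-age: 4W
archives:
    documents: /home/user/documents, /home/user/library, /home/user/work
    photos: /home/user/photos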
There is also support for command-line arguments to specify the location of the configuration file to use, to delete old archives and exit without creating new archives, and to execute only a single named archive rather than all of those that you may have defined.
$ tarsnapper.py --help
usage: tarsnapper.py [-h] [-c CONFIG] [-a ARCHIVE] [-r]
A Python script to manage Tarsnap archives.
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
Specify the configuration file to use.
-a ARCHIVE, --archive ARCHIVE
Specify a named archive to execute.
-r, --remove Remove old archives and exit.
It makes using a great service very simple. All of my backups can be executed with a single call to Tarsnapper. Stale archives are deleted, saving me precious picodollars. I use this system on my laptop, as well as on multiple servers. If you’re interested in it, Tarsnapper can be downloaded directly from GitHub. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
I took a year off from blogging. That wasn’t intentional. I just didn’t have anything to say for a while. Then I did have something to say, but I was tired of how the website looked. If the design doesn’t excite me I tend not to want to blog. (Call me vain, but I want my words to look good.) And redesigning the website – well, that requires an entirely different set of motivations to tackle. It took me some time to get that motivation, and then before I knew it we were here: 10 days short of a year.
During the development process I referred to this design as “mark two”, as it was the second idea I tried out.
The website still runs on Django. The blog is still powered by Vellum, my personal blog application. I’ve been hacking on it for over a year now (even when this website was inactive) and it is much improved since the last time I mentioned it. In the past six months I’ve seen the light of CSS preprocessors. All of the styling for this design is written in SASS and uses the excellent Compass framework. The responsive layout is built with Susy.
If you’re interested in these technical details, you will also be interested to know that the entire website is now open-source. You can find it on GitHub. Fork it, hack it, or borrow some of my CSS for your website.
The other big news is that I have begun to categorize blog posts. Yeah, it’s 2012 and I’m a little late to the party on that one. You may recall that I only began to tag posts in 2008. As it stands right now, all posts are just placed in the great big amoeba of a category called “General”. Eventually, they will all have more meaningful categories – I hope. But it will be a while.
Things ought to be more active around here for the foreseeable future.