GOESImage is a bash script which downloads the latest imagery from the NOAA Geostationary Operational Environmental Satellites and sets it as the desktop background via feh. If you don’t use feh, it should be easy to plug GOESImage into any desktop background control program.
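With feh itself, setting the background comes down to a single command; a sketch, with the image path here being hypothetical:

$ feh --bg-fill ~/.goesimage/latest-goes.jpg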
I wrote GOESImage after a few years of using himawaripy, a program that provides imagery of the Asia-Pacific region from Himawari 8, a Japanese weather satellite. I like seeing the Earth, and I’ve found that real-time imagery of my location is actually useful for identifying the approach of large-scale weather systems. NOAA’s nighttime multispectral infrared coloring is pretty neat, too.
The first password manager I ever used was a simple text file encrypted with GnuPG. When I needed a password I would decrypt the file, read it in Vim, and copy the required entry to the system clipboard. This system didn’t last. At the time I wasn’t using GnuPG for much else, and this was in the very beginning of my Vim days, when the program seemed cumbersome and daunting. I shortly moved to other, purpose-built password managers.
After some experimentation I landed on KeePassX, which I used for a number of years. Some time ago I decided that I wanted to move to a command-line solution. KeePassX and a web browser were the only graphical applications that I was using with any regularity. I could see no need for a password manager to have a graphical interface, and the GUI’s dependency on a mouse decreased my productivity. After a cursory look at the available choices I landed right back where I started all those years ago: Vim and GnuPG.
These days Vim is my most used program outside of a web browser and I use GnuPG daily for handling the majority of my encryption needs. My greater familiarity with both of these tools is one of the reasons I’ve been successful with the system this time around. I believe the other reason is my more systematic approach.
Structure
The power of this system comes from its simplicity: passwords are stored in plain text files that have been encrypted with GnuPG. Every platform out there has some implementation of the PGP protocol, so the files can easily be decrypted anywhere. After they’ve been decrypted, there’s no fancy file formats to deal with. It’s all just text, which can be manipulated with a plethora of powerful tools. I favor reading the text in Vim, but any text editor will do the job.
All passwords are stored within a directory called ~/pw. Within this directory are multiple files. Each of these files can be thought of as a separate password database. I store bank information in financial.gpg. Login information for various shopping websites are in ecommerce.gpg. My email credentials are in email.gpg. All of these entries could very well be stored in a single file, but breaking it out into multiple files allows me some measure of access control.
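A listing of the directory shows the idea (only the three databases mentioned here):

$ ls ~/pw
ecommerce.gpg  email.gpg  financial.gpg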
Access
I regularly use two computers: my laptop at home and a desktop machine at work. I trust my laptop. It has my GnuPG key on it and it should have access to all password database files. I do not place complete trust in my machine at work. I don’t trust it enough to give it access to my GnuPG key, and as such I have a different GnuPG key on that machine that I use for encryption at work.
Having passwords segregated into multiple database files allows me to encrypt the different files to different keys. Every file is encrypted to my primary GnuPG key, but only some are encrypted with my work key. Login credentials needed for work are encrypted to the work key. I have no need to login to my bank accounts at work, and it wouldn’t be prudent to do so on a machine that I do not fully trust, so the financial.gpg file is not encrypted to my work key. If someone compromises my work computer, they will still be no closer to accessing my banking credentials.
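Encrypting a file to multiple recipients is a single GnuPG invocation; a sketch with hypothetical key IDs (the result, ecommerce.gpg, can be decrypted by either key):

$ gpg --encrypt --recipient 0x1A2B3C4D --recipient 0x5E6F7A8B ecommerce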
Git
The ~/pw directory is a git repository. This gives me version control on all of my passwords. If I accidentally delete an entry I can always get it back. It also provides syncing and redundant storage without depending on a third-party like Dropbox.
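Nothing about the repository is special; a change is committed like any other, and synced to a remote of your choosing:

$ cd ~/pw
$ git add email.gpg
$ git commit -m "Update email credentials"
$ git push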
Keys
An advantage of using a directory full of encrypted files as my password manager is that I’m not limited to only storing usernames and passwords. Any file can be added to the repository. I keep keys for backups, SSH keys, and SSL keys (all of which have been encrypted with my GnuPG key) in the directory. This gives me one location for all of my authentication credentials, which simplifies the locating and backing up of these important files.
Markup
Each file is structured with Vim folds and indentation. There are various ways for Vim to fold text. I use markers, sticking with the default {{{/}}} characters. A typical password entry will look like this:

Super Ecommerce{{{
    user: foobar
    pass: g0d
    Comments{{{
        birthday: 1/1/1911
        first car: delorean
    }}}
}}}

Each file is full of entries like this. Certain entries are grouped together within other folds for organization. Certain entries may have comments so that I have a record of the false personally identifiable information the service requested when I registered.
Following a consistent structure like this makes the file easier to navigate and allows for the possibility of the file being parsed by a script. The fold markers come into play with my Vim configuration.
Vim
I use Vim with the vim-gnupg plugin. This makes editing of encrypted files seamless. When opening existing files, the contents are decrypted. When opening new files, the plugin asks which recipients the file should be encrypted to. When a file is open, leaking the clear text is avoided by disabling viminfo, swapfile, and undofile. I run gpg-agent so that my passphrase is remembered for a short period of time after I use it. This makes it easy and secure to work with (and create) the encrypted files with Vim. I define a few extra options in my vimrc to facilitate working with passwords.
""""""""""""""""""""" GnuPG Extensions """""""""""""""""""""" Tell the GnuPG plugin to armor new files.letg:GPGPreferArmor=1" Tell the GnuPG plugin to sign new files.letg:GPGPreferSign=1
augroup GnuPGExtra
" Set extra file options.
autocmd BufReadCmd,FileReadCmd *.\(gpg\|asc\|pgp\)call SetGPGOptions()" Automatically close unmodified files after inactivity.
autocmd CursorHold *.\(gpg\|asc\|pgp\) quit
augroup END
function SetGPGOptions()" Set updatetime to 1 minute.setupdatetime=60000" Fold at markers.setfoldmethod=marker
" Automatically close all folds.setfoldclose=all" Only open folds with insert commands.setfoldopen=insert
endfunction
The first two options simply tell vim-gnupg to always ASCII-armor and sign new files. These have nothing particular to do with password management, but are good practices for all encrypted files.
The first autocmd calls a function which holds the options that I wanted applied to my password files. I have these options apply to all encrypted files, although they’re intended primarily for use when Vim is acting as my password manager.
Folding
The primary shortcoming with using an encrypted text file as a password database is the lack of protection against shoulder-surfing. After the file has been decrypted and opened, anyone standing behind you can look over your shoulder and view all the entries. This is solved with folds and is what most of these extra options address.
I set foldmethod to marker so that Vim knows to look for all the {{{/}}} characters and use them to build the folds. Then I set foldclose to all. This closes all folds unless the cursor is in them. This way only one fold can be open at a time – or, to put it another way, only one password entry is ever visible at once.
The final fold option instructs Vim when it is allowed to open folds. Folds can always be opened manually, but by default Vim will also open them for many other cases: if you navigate to a fold, jump to a mark within a fold or search for a pattern within a fold, they will open. By setting foldopen to insert I instruct Vim that the only time it should automatically open a fold is if my cursor is in a fold and I change to insert mode. The effect of this is that when I open a file, all folds are closed by default. I can navigate through the file, search and jump through matches, all without opening any of the folds and inadvertently exposing the passwords on my screen. The fold will open if I change to insert mode within it, but it is difficult to do that by mistake.
I have my spacebar set up to toggle folds within Vim (a mapping along the lines of nnoremap <space> za). After I have navigated to the desired entry, I can simply whack the spacebar to open it and copy the credential that I need to the system clipboard. At that point I can whack the spacebar again to close the fold, or I can quit Vim. Or I can simply wait.
Locking
The other special option I set is updatetime. Vim uses this option to determine when it should write swap files for crash recovery. Since vim-gnupg disables swap files for decrypted files, this has no effect. I use it for something else.
In the second autocmd I tell Vim to close itself on CursorHold. CursorHold is triggered whenever no key has been pressed for the time specified by updatetime. So the effect of this is that my password files are automatically closed after 1 minute of inactivity. This is similar to KeePassX’s behaviour of “locking the workspace” after a set period of inactivity.
Clipboard
To easily copy a credential to the system clipboard from Vim I have two shortcuts mapped.
" Yank WORD to system clipboard in normal mode
nmap <leader>y "+yE
" Yank selection to system clipboard in visual mode
vmap <leader>y "+y
These mappings yank to Vim’s + register, which uses an X selection rather than a cut-buffer. Selections are “owned” by an application, and disappear when that application (e.g., Vim) exits, thus losing the data, whereas cut-buffers are stored within the X server itself and remain until written over or the X server exits (e.g., upon logging out).
The result is that I can copy a username or password by placing the cursor on its first character and hitting <leader>y. I can paste the credential wherever it is needed. After I close Vim, or after Vim closes itself after 1 minute of inactivity, the credential is removed from the clipboard. This replicates KeePassX’s behaviour of clearing the clipboard so many seconds after a username or password has been copied.
Generation
Passwords should be long and unique. To satisfy this, any password manager needs some sort of password generator. Vim provides this with its ability to call and read external commands. I can tell Vim to call the standard-issue pwgen program to generate a secure 24-character password utilizing special characters and insert the output at the cursor, like this:
:r!pwgen -sy 24 1
Backups
The ~/pw directory is backed up in the same way as most other things on my hard drive: to Tarsnap via Tarsnapper, to an external drive via rsnapshot and cryptshot, and via rsync to a mirror drive. The issue with these standard backups is that they’re all encrypted and the keys to decrypt them are stored in the password manager. If I lose ~/pw I’ll have plenty of backups around, but none that I can actually access. I address this problem with regular backups to optical media.
At the beginning of every month I burn the password directory to two CDs. One copy is stored at home and the other at an off-site location. I began these optical media backups in December, so I currently have two sets consisting of five discs each. Any one of these discs will provide me with the keys I need to access a backup made with one of the more frequent methods.
Of course, all the files being burned to these discs are still encrypted with my GnuPG key. If I lose that key or passphrase I will have no way to decrypt any of these files. Protecting one’s GnuPG key is another problem entirely. I’ve taken steps that make me feel confident in my ability to always be able to recover a copy of my key, but none that I’m comfortable discussing publicly.
Shell
# Set the password database directory.
PASSDIR=~/pw

# Create or edit password databases.
pw() {
    cd "$PASSDIR"
    if [ ! -z "$1" ]; then
        $EDITOR $(buildfile "$1")
        cd "$OLDPWD"
    fi
}
This allows me to easily open any password file from wherever I am in the filesystem without specifying the full path. (The buildfile() helper, which appends the proper file extension, is defined later, alongside my note functions.) These two commands are equivalent, but the one utilizing pw() requires fewer keystrokes:
$ vim ~/pw/financial.gpg
$ pw financial
The function changes to the password directory before opening the file so that while I’m in Vim I can drop down to a shell with :sh and already be in the proper directory to manipulate the files. After I close Vim the function returns me to the previous working directory.
This still required a few more keystrokes than I like, so I configured my shell to perform autocompletion in the directory. If financial.gpg is the only file in the directory beginning with an “f”, typing pw f<tab> is all that is required to open the file.
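In zsh, a completion definition along these lines does the trick, mirroring the one I show later for notes:

compctl -W $PASSDIR -f pw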
Simplicity
This setup provides simplicity, power, and portability. It uses the same tools that I already employ in my daily life, and does not require the use of the mouse or any graphical windows. I’ve been happily utilizing it for about 6 months now.
Initially I had thought I would supplement the setup with a script that would search the databases for a desired entry, using some combination of grep, awk and cut, and then copy it to my clipboard via xsel. As it turns out, I haven’t felt the desire to do this. Simply opening the file in Vim, searching for the desired entry, opening the fold and copying the credential to the system clipboard is quick enough. The whole process, absent of typing in my passphrase, takes me only a couple of seconds.
Resources
I’m certainly not the first to come up with the idea of managing passwords with Vim. These resources were particularly useful to me when I was researching the possibilities.
As a long-time user of Unix-like systems, I prefer to do as much work in the command-line as possible. I store data in plain text whenever appropriate. I edit in vim and take advantage of the pipeline to manipulate the data with powerful tools like awk and grep.
Notes are one such instance where plain text makes sense. All of my notes – which are scratch-pads for ideas, reference material, logs, and whatnot – are kept as individual text files in the directory ~/documents/notes. The entire ~/documents directory is synced between my laptop and my work computer, and of course it is backed up with tarsnap.
When I want to read, edit or create a note, my habit is simply to open the file in vim.
$ vim ~/documents/notes/todo.txt
When I want to view a list of my notes, I can just ls the directory. I pass along the -t and -r flags. The first flag sorts the files by modification date, newest first. The second flag reverses the order. The result is that the most recently modified files end up at the bottom of the list, nearest the prompt. This allows me to quickly see which notes I have recently created or changed. These notes are generally active – they’re the ones I’m currently doing something with, so they’re the ones I want to see. Using ls to see which files have been most recently modified is incredibly useful, and a behaviour that I use often enough to have created an alias for it.
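That alias is simply ls with the two flags baked in; lt is the name the functions below assume:

alias lt='ls -tr'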
The functions originally came from a blog post I ran across. The first, n(), takes the name of the note as an argument – minus the file extension – and opens it.
function n {
    nano ~/Dropbox/Notes/$1.txt
}
I liked the idea. It would allow me to open a note from anywhere in the filesystem without specifying the full path. After changing the editor and the path, I could open the same note as before with far fewer keystrokes.
$ n todo
I needed to make a few changes to the function to increase its flexibility.
First, the extension. Most of my notes have .txt extensions. Some have a .gpg extension.[1] Some have no extension. I didn’t want to force the .txt extension in the function.
If I specified a file extension, that extension should be used. If I failed to specify the extension, I wanted the function to open the file as I specified it only if it existed. Otherwise, I wanted it to look for that file with a .gpg extension and open that if it was found. As a last resort, I wanted it to open the file with a .txt extension regardless of whether it existed or not. I implemented this behaviour in a separate function, buildfile(), so that I could take advantage of it wherever I wanted.
# Take a text file and build it with an extension, if needed.
# Prefer gpg extension over txt.
buildfile() {
    # If an extension was given, use it.
    if [[ "$1" == *.* ]]; then
        echo "$1"
    # If no extension was given...
    else
        # ... try the file without any extension.
        if [ -e "$1" ]; then
            echo "$1"
        # ... try the file with a gpg extension.
        elif [ -e "$1".gpg ]; then
            echo "$1".gpg
        # ... use a txt extension.
        else
            echo "$1".txt
        fi
    fi
}
I then rewrote the original note function to take advantage of this.
# Set the note directory.
NOTEDIR=~/documents/notes

# Create or edit notes.
n() {
    # If no note was given, list the notes.
    if [ -z "$1" ]; then
        lt "$NOTEDIR"
    # If a note was given, open it.
    else
        $EDITOR $(buildfile "$NOTEDIR"/"$1")
    fi
}
Now I can edit the note ~/documents/notes/todo.txt by simply passing along the filename without an extension.
$ n todo
If I want to specify the extension, that will work too.
$ n todo.txt
If I have a note called ~/documents/notes/world-domination.gpg, I can edit that without specifying the extension.
$ n world-domination
If I have a note called ~/documents/notes/readme, I can edit that and the function will respect the lack of an extension.
$ n readme
Finally, if I want to create a note, I can just specify the name of the note and a file with that name will be created with a .txt extension.
The other change I made was to make the function print a reverse-chronologically sorted list of my notes if it was called with no arguments. This allows me to view my notes by typing a single character.
$ n
The original blog post also included a function ns() for searching notes by their title.
function ns {
    ls -c ~/Dropbox/Notes | grep $1
}
I thought this was a good idea, but I considered the behaviour to be finding a note rather than searching a note. I renamed the function to reflect its behaviour, took advantage of my ls alias, and made the search case-insensitive. I also modified it so that if no argument was given, it would simply print an ordered list of the notes, just like n().
# Find a note by title.
nf() {
    if [ -z "$1" ]; then
        lt "$NOTEDIR"
    else
        lt "$NOTEDIR" | grep -i "$1"
    fi
}
I thought it would be nice to also have a quick way to search within notes. That was accomplished with ns(), the simplest function of the trio.
# Search within notes.
ns() {
    cd "$NOTEDIR"
    grep -rin "$1"
    cd "$OLDPWD"
}
I chose to change into the note directory before searching so that the results do not have the full path prefixed to the filename. Thanks to the first function I don’t need to specify the full path of the note to open it, and this makes the output easier to read.
$ ns "take over"
world-domination.txt:1:This is my plan to take over the world.
$ n world-domination
To finish it up, I added completion to zsh so that I can tab-complete filenames when using n.
# Set autocompletion for notes.
compctl -W $NOTEDIR -f n
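compctl is zsh-specific. For bash, a rough equivalent would use complete (the function name here is hypothetical):

_notes() {
    # Complete against the filenames in the note directory.
    COMPREPLY=($(compgen -W "$(ls "$NOTEDIR")" -- "${COMP_WORDS[COMP_CWORD]}"))
}
complete -F _notes n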
If you’re interested in seeing some of the other ways that I personalize my working environment, all of my dotfiles are on GitHub. The note functions are in shellrc, which is my shell-agnostic configuration file that I source from both zshrc and bashrc.[2]
Notes
1. I use vim with the gnupg.vim plugin for seamless editing of PGP-encrypted files.
2. I prefer zsh, but I do still find myself in bash on some machines. I find it prudent to maintain configurations for both shells. There's a lot of cross-over between them and I like to stick to the DRY principle.
A laptop presents some problems for reliably backing up data. Unlike a server, the laptop may not always be turned on. When it is on, it may not be connected to the backup medium. If you’re doing online backups, the laptop may be offline. If you’re backing up to an external drive, the drive may not be plugged in. To address these issues I wrote a shell script called backitup.sh.
The Problem
Let’s say you want to backup a laptop to an external USB drive once per day with cryptshot.
You could add a cron entry to call cryptshot.sh at a certain time every day. What if the laptop isn’t turned on? What if the drive isn’t connected? In either case the backup will not be completed. The machine will then wait a full 24 hours before even attempting the backup again. This could easily result in weeks passing without a successful backup.
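For example, a crontab entry along these lines (the time of day is arbitrary):

# Attempt a daily backup at 13:00.
0 13 * * * cryptshot.sh daily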
If you’re using anacron, or one of its derivatives, things get slightly better. Instead of specifying a time to call cryptshot.sh, you set the cron interval to @daily. If the machine is turned off at whatever time anacron is set up to execute @daily scripts, all of the commands will simply be executed the next time the machine boots. But that still doesn’t solve the problem of the drive not being plugged in.
The Solution
backitup.sh attempts to perform a backup if a certain amount of time has passed. It monitors for a report of successful completion of the backup. Once configured, you no longer call the backup program directly. Instead, you call backitup.sh. It then decides whether or not to actually execute the backup.
How it works
The script is configured with the backup program that should be executed, the period for which you want to complete backups, and the location of a file that holds the timestamp of the last successful backup. It can be configured either by modifying the variables at the top of the script, or by passing in command-line arguments.
$ backitup.sh -h
Usage: backitup.sh [OPTION...]
Note that any command line arguments overwrite variables defined in the source.
Options:
-p the period for which backups should attempt to be executed
(integer seconds or 'DAILY', 'WEEKLY' or 'MONTHLY')
-b the backup command to execute; note that this should be quoted if it contains a space
-l the location of the file that holds the timestamp of the last successful backup.
-n the command to be executed if the above file does not exist
When the script executes, it reads the timestamp contained in the last-run file. This is then compared to the user-specified period. If the difference between the timestamp and the current time is greater than the period, backitup.sh calls the backup program. If the difference between the stored timestamp and the current time is less than the requested period, the script simply exits without running the backup program.
After the backup program completes, the script looks at the returned exit code. If the exit code is 0, the backup was completed successfully, and the timestamp in the last-run file is replaced with the current time. If the backup program returns a non-zero exit code, no changes are made to the last-run file. In this case, the result is that the next time backitup.sh is called it will once again attempt to execute the backup program.
The period can either be specified in seconds or with the strings DAILY, WEEKLY or MONTHLY. The behaviour of DAILY differs from 86400 (24-hours in seconds). With the latter configuration, the backup program will only attempt to execute once per 24-hour period. If DAILY is specified, the backup may be completed successfully at, for example, 23:30 one day and again at 00:15 the following day.
Use
You still want to backup a laptop to an external USB drive once per day with cryptshot. Rather than calling cryptshot.sh, you call backitup.sh.
Tell the script that you wish to complete daily backups, and then use cron to call the script more frequently than the desired backup period. For my local backups, I call backitup.sh every hour.
The default period of backitup.sh is DAILY, so in this case I don’t have to provide a period of my own. But I also do weekly and monthly backups, so I need two more entries to execute cryptshot with those periods.
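Concretely, the three crontab entries might look something like this (the last-run file locations are assumptions; the flags are those described above):

0 * * * * backitup.sh -l ~/.lastrun/daily -b "cryptshot.sh daily"
0 * * * * backitup.sh -p WEEKLY -l ~/.lastrun/weekly -b "cryptshot.sh weekly"
0 * * * * backitup.sh -p MONTHLY -l ~/.lastrun/monthly -b "cryptshot.sh monthly"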
All three of these entries are executed hourly, which means that at the top of every hour, my laptop attempts to back itself up. As long as the USB drive is plugged in during one of those hours, the backup will complete. If cryptshot is executed, but fails, another attempt will be made the next hour. Daily backups will only be successfully completed, at most, once per day; weekly backups, once per week; and monthly backups, once per month. This setup works well for me, but if you want a higher assurance that your daily backups will be completed every day you could change the cron interval to */5 * * * *, which will result in cron executing backitup.sh every 5 minutes.
What if you want to perform daily online backups with Tarsnapper?
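The same pattern applies; a sketch, with the Tarsnapper invocation and file location assumed:

0 * * * * backitup.sh -l ~/.lastrun/tarsnap -b "tarsnapper.py"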
At the top of every hour your laptop will attempt to run Tarsnap via Tarsnapper. If the laptop is offline, it will try again the following hour. If Tarsnap begins but you go offline before it can complete, the backup will be resumed the following hour.
The script can of course be called with something other than cron. Put it in your ~/.profile and have your backups attempt to execute every time you login. Add it to your network manager and have your online backups attempt to execute every time you get online. If you’re using something like udev, have your local backups attempt to execute every time your USB drive is plugged in.
The Special Case
The final configuration option of backitup.sh represents a special case. If the script runs and it can’t find the specified file, the default behaviour is to assume that this is the first time it has ever run: it creates the file and executes the backup. That is what most users will want, but this behaviour can be changed.
When I first wrote backitup.sh it was to help manage backups of my Dropbox folder. Dropbox doesn’t provide client-side encryption, which means users need to handle encryption themselves. The most common way to do this is to create an encfs file-system or two and place those within the Dropbox directory. That’s the way I use Dropbox.
I wanted to backup all the data stored in Dropbox with Tarsnap. Unlike Dropbox, Tarsnap does do client-side encryption, so when I backup my Dropbox folder, I don’t want to actually backup the encrypted contents of the folder – I want to backup the decrypted contents. That allows me to take better advantage of Tarsnap’s deduplication and it makes restoring backups much simpler. Rather than comparing inodes and restoring a file using an encrypted filename like 6,8xHZgiIGN0vbDTBGw6w3lf/1nvj1,SSuiYY0qoYh-of5YX8 I can just restore documents/todo.txt.
If my encfs filesystem mount point is ~/documents, I can configure Tarsnapper to create an archive of that directory, but if for some reason the filesystem is not mounted when Tarsnapper is called, I would be making a backup of an empty directory. That’s a waste of time. The solution is to tell backitup.sh to put the last-run file inside the encfs filesystem. If it can’t find the file, that means that the filesystem isn’t mounted. If that’s the case, I tell it to call the script I use to automatically mount the encfs filesystem (which, the way I have it setup, requires no interaction from me).
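Expressed with the options above, the entry might look like this (the mount script name and paths are hypothetical):

# If the last-run file is missing, the encfs filesystem isn't mounted; mount it first.
0 * * * * backitup.sh -l ~/documents/.lastrun -b "tarsnapper.py" -n "mount-encfs.sh"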
backitup.sh solves all of my backup scheduling problems. I only call backup programs directly if I want to make an on-demand backup. All of my automated backups go through backitup.sh. If you’re interested in the script, you can download it directly from GitHub. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
Earlier this year I switched from Duplicity to rsnapshot for my local backups. Duplicity uses a full + incremental backup schema: the first time a backup is executed, all files are copied to the backup medium. Successive backups copy only the deltas of changed objects. Over time this results in a chain of deltas that need to be replayed when restoring from a backup. If a single delta is somehow corrupted, the whole chain is broken. To minimize the chances of this happening, the common practice is to complete a new full backup every so often – I usually do a full backup every 3 or 4 weeks. Completing a full backup takes time when you’re backing up hundreds of gigabytes, even over USB 3.0. It also takes up disk space. I keep around two full backups when using Duplicity, which means I’m using a little over twice as much space on the backup medium as what I’m backing up.
The backup schema that rsnapshot uses is different. The first time it runs, it completes a full backup. Each time after that, it completes what could be considered a “full” backup, but unchanged files are not copied over. Instead, rsnapshot simply hard links to the previously copied file. If you modify very large files regularly, this model may be inefficient, but for me – and I think for most users – it’s great. Backups are speedy, disk space usage on the backup medium isn’t too much more than the data being backed up, and I have multiple full backups that I can restore from.
The great strength of Duplicity – and the great weakness of rsnapshot – is encryption. Duplicity uses GnuPG to encrypt backups, which makes it one of the few solutions appropriate for remote backups. In contrast, rsnapshot does no encryption. That makes it completely inappropriate for remote backups, but the shortcoming can be worked around when backing up locally.
My local backups are done to an external, USB hard drive. Encrypting the drive is simple with LUKS and dm-crypt. For example, to encrypt /dev/sdb:
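Something like the following, assuming stock cryptsetup defaults (cryptsetup will ask for confirmation and a passphrase):

$ cryptsetup luksFormat /dev/sdb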
At this point, the drive will be encrypted with a passphrase. To make it easier to mount programmatically, I also add a key file full of some random data generated from /dev/urandom.
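Something along these lines, with the key file location assumed:

$ dd if=/dev/urandom of=/root/backup.key bs=1k count=4
$ chmod 400 /root/backup.key
$ cryptsetup luksAddKey /dev/sdb /root/backup.key

luksAddKey will prompt for the existing passphrase before enrolling the key file.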
There are still a few considerations to address before backups to this encrypted drive can be completed automatically with no user interaction. Since the target is a USB drive and the source is a laptop, there’s a good chance that the drive won’t be plugged in when the scheduler kicks in the backup program. If it is plugged in, the drive needs to be decrypted before calling rsnapshot to do its thing. I wrote a wrapper script called cryptshot to address these issues.
Cryptshot is configured with the UUID of the target drive and the key file used to decrypt the drive. When it is executed, the first thing it does is look to see if the UUID exists. If it does, that means the drive is plugged in and accessible. The script then decrypts the drive with the specified key file and mounts it. Finally, rsnapshot is called to execute the backup as usual. Any argument passed to cryptshot is passed along to rsnapshot. What that means is that cryptshot becomes a drop-in replacement for encrypted, rsnapshot backups. Where I previously called rsnapshot daily, I now call cryptshot daily. Everything after that point just works, with no interaction needed from me.
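The swap is exactly as simple as it sounds; where a script or crontab previously read one command, it now reads the other:

# Before:
$ rsnapshot daily
# After:
$ cryptshot.sh daily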
If you’re interested in cryptshot, you can download it directly from GitHub. The script could easily be modified to execute a backup program other than rsnapshot. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
Tarsnap bills itself as “online backups for the truly paranoid”. I began using the service last January. It fast became my preferred way to backup to the cloud. It stores data on Amazon S3 and costs $0.30 per GB per month for storage and $0.30 per GB for bandwidth. Those prices are higher than just using Amazon S3 directly, but Tarsnap implements some impressive data de-duplication and compression that results in the service costing very little. For example, I currently have 67 different archives stored in Tarsnap from my laptop. They total 46GB in size. De-duplicated that comes out to 1.9GB. After compression, I only pay to store 1.4GB. Peanuts.
Of course, the primary requirement for any online backup service is encryption. Tarsnap delivers. And, most importantly, the Tarsnap client is open-source, so the claims of encryption can actually be verified by the user. The majority of for-profit, online backup services out there fail on this critical point.
So Tarsnap is amazing and you should use it. The client follows the Unix philosophy: “do one thing and do it well”. It’s basically like tar. It can create archives, read the contents of an archive, extract archives, and delete archives. For someone coming from an application like Duplicity, the disadvantage to the Tarsnap client is that it doesn’t include any way to automatically manage backups. You can’t tell Tarsnap how many copies of a backup you wish to keep, or how long backups should be allowed to age before deletion.
Thanks to the de-duplication and compression, there’s not a great economic incentive to not keep old backups around. It likely won’t cost you that much extra. But I like to keep things clean and minimal. If I haven’t used an online backup in 4 weeks, I generally consider it stale and have no further use for it.
To manage my Tarsnap backups, I wrote a Python script called Tarsnapper. The primary intent was to create a script that would automatically delete old archives. It does this by accepting a maximum age from the user. Whenever Tarsnapper runs, it gets a list of all Tarsnap archives. The timestamp is parsed out from the list and any archive that has a timestamp greater than the maximum allowed age is deleted. This is seamless, and means I never need to manually intervene to clean my archives.
Tarsnapper also provides some help for creating Tarsnap archives. It allows the user to define any number of named archives and the directories that those archives should contain. On my laptop I have four different directories that I backup with Tarsnap, three of them in one archive and the last in another archive. Tarsnapper knows about this, so whenever I want to backup to Tarsnap I just call a single command.
Tarsnapper also can automatically add a suffix to the end of each archive name. This makes it easier to know which archive is which when you are looking at a list. By default, the suffix is the current date and time.
Configuring Tarsnapper can be done either directly by changing the variables at the top of the script, or by creating a configuration file named tarsnapper.conf in your home directory.
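As a rough sketch (the key names and syntax here are assumptions for illustration, not the script's documented format), such a file defines the named archives and the maximum allowed age:

# tarsnapper.conf (hypothetical syntax)
maximum-age: 4w
archive documents: ~/documents ~/library ~/photos
archive work: ~/work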
There is also support for command-line arguments to specify the location of the configuration file to use, to delete old archives and exit without creating new archives, and to execute only a single named-archive rather than all of those that you may have defined.
$ tarsnapper.py --help
usage: tarsnapper.py [-h] [-c CONFIG] [-a ARCHIVE] [-r]
A Python script to manage Tarsnap archives.
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
Specify the configuration file to use.
-a ARCHIVE, --archive ARCHIVE
Specify a named archive to execute.
-r, --remove Remove old archives and exit.
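With a config file in place, typical invocations are short (the archive name here is hypothetical):

$ tarsnapper.py                # create all defined archives, then prune stale ones
$ tarsnapper.py -r             # only remove old archives
$ tarsnapper.py -a documents   # execute a single named archive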
It makes using a great service very simple. My backups can all be executed simply by a single call to Tarsnapper. Stale archives are deleted, saving me precious picodollars. I use this system on my laptop, as well as multiple servers. If you’re interested in it, Tarsnapper can be downloaded directly from GitHub. You can clone my entire backups repository if you’re also interested in the other scripts I’ve written to manage different aspects of backing up data.
You may not notice much, but this blog has been completely rewritten.
I started developing in Django last winter and quickly became smitten with both the Django framework and Python. Most of the coding I’ve done this year has been in Python. Naturally, I had thoughts of moving this website from Wordpress over to a Django-based blog.
For a while I did nothing about it. Then I had another project come up that required some basic blog functionality be added to a Django-based site. A blog is – or, at least, can be – a fairly simple affair, but before writing my own I decided to look around and see what else was out there. There’s a number of Django-based blogs floating around (Kevin Fricovsky has a list), but few of them jumped out at me. Most were not actively developed and depended on too many stale packages for my taste, or they just had a feature set that I didn’t like.
Out of all of them, two presented themselves as possibilities: Mingus (written by the previously mentioned Kevin) and Nathan Borror’s django-basic-apps. Mingus tries to be a full-featured blogging application and was much too complex for the simple project I was then working on. But the blog application in django-basic-apps (a fork of which provides Mingus with its core blog functionality) looked like it would fit the bill. As the name implies, it is meant to be a very basic blog. I dived into the code and discovered that, with a few modifications, it would do what I needed.
So I finished that project. But now having messed with blogging in Django I was more motivated to get started on rewriting my own site. I took another look at Mingus. Although it was too complex for the previous project, the features it provides are very similar to the features I wanted for this website. I looked at and thought about Mingus for a time, repeatedly turning it down and then coming back to it. The question centered around the project’s staleness more than anything else. Currently, Mingus is built for Django 1.1. That’s an old version. As of this writing, the current version is 1.3. Many improvements have been made in Django since 1.1 and I was not too keen to forgo them and run an old piece of code. Mingus is under active development, and will be updated for Django 1.3, but it’s a hobby-project, so the work is understandably slow.
In the end, I decided that the best thing to do was go my own route, but take some pointers and inspiration from Mingus. I would make my own fork of django-basic-apps, using that blog as the basis, and build a system on top of that. I created my fork last month and have been steadily plodding away on it in my free time. Over the course of the development I created a few simple applications to complement the core blog, and contributed code to another project.
It’s not quite done – there’s still a few things I want to improve – but it’s good enough to launch. (If you notice any kinks, let me know.) I’m quite pleased with it.
This is a notable occasion. I’ve been using Wordpress since before it was Wordpress, but it is time to move on. (Wordpress is a fork of an old piece of code called b2/cafelog. My database tables have been rocking the b2 prefix since 2002.)
As you’ve no doubt noticed, the look of the site hasn’t changed much. I tweaked a few things here and there, but for the most part just recreated the same template as what I had written for Wordpress. I am planning on a redesign eventually. For now, I wanted to spend my time developing the actual blog rather than screwing with CSS.
So, there you have it. Everything is open source. Download it, fork it, hack it (and don’t forget to send your code changes back my way). Let me know what you think. Build your own blog with it! (There’s even a script to import data from Wordpress.) I think it’s pretty sweet. The only thing lacking is documentation, and that’s my next goal.
Disqus
The biggest change for the user is probably the comments, which are now powered by Disqus. Consider it a trial. I’ve seen Disqus popping up on a number of sites the past year or so. At first it annoyed me, mostly because I use NoScript and did not want to enable JavaScript for another domain just to comment on a site. But after I got over that I found that Disqus wasn’t too bad. As a user I found it to be on par with the standard comment systems provided by Wordpress, Blogger, and the like. The extra features don’t appeal to me. But as an administrator, Disqus appeals to me more because it means that I no longer have to manage comments myself! And as a developer, I’m attracted to some of the things that Disqus has done (they’re a Python shop, and run on top of Django) and their open source contributions.
So I’m giving it a shot. Disqus will happily export comments, so if I (or you) decide that I don’t like it, it will be easy to move to another system.
Markdown
One final note: I like Markdown. That might be an understatement.
I first started using Markdown on GitHub, which I signed up for about the same time I started with Django and Python. After learning the syntax and playing with it for a few weeks, I discovered that I had a very hard time writing prose in anything else. In fact, the desire to write blog posts in Markdown was probably the biggest factor that influenced me to get off my butt and move away from Wordpress.
So, I incorporated Markdown into the blog. But rather than just making the blog Markdown-only, I took a hint from Mingus and included django-markup, which supports rendering in many lightweight markup languages.
Because I’m still new to Markdown and occasionally cannot remember the correct syntax, I wanted to include some version of WMD. WMD is a What You See Is What You Mean editor for Markdown, a sort of alternative to WYSIWYG editors like TinyMCE. (It is my belief that WYSIWYG editors are one of the worst things to happen to the Internet.) All WMD consists of is a JavaScript library. The original was written by a guy named John Fraser, who was abducted by aliens some time in 2008. Since his disappearance from the interwebs, WMD has been forked countless times. I looked around at a few, found a version that I was happy with (which happens to be a fork of a fork of a fork of a fork), and rolled it into a reusable app. While I was at it, I made some visual changes to the editing area for the post body. The result is an attractive post editing area that is simple to use and produces clean code. I think it is much better than what is offered by Wordpress.
Wishlist is a Django application for creating wishlists.
I used Amazon Wishlist for a number of years, but my paranoia finally caught up to me and I decided that I didn’t need to give Amazon that much more information about my interests.
I tried a few substitutes and found that my requirements for a wishlist were less than common. I don’t often use wishlists in the usual way of asking people for gifts on special occasions. Instead, I use wishlists privately to keep track of items that I wish to purchase myself. It helps me to determine savings goals, to track books that I want to read, etc. As such, I usually do not want items on my wishlist to be publicly viewable.
Out of the substitutes I tried, Wishlistr was undoubtedly the best, but there were some aspects of it that I didn’t like. After using it for a while, I decided to write my own app.