I noticed that my Borg directory on The Cloud was 239 GB. This struck me as problematic, as I could see in my local logs that Borg itself reported the deduplicated size of all archives to be 86 GB.
A web search revealed borg compact, which apparently I have been meant to run manually since 2019. Oops. After compacting, the directory dropped from 239 GB to 81 GB.
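Compacting is a single command run against the repository (the location below is a placeholder for my actual repo):

$ borg compact user@host.rsync.net:laptop.borg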
Optician archives a directory, optionally encrypts it, records the integrity of all the things, and burns it to disc. I created it last year after writing about the steps I took to create optical backups of financial archives. Since then I’ve used it to create my monthly password database backups, yearly e-book library backups, and this year’s annual financial backup.
My first solid state drive was a Samsung 850 Pro 1TB purchased in 2015. Originally I installed it in my T430s. The following year it migrated to my new X260, where it has served admirably ever since. It still seems healthy, as best as I can tell. Some time ago I found a script for measuring the health of Samsung SSDs. It reports:
------------------------------
 SSD Status:   /dev/sda
------------------------------
 On time:      17,277 hr
------------------------------
 Data written:
           MB: 47,420,539.560
           GB: 46,309.120
           TB: 45.223
------------------------------
 Mean write rate:
        MB/hr: 2,744.720
------------------------------
 Drive health: 98 %
------------------------------
The 1 terabyte of storage has begun to feel tight over the past couple of years. I’m not sure where it all goes, but I regularly only have about 100GB free, which is not much of a buffer. I’ve had my eye on a Samsung 860 Evo 2TB as a replacement. Last November my price monitoring tool notified me of a significant price drop for this new drive, so I snatched one up. This weekend I finally got around to installing it.
The health script reports that my new drive is, in fact, both new and healthy:
------------------------------
 SSD Status:   /dev/sda
------------------------------
 On time:      17 hr
------------------------------
 Data written:
           MB: 872,835.635
           GB: 852.378
           TB: .832
------------------------------
 Mean write rate:
        MB/hr: 51,343.272
------------------------------
 Drive health: 100 %
------------------------------
When migrating to a new drive, the simple solution is to just copy the complete contents of the old drive. I usually do not take this approach. Instead I prefer to imagine that the old drive is lost, and use the migration as an exercise to ensure that my excessive backup strategies and OS provisioning system are both fully operational. Successfully rebuilding my laptop like this, with a minimum expenditure of time and effort – and no data loss – makes me feel good about my backup and recovery tooling.
Every year I burn an optical archive of my financial documents, similar to how (and why) I create optical backups of photos. I schedule this financial archive for the spring, after the previous year’s taxes have been submitted and accepted. Taskwarrior solves the problem of remembering to complete the archive.
The first is my ledger repository. Ledger is the double-entry accounting system I began using in 2012 to record the movement of every penny that crosses one of my bank accounts (small cash transactions, less than about $20, are usually-but-not-always exempt from being recorded). In addition to the plain-text ledger files, this repository also holds PDF or JPG images of receipts.
The second repository holds my tax information. Each tax year gets a ctmg container which contains any documents used to complete my tax returns, the returns themselves, and any notifications of those returns being accepted.
The yearly optical archive that I create holds the entirety of these two repositories – not just the information from the previous year – so really each disc only needs to have a shelf life of 12 months. Keeping the older discs around just provides redundancy for prior years.
Creating the Archive
The process of creating the archive is very similar to the process I outlined six years ago for the photo archives.
The two repositories, combined, are about 2GB (most of that is the directory of receipts from the ledger repository). I burn these to a 25GB BD-R disc, so file size is not a concern. I’ll tar them, but skip any compression, which would just add extra complexity for no gain.
$ mkdir ~/tmp/archive
$ cd ~/library
$ tar cvf ~/tmp/archive/ledger.tar ledger
$ tar cvf ~/tmp/archive/tax.tar tax
The ledger archive will get signed and encrypted with my PGP key. The contents of the tax repository are already encrypted, so I’ll skip encryption and just sign the archive. I like using detached signatures for this.
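Roughly, with <key-id> standing in for my actual key (and with the plaintext ledger tarball removed afterwards so that only the encrypted copy makes it to disc):

$ cd ~/tmp/archive
$ gpg -e -r <key-id> -o ledger.tar.gpg ledger.tar
$ gpg --detach-sign ledger.tar.gpg
$ gpg --detach-sign tax.tar
$ rm ledger.tar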
Previously, when creating optical photo archives, I used DVDisaster to create the disc image with parity. DVDisaster no longer exists. The code can still be found, and the program still works, but nobody is developing it and it doesn’t even have an official web presence. This makes me uncomfortable for a tool that is part of my long-term archiving plans. As a result, I’ve moved back to using Parchive for parity. Parchive also does not have much in the way of active development around it, but it is still maintained, has been around for a long time, is still used by a wide community, and will probably continue to exist as long as people share files on less-than-perfectly-reliable media.
As previously mentioned, I’m not worried about the storage space for these files, so I tell par2create to create PAR2 files with 30% redundancy. I suppose I could go even higher, but 30% seems like a good number. By default this process will be allowed to use 16MB of memory, which is cute, but RAM is cheap and I usually have enough to spare so I’ll give it permission to use up to 8GB.
$ par2create -r30 -m8000 recovery.par2 *
Next I’ll use hashdeep to generate message digests for all the files in the archive.
$ hashdeep * > hashes
At this point all the file processing is completed. I’ll put a blank disc in my burner (a Pioneer BDR-XD05B) and burn the directory using growisofs.
$ growisofs -Z /dev/sr0 -V "Finances 2019" -r *
Verification
The final step is to verify the disc. I have a few options on this front. These are the same steps I’d take years down the road if I actually needed to recover data from the archive.
I can use the previous hashes to find any files that do not match, which is a quick way to identify bit rot.
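Assuming the disc is mounted somewhere like /mnt/optical (path hypothetical), the whole verification pass looks roughly like this: hashdeep with -x prints only files whose digests fail to match the known list, par2 confirms the parity data is intact, and gpg checks the detached signatures.

$ cd /mnt/optical
$ hashdeep -x -k hashes *.tar* *.par2
$ par2 verify recovery.par2
$ gpg --verify ledger.tar.gpg.sig ledger.tar.gpg
$ gpg --verify tax.tar.sig tax.tar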
I signed-up for Pinboard in 2014. It provides everything I need from a bookmarking service, which is mostly, you know, bookmarking. I pay for the archival account, meaning that Pinboard downloads a copy of everything I bookmark and provides me with full-text search. I find this useful and well worth the $25 yearly fee, but Pinboard’s archive is only part of the solution. I also need an offline copy of my bookmarks.
Pinboard provides an API that makes it easy to acquire a list of bookmarks. I have a small shell script which pulls down a JSON-formatted list of my bookmarks and adds the file to git-annex. This is controlled via a systemd service and timer, which wraps the script in backitup to ensure daily dumps. The systemd timer itself is controlled by nmtrust, so that it only runs when I am connected to a trusted network.
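The heart of the script is just an API call and a commit. A minimal sketch, assuming the API token lives in an environment variable and the dumps land in an annex at ~/annex/bookmarks (both names hypothetical):

$ curl -s "https://api.pinboard.in/v1/posts/all?format=json&auth_token=$PINBOARD_API_TOKEN" > ~/annex/bookmarks/pinboard-$(date +%F).json
$ cd ~/annex/bookmarks
$ git annex add pinboard-*.json
$ git commit -m "Dump Pinboard bookmarks"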
This provides data portability, ensuring that I could import my tagged URLs to another bookmarking service if I ever found something better than Pinboard (unlikely, competing with Pinboard is futile). But I also want a locally archived copy of the pages themselves, which Pinboard does not offer through the API. I care very much about being able to work offline. The usefulness of a computer is directly proportional to the amount of data that is accessible without a network connection.
To address this I use bookmark-archiver, a Python script which reads URLs from a variety of input files, including Pinboard’s JSON dumps. It archives each URL via wget, generates a screenshot and PDF via headless Chromium, and submits the URL to the Internet Archive (with WARC hopefully on the way). It will then generate an HTML index page, allowing the archives to be easily browsed. When I want to browse the archive, I simply change into the directory and use python -m http.server to serve the bookmarks at localhost:8000. Once downloaded locally, the archives are of course backed up, via the usual suspects like borg and cryptshot.
The archiver is configured via environment variables. I configure my preferences and point the program at the Pinboard JSON dump in my annex via a shell script (creatively also named bookmark-archiver). This wrapper script is called by the previous script which dumps the JSON from Pinboard.
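The wrapper amounts to a handful of exports followed by the archiver itself. A rough sketch, with the variable names and paths here being illustrative stand-ins rather than the archiver’s actual settings:

#!/bin/sh
# Illustrative only: consult the bookmark-archiver documentation for the real settings.
export RESOLUTION="1440,900"
export SUBMIT_ARCHIVE_DOT_ORG="True"
exec ~/src/bookmark-archiver/archive.py ~/annex/bookmarks/pinboard-$(date +%F).json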
The result of all of this is that every day I get a fresh dump of all my bookmarks, each URL is archived locally in multiple formats, and the archive enters into my normal backup queue. Link rot may defeat the Supreme Court, but between this and my automated repository tracking I have a pretty good system for backing up useful pieces of other people’s data.
The Kindle Paperwhite has been my primary medium for consuming books since the beginning of 2014. E Ink is a great display technology that I wish was more widespread, but beyond the fact that the Kindle (and I assume other e-readers) makes for a pleasant reading experience, the real value in electronic books is storage.
At its peak my physical collection was somewhere north of 200 books. As I mentioned years ago I took inspiration from Gary Snyder’s character in The Dharma Bums and stored my books in milk crates, which stack like a bookcase for normal use and keep the collection pre-boxed for moving. But that many books still take up space, and are still annoying to move. And in some regards they are fragile – redundant data storage is expensive in meatspace.
My digital library currently sits at 572 books and 13 gigabytes (the size skyrocketed after I began to archive a few comics). I could not justify that many physical books in my life. I still have a collection of dead trees, but I’m down to 3 milk crates. I store my digital library in git-annex, allowing me to redundantly replicate my collection across the globe, as well as keep copies in cold storage. I also burn yearly optical backups of the library to M-DISC. The library is managed with Calibre.
When I first bought the Kindle it required internet access to associate with my Amazon account. Ever since then, it has been in airplane mode. I spun up a temporary wireless network for the setup that I then deleted after the process was complete, ensuring that even if Amazon’s airplane mode was untrustworthy, the device would not be able to phone home. The advantages of giving the Kindle internet access seem minute, and are far outweighed by the disadvantage of having to trust Amazon.
If I purchase a book from Amazon, I select the “Download & Transfer via USB” option. This results in a crippled AZW file. I am under the radical delusion that I should own what I purchase, so I import that file into Calibre using the DeDRM_tools plugin. This strips any DRM, making the book ready to be consumed and archived. Books are transferred between my computer and the Kindle via USB, which Calibre makes simple.
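With the plugin installed, the import can also be scripted; something like this should work (the file path is hypothetical, and I’m assuming the DeDRM plugin hooks command-line imports the same way it hooks the GUI):

$ calibredb add --library-path ~/library/books ~/downloads/book.azw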
When I acquire books through other channels, my preferred format is always EPUB: an open format that is simply a zip archive of HTML files. Calibre’s built-in conversion tools are quite good, giving me confidence that any e-book format I import into the library will be readable at any point in the future, but my preference is to store data in formats that are open, accessible, and understandable. The closer one gets to well-formatted plain text, the closer one gets to god.
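Those conversion tools are also available from the shell via ebook-convert; for example, turning a MOBI file into an EPUB (filenames hypothetical):

$ ebook-convert book.mobi book.epub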
While the Kindle excels at the linear reading of novels, I’ve also come to appreciate digital copies of reference books and technical manuals. Often the first reading of these types of books involves lots of flipping back and forth, which is easier in the dead tree variant, but after that first reading the searchability of the digital copy is far more useful for reference. The physical size of these types of books also makes them even more difficult to carry and store than other books, all but guaranteeing you won’t have access to them when you need to reference them. Digital books solve that problem.
I’m confident in my ability to securely store digital data. Whenever I import a book into my library, I know that I now have permanent access to that knowledge for the rest of my life, regardless of environmental disaster, the whims of publishing houses, or the size of my living quarters.
I’d neglected backing up LUKS headers until Gwern’s data loss postmortem last year. After reading his post I dumped the headers of the drives I had accessible, but I never got around to performing the task on my less frequently accessed drives. Last month I had trouble mounting one of those drives. It turned out I was simply using the wrong passphrase, but the experience prompted me to make sure I had completed the header backup procedure for all drives.
I dump the header to memory using the procedure from the Arch wiki. This is probably unnecessary, but only takes a few extra steps. The header is stored in my password store, which is obsessively backed up.
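The procedure boils down to mounting a ramfs so the header never touches disk, dumping the header with cryptsetup, and then stuffing the result into pass. Roughly, with /dev/sdX standing in for the actual drive and the pass entry name being arbitrary:

$ sudo mkdir /mnt/ramfs
$ sudo mount -t ramfs ramfs /mnt/ramfs
$ sudo cryptsetup luksHeaderBackup /dev/sdX --header-backup-file /mnt/ramfs/sdX-luks-header.img
$ sudo base64 /mnt/ramfs/sdX-luks-header.img | pass insert -m luks/sdX

Restoring would just be the reverse: pass show luks/sdX | base64 -d to recover the image, followed by cryptsetup luksHeaderRestore.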
For years the core of my backup strategy has been rsnapshot via cryptshot to various external drives for local backups, and Tarsnap for remote backups.
Tarsnap, however, can be slow. It tends to take somewhere between 15 and 20 minutes to create my dozen or so archives, even if little has changed since the last run. My impression is that this is simply due to the number of archives I have stored and the number of files I ask it to archive. Once it has decided what to do, the time spent transferring data is negligible. I run Tarsnap hourly. Twenty minutes out of every hour seems like a lot of time spent Tarsnapping.
I’ve eyed Borg for a while (and before that, Attic), but avoided using it due to the rapid development of its earlier days. While activity is nice, too many changes too close together do not create a reassuring image of a backup project. Borg seems to have stabilized now and has a large enough user base that I feel comfortable with it. About a month ago, I began using it to back up my laptop to rsync.net.
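Getting started is a one-time borg init against the remote repository; something like this, with the user, host, and repository name all being placeholders for my actual rsync.net account:

$ borg init --encryption repokey user@host.rsync.net:laptop.borg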
Initially I played with borgmatic to perform and maintain the backups. Unfortunately it seems to have issues with signal handling, which caused me to end up with annoying lock files left over from interrupted backups. Borg itself has good documentation and is easy to use, and I think it is useful to build familiarity with the program itself instead of only interacting with it through something else. So I did away with borgmatic and wrote a small bash script to handle my use case.
Creating the backups is simple enough. Borg disables compression by default, but after a little experimentation I found that LZ4 seemed to be a decent compromise between compression and performance.
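The create step in the script boils down to a single invocation. A sketch, with the repository location and the set of paths being placeholders:

$ borg create --compression lz4 \
    user@host.rsync.net:laptop.borg::'{hostname}-{now}' \
    ~/documents ~/library ~/pictures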
Pruning backups is equally easy. I knew I wanted to match roughly what I had with Tarsnap: hourly backups for a day or so, daily backups for a week or so, then a month or two of weekly backups, and finally a year or so of monthly backups.
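borg prune expresses that schedule directly. The retention numbers below are illustrative picks that roughly match it:

$ borg prune --keep-hourly 24 --keep-daily 7 --keep-weekly 8 --keep-monthly 12 \
    user@host.rsync.net:laptop.borg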
My only hesitation was in how to maintain the health of the backups. Borg provides the convenient borg check command, which is able to verify the consistency of both a repository and the archives themselves. Unsurprisingly, this is a slow process. I didn’t want to run it with my hourly backups. Daily, or perhaps even weekly, seemed more reasonable, but I did want to make sure that both checks were completed successfully with some frequency. Luckily this is just the problem that I wrote backitup to solve.
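Conveniently, the two checks can be run separately, which makes it easy to hand each one to backitup with its own period (repository location again a placeholder):

$ borg check --repository-only user@host.rsync.net:laptop.borg
$ borg check --archives-only user@host.rsync.net:laptop.borg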
Because the consistency checks take a while and consume some resources, I thought it would also be a good idea to avoid performing them when I’m running on battery. Giving backitup the ability to detect if the machine is on battery or AC power was a simple hack. The script now features the -a switch to specify that the program should only be executed when on AC power.
I’ve been running this for about a month now and have been pleased with the results. It averages about 30 seconds to create the backups every hour, and another 30 seconds or so to prune the old ones. As with Tarsnap, deduplication is great.
The most recent repository consistency check took about 30 minutes, but only runs every 172800 seconds, or once every other day. The most recent archive consistency check took about 40 minutes, but only runs every 259200 seconds, or once every three days. I’m not sure that those schedules are the best option for the consistency checks. I may tweak their frequencies, but because I know they will only be executed when I am on a trusted network and AC power, I’m less concerned about the length of time.
With Borg running hourly, I’ve reduced Tarsnap to run only once per day. Time will tell if Borg will slow as the number of stored archives increases, but for now running Borg hourly and Tarsnap daily seems like a great setup. Tarsnap and Borg both target the same files (with a few exceptions). Tarsnap runs in the AWS us-east-1 region. I’ve always kept my rsync.net account in their Zurich datacenter. This provides the kind of redundancy that lets me rest easy.
Contrary to what you might expect given the number of blog posts on the subject, I actually spend close to no time worrying about data loss in my day to day life, thanks to stuff like this. An ounce of prevention, and all that. (Maybe a few kilograms of prevention in my case.)