A few months ago I read Marie Kondo’s The Life-Changing Magic of Tidying Up. It’s not the sort of book that usually finds its way into my library, but it had been recommended periodically by a handful of different people over a year or two. I found the book disappointing. Many of the pages struck me as fluff – clutter, you might say, which is ironic given its subject. Edited down to a pamphlet of a dozen pages, or perhaps a short series of blog posts, it could be enjoyable, but there isn’t enough content for a book.
The one thing I did take away from the book is folding. Kondo recommends folding things such that they stand on edge in the drawer rather than being stacked on top of each other. This way all the contents of the drawer are visible at once, instead of only the things on the top of a stack.
The goal should be to organize the contents so that you can see where every item is at a glance, just as you can see the spines of the books on your bookshelves. The key is to store things standing up rather than laid flat… The number of folds should be adjusted so that the folded clothing when standing on edge fits the height of the drawer. This is the basic principle that will ultimately allow your clothes to be stacked on edge, side by side, so that when you pull open your drawer you can see the edge of every item inside.
This made sense to me. Unfortunately, the combination of having a walk-in closet in my apartment and not owning much in the way of furniture means I don’t actually fold many of my clothes. Most things end up being hung (a Kondo no-no). I fold some less seasonally appropriate clothing for storage in Transport Cubes (another Kondo no-no), and I fold larger things like sheets and towels for storage in underbed boxes, but neither of those really lends itself to this method of folding.
One of the few pieces of furniture I do find useful enough to own is a filing cabinet. I keep socks in the large bottom drawer and underwear in the middle drawer. The top drawer holds an assortment of bandannas, hand wraps, and some seasonally appropriate head and neck wear. After reading the book, I dumped out all the socks and underwear and folded them to Kondo’s specifications.
It is definitely an improvement. Previously I rolled socks together, which is not very efficient in terms of volume (and disrespectful to the sock, according to Kondo). The drawer was overflowing. A pair or two would frequently fall behind the back of the drawer, where I would forget about them until I happened to notice that the drawer was no longer closing all the way.
Folded this way, everything fits. Immediately upon opening the drawer I can take stock. As with all clothing categories, I have different types of socks and different types of underwear, each more or less appropriate for different applications. A quick glance in the drawer lets me know what I have available, and when it may be time to address the laundry pile.
I do most of my computing in the terminal. Minimizing switches to graphical applications helps to improve my efficiency. While the web browser does tend to be superior for consuming and interacting with detailed weather forecasts, I like using wttr.in for answering simple questions like “Do I need a jacket?” or “Is it going to rain tomorrow?”
Of course, weather forecasts are location dependent. I don’t want to have to think about where I am every time I want to use wttr. To feed it my current location, I use jq to parse the zip code from the output of ip-api.com.
I keep this in a shell script so that I have a simple command that gives me current weather for wherever I happen to be – as long as I’m not connected to a VPN.
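A minimal sketch of such a script, assuming ip-api.com’s JSON endpoint exposes the zip code in a zip field and that wttr.in accepts a zip code directly in the URL:
#!/bin/sh
# Sketch: look up the current zip code and hand it to wttr.in.
location=$(curl -s http://ip-api.com/json | jq -r '.zip')
curl -s "http://wttr.in/${location}"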
I have confidence in my backup strategies for my own data, but until recently I had not considered backing up other people’s data.
Recently, the author of a repository that I tracked on GitHub deleted his account and disappeared from the information super highway. I had a local copy of the repository, but I had not pulled it for a month, so a number of recent changes were lost to me. This inspired me to set up the system I now use to automatically update local copies of any code repositories that are useful or interesting to me.
I clone the repositories into ~/library/src and use myrepos to interact with them. I use myrepos for work and personal repositories as well, so to keep this stuff segregated I set up a separate config file and a shell alias to refer to it.
alias lmr='mr --config $HOME/library/src/myrepos.conf --directory=$HOME/library/src'
Now when I want to add a new repository, I clone it normally and register it with myrepos.
$ cd ~/library/src
$ git clone https://github.com/warner/magic-wormhole
$ cd magic-wormhole && lmr register
The ~/library/src/myrepos.conf file has a default section which states that no repository should be updated more than once every 24 hours.
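In myrepos this sort of throttling is done with a skip test; mr provides a hours_since helper for exactly this purpose. The expression below is an illustration of the idea rather than a copy of my config:
[DEFAULT]
# Skip the update if this repository was updated within the last 24 hours.
skip = [ "$1" = update ] && ! hours_since "$1" 24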
Now I can ask myrepos to update all of my tracked repositories. If it sees that it has already updated a repository within 24 hours, myrepos will skip the repository.
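With the alias defined above, that is a single command:
$ lmr update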
If I’m curious to see what has been recently active, I can ls -ltr ~/library/src. I find this more useful than GitHub stars or similar bookmarking.
I currently track 120 repositories. This is only 3.3 GB, which means I can incorporate it into my normal backup strategies without being concerned about the extra space.
The internet can be fickle, but it will be difficult for me to lose a repository again.
I use a Yubikey Neo for day-to-day PGP operations. For managing the secret key itself, such as during renewal or key signing, I use a USB Armory with host adapter. In host mode, the Armory provides a trusted, open source platform that is compact and easily secured, making it ideal for key management.
Setting up the Armory is fairly straightforward. The Arch Linux ARM project provides prebuilt images. From my laptop, I follow their instructions to prepare the micro SD card, where /dev/sdX is the SD card.
$ dd if=/dev/zero of=/dev/sdX bs=1M count=8
$ fdisk /dev/sdX
# `o` to clear any partitions
# `n`, `p`, `1`, `2048`, `enter` to create a new primary partition in the first position with a first sector of 2048 and the default last sector
# `w` to write
$ mkfs.ext4 /dev/sdX1
$ mkdir /mnt/sdcard
$ mount /dev/sdX1 /mnt/sdcard
And then extract the image, doing whatever verification is necessary after downloading.
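Something along these lines, where the tarball name follows the Arch Linux ARM convention for the USB Armory image (treat the exact filename as an assumption and verify the download per their instructions):
$ wget http://os.archlinuxarm.org/os/ArchLinuxARM-usbarmory-latest.tar.gz
$ bsdtar -xpf ArchLinuxARM-usbarmory-latest.tar.gz -C /mnt/sdcard
$ sync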
The SD card can then be inserted into the Armory. At no time during this process – or at any point in the future – is the Armory connected to a network. It is entirely air-gapped. As long as the image was not compromised and the Armory is stored securely, the platform should remain trusted.
Note that because the Armory is never on a network, and it has no internal battery, it does not keep time. Upon first boot, NTP should be disabled and the time and date set.
$ timedatectl set-ntp false
$ timedatectl set-time "yyyy-mm-dd hh:mm:ss"  # UTC
On subsequent boots, the time and date should be set with timedatectl set-time before performing any cryptographic operations.
The novel tells the story of dynasties struggling for power on the moon, which has been settled and turned into a mining colony. It has been described as “Game of Thrones in space”. While I have not read Game of Thrones, that seems like a roundabout way of saying that it is like another series that deals with the struggles of feudal families mining resources in space. Luna is much like Dune – right down to a female religious order interested in long-term breeding programs and social experiments (funded by The Long Now, of course). Fans of classic science fiction will likely feel at home in its pages. I look forward to the sequel.
This past spring I mentioned my cold storage setup: a number of encrypted 2.5” drives in external enclosures, stored inside a Pelican 1200 case, secured with Abloy Protec2 321 locks. Offline, secure, and infrequently accessed storage is an important component of any strategy for resilient data. The ease with which this can be managed with git-annex only increases my infatuation with the software.
I’ve been happy with the Seagate ST2000LM003 drives for this application. Unfortunately the enclosures I first purchased did not work out so well. I had two die within a few weeks. They’ve been replaced with the SIG JU-SA0Q12-S1. These claim to be compatible with drives up to 8TB (someday I’ll be able to buy 8TB 2.5” drives) and support USB 3.1. They’re also a bit thinner than the previous enclosures, so I can easily fit five in my box. The Seagate drives offer about 1.7 terabytes of usable space, giving this setup a total capacity of 8.5 terabytes.
Setting up git-annex to support this type of cold storage is fairly straightforward, but does necessitate some familiarity with how the program works. Personally, I prefer to do all my setup manually. I’m happy to let the assistant watch my repositories and manage them after the setup, and I’ll occasionally fire up the web app to see what the assistant daemon is doing, but I like the control and understanding provided by a manual setup. The power and flexibility of git-annex is deceptive. Using it solely through the simplified interface of the web app greatly limits what can be accomplished with it.
Encryption
Before even getting into git-annex, the drive should be encrypted with LUKS/dm-crypt. The need for this could be avoided by using something like gcrypt, but LUKS/dm-crypt is an ingrained habit and part of my workflow for all external drives. Assuming the drive is /dev/sdc, pass cryptsetup some sane defaults:
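Something along these lines, labeling the filesystem themisto to match the mount point used below – the specific cipher, key size, and hash here are common choices rather than a record of my exact flags:
$ cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 luksFormat /dev/sdc
$ cryptsetup luksOpen /dev/sdc themisto_crypt
$ mkfs.ext4 -L themisto /dev/mapper/themisto_crypt
$ cryptsetup luksClose themisto_crypt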
At this point the drive is ready. I close it and then mount it with udiskie to make sure everything is working. How the drive is mounted doesn’t matter, but I like udiskie because it can integrate with my password manager to get the drive passphrase.
With the encryption handled, the drive should now be mounted at /media/themisto. For the first few steps, we’ll basically follow the git-annex walkthrough. Let’s assume that we are setting up this drive to be a repository of the annex ~/video. The first step is to go to the drive, clone the repository, and initialize the annex. When initializing the annex I prepend the name of the remote with satellite :. My cold storage drives are all named after satellites, and doing this allows me to easily identify them when looking at a list of remotes.
$ cd /media/themisto
$ git clone ~/video
$ cd video
$ git annex init "satellite : themisto"
Disk Reserve
Whenever dealing with a repository that is bigger (or may become bigger) than the drive it is being stored on, it is important to set a disk reserve. This tells git-annex to always keep some free space around. I generally like to set this to 1 GB, which is way larger than it needs to be.
$ git config annex.diskreserve "1 gb"
Adding Remotes
I’ll then tell this new repository where the original repository is located. In this case I’ll refer to the original using the name of my computer, nous.
$ git remote add nous ~/video
If other remotes already exist, now is a good time to add them. These could be special remotes or normal ones. For this example, let’s say that we have already completed this whole process for another cold storage drive called sinope, and that we have an s3 remote creatively named s3.
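For this hypothetical setup that would look something like the following, where enableremote assumes the s3 special remote was already configured in the original repository:
$ git remote add sinope /media/sinope/video
$ git annex enableremote s3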
Trust is a critical component of how git-annex works. Any new annex will default to being semi-trusted, which means that when running operations within the annex on the main computer – say, dropping a file – git-annex will want to confirm that themisto has the files that it is supposed to have. In the case of themisto being a USB drive that is rarely connected, this is not very useful. I tell git-annex to trust my cold storage drives, which means that if git-annex has a record of a certain file being on the drive, it will be satisfied with that. This increases the risk of data loss, but for this application I feel it is appropriate.
$ git annex trust .
Preferred Content
The final step that needs to be taken on the new repository is to tell it what files it should want. This is done using preferred content. The standard groups that git-annex ships with cover most of the bases. Of interest for this application is the archive group, which wants all content except that which has already found its way to another archive. This is the behaviour I want, but I will duplicate it into a custom group called satellite. This keeps my cold storage drives as standalone things that do not influence any other remotes where I may want to use the default archive.
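Run in the new repository, that looks something like this; the preferred content expression is a simplified sketch of the standard archive behaviour rather than a verbatim copy of it:
$ git annex groupwanted satellite "not (copies=satellite:1)"
$ git annex group . satellite
$ git annex wanted . groupwanted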
For other repositories, I may want to store the data on multiple cold storage drives. In that case I would create a redundantsatellite group that wants all content which is not already present in two other members of the group.
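That group would be defined the same way, with the copy count raised (again, a sketch):
$ git annex groupwanted redundantsatellite "not (copies=redundantsatellite:2)"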
The book begins with an overview of espionage immediately before, during, and shortly after the Cold War, before moving on to the role played by Western intelligence agencies in the current millennium. Grey contrasts the earlier focus on human intelligence with the growing dependency on signals intelligence and assassination programs, and makes a compelling case for the need to return to a balanced approach with a focus on traditional spy running.
The dichotomy is reminiscent of that between the longer-term, unconventional warfare practiced by US Special Forces and the direct-action focus of other Special Operations Forces, as discussed by Tony Schwalm.