
An Inbox for Taskwarrior

My experience with all task management systems – whether software or otherwise – is that the more you put into them, the more useful they become. This means not only adding as many tasks as possible, however small they may be, but also enriching those tasks with as much metadata as possible. When I began using taskwarrior, one of the problems I encountered was how to address this effectively.

Throughout the day I’ll be working on something when I receive an unrelated request. I want to log those requests so that I remember them and eventually complete them, but I don’t want to break from whatever I’m currently doing and take the time to mark these tasks up with the full metadata they eventually need. Context switching is expensive.

To address this, I’ve introduced the idea of a task inbox. I have an alias to add a task to taskwarrior with a due date of tomorrow and a tag of inbox.

alias ti='task add due:tomorrow tag:inbox'

This allows me to very quickly add a task without needing to think about it.

$ ti do something important

Each morning I run task ls. The tasks which were previously added to my inbox are at the top, overdue with a high priority. At this point I’ll modify each of them, removing the inbox tag, setting a real due date, and assigning them to a project. If they are more complex, I may also add annotations or notes, or build the task out with dependencies. If the task is simple – something that may only take a minute or two – I’ll just complete it immediately and mark it as done without bothering to remove the inbox tag.
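
As a sketch of that triage, processing a single inbox task might look something like this (the task ID, project, due date, and annotation are all hypothetical):

$ task 42 modify -inbox project:finance due:friday
$ task 42 annotate "waiting on numbers from accounting"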

This alias lowers the barrier of entry enough that I am likely to log even the smallest of tasks, while the inbox concept provides a framework for me to later make the tasks rich in a way that allows me to take advantage of the power that taskwarrior provides.

Borg Assimilation

For years the core of my backup strategy has been rsnapshot via cryptshot to various external drives for local backups, and Tarsnap for remote backups.

Tarsnap, however, can be slow. It tends to take somewhere between 15 and 20 minutes to create my dozen or so archives, even if little has changed since the last run. My impression is that this is simply due to the number of archives I have stored and the number of files I ask it to archive. Once it has decided what to do, the time spent transferring data is negligible. I run Tarsnap hourly. Twenty minutes out of every hour seems like a lot of time spent Tarsnapping.

I’ve eyed Borg for a while (and before that, Attic), but avoided using it due to the rapid development of its earlier days. While activity is nice, too many changes too close together do not create a reassuring image of a backup project. Borg seems to have stabilized now and has a large enough user base that I feel comfortable with it. About a month ago, I began using it to back up my laptop to rsync.net.

Initially I played with borgmatic to perform and maintain the backups. Unfortunately it seems to have issues with signal handling, which caused me to end up with annoying lock files left over from interrupted backups. Borg itself has good documentation and is easy to use, and I think it is useful to build familiarity with the program itself instead of only interacting with it through something else. So I did away with borgmatic and wrote a small bash script to handle my use case.
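
Before the wrapper below can create anything, the repository itself has to exist. Initializing it is a one-time step; a rough sketch, reusing the borg-rsync SSH host alias and the borg1 remote path from the wrapper, with repokey as one reasonable choice of encryption mode:

$ export BORG_REMOTE_PATH='borg1'
$ borg init --encryption=repokey borg-rsync:borg/nous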

Creating the backups is simple enough. Borg disables compression by default, but after a little experimentation I found that LZ4 seemed to be a decent compromise between compression and performance.

Pruning backups is equally easy. I knew I wanted to match roughly what I had with Tarsnap: hourly backups for a day or so, daily backups for a week or so, then a month or two of weekly backups, and finally a year or so of monthly backups.

My only hesitation was in how to maintain the health of the backups. Borg provides the convenient borg check command, which is able to verify the consistency of both a repository and the archives themselves. Unsurprisingly, this is a slow process. I didn’t want to run it with my hourly backups. Daily, or perhaps even weekly, seemed more reasonable, but I did want to make sure that both checks were completed successfully with some frequency. Luckily this is just the problem that I wrote backitup to solve.

Because the consistency checks take a while and consume some resources, I thought it would also be a good idea to avoid performing them when I’m running on battery. Giving backitup the ability to detect if the machine is on battery or AC power was a simple hack. The script now features the -a switch to specify that the program should only be executed when on AC power.

My completed Borg wrapper is thus:

#!/bin/sh
export BORG_PASSPHRASE='supers3cr3t'
export BORG_REPO='borg-rsync:borg/nous'
export BORG_REMOTE_PATH='borg1'

# Create backups
echo "Creating backups..."
borg create --verbose --stats --compression=lz4             \
    --exclude ~/projects/foo/bar/baz                        \
    --exclude ~/projects/xyz/bigfatbinaries                 \
    ::'{hostname}-{user}-{utcnow:%Y-%m-%dT%H:%M:%S}'        \
    ~/documents                                             \
    ~/projects                                              \
    ~/mail                                                  \
    # ...etc

# Prune backups
echo "Pruning backups..."
borg prune --verbose --list --prefix '{hostname}-{user}-'    \
    --keep-within=2d                                         \
    --keep-daily=14                                          \
    --keep-weekly=8                                          \
    --keep-monthly=12

# Check backups
# backitup runs each check (-b) only when on AC power (-a) and only if more
# than the given period (-p, in seconds) has passed since the run recorded
# in its lastrun file (-l).
echo "Checking repository..."
backitup -a                                             \
    -p 172800                                           \
    -l ~/.borg_check-repo.lastrun                       \
    -b "borg check --verbose --repository-only"
echo "Checking archives..."
backitup -a                                             \
    -p 259200                                           \
    -l ~/.borg_check-arch.lastrun                       \
    -b "borg check --verbose --archives-only --last 24"

This is executed by a systemd service.

[Unit]
Description=Borg Backup

[Service]
Type=oneshot
ExecStart=/home/pigmonkey/bin/borgwrapper.sh

[Install]
WantedBy=multi-user.target

The service is called hourly by a systemd timer.

[Unit]
Description=Borg Backup Timer

[Timer]
Unit=borg.service
OnCalendar=hourly
Persistent=True

[Install]
WantedBy=timers.target

I don’t enable the timer directly, but add it to /usr/local/etc/trusted_units so that nmtrust activates it when I’m connected to trusted networks.

$ echo "borg.timer,user:pigmonkey" >> /usr/local/etc/trusted_units

I’ve been running this for about a month now and have been pleased with the results. It averages about 30 seconds to create the backups every hour, and another 30 seconds or so to prune the old ones. As with Tarsnap, deduplication is great.

------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               19.87 GB             18.41 GB             10.21 MB
All archives:              836.02 GB            773.35 GB             19.32 GB
                       Unique chunks         Total chunks
Chunk index:                  371527             14704634
------------------------------------------------------------------------------

The most recent repository consistency check took about 30 minutes, but only runs every 172800 seconds, or once every other day. The most recent archive consistency check took about 40 minutes, but only runs every 259200 seconds, or once every three days. I’m not sure that those schedules are the best option for the consistency checks. I may tweak their frequencies, but because I know they will only be executed when I am on a trusted network and AC power, I’m less concerned about the length of time.

With Borg running hourly, I’ve reduced Tarsnap to run only once per day. Time will tell if Borg will slow as the number of stored archives increases, but for now running Borg hourly and Tarsnap daily seems like a great setup. Tarsnap and Borg both target the same files (with a few exceptions). Tarsnap runs in the AWS us-east-1 region. I’ve always kept my rsync.net account in their Zurich datacenter. This provides the kind of redundancy that lets me rest easy.

Contrary to what you might expect given the number of blog posts on the subject, I actually spend close to no time worrying about data loss in my day to day life, thanks to stuff like this. An ounce of prevention, and all that. (Maybe a few kilograms of prevention in my case.)

Terminal Weather

I do most of my computing in the terminal. Minimizing switches to graphical applications helps to improve my efficiency. While the web browser does tend to be superior for consuming and interacting with detailed weather forecasts, I like using wttr.in for answering simple questions like “Do I need a jacket?” or “Is it going to rain tomorrow?”

Of course, weather forecasts are location dependent. I don’t want to have to think about where I am every time I want to use wttr. To feed it my current location, I use jq to parse the zip code from the output of ip-api.com.

curl wttr.in/"${1:-$(curl -s http://ip-api.com/json | jq -r 'if (.zip | length) != 0 then .zip else .city end')}"

I keep this in a shell script so that I have a simple command that gives me current weather for wherever I happen to be – as long as I’m not connected to a VPN.
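
Wrapped in a script named wttr somewhere on my PATH (~/bin/wttr, for example), the whole thing looks roughly like this:

#!/bin/sh
# Show the weather for the current location, or for a location passed as the
# first argument. The current location is guessed from the public IP via
# ip-api.com, preferring the zip code and falling back to the city name.
curl wttr.in/"${1:-$(curl -s http://ip-api.com/json | jq -r 'if (.zip | length) != 0 then .zip else .city end')}"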

$ wttr
Weather report: 94107

     \   /     Sunny
      .-.      62-64 °F
   ― (   ) ―   → 19 mph
      `-’      12 mi
     /   \     0.0 in
...

Automated Repository Tracking

I have confidence in my backup strategies for my own data, but until recently I had not considered backing up other people’s data.

Recently, the author of a repository that I tracked on GitHub deleted his account and disappeared from the information super highway. I had a local copy of the repository, but I had not pulled it for a month. A number of recent changes were lost to me. This inspired me to set up the system I now use to automatically update local copies of any code repositories that are useful or interesting to me.

I clone the repositories into ~/library/src and use myrepos to interact with them. I use myrepos for work and personal repositories as well, so to keep this stuff segregated I set up a separate config file and a shell alias to refer to it.

alias lmr='mr --config $HOME/library/src/myrepos.conf --directory=$HOME/library/src'

Now when I want to add a new repository, I clone it normally and register it with myrepos.

$ cd ~/library/src
$ git clone https://github.com/warner/magic-wormhole
$ cd magic-wormhole && lmr register
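
Registering a repository appends a stanza for it to the config file passed with --config. For the clone above, the generated entry looks roughly like this:

[magic-wormhole]
checkout = git clone 'https://github.com/warner/magic-wormhole' 'magic-wormhole'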

The ~/library/src/myrepos.conf file has a default section which states that no repository should be updated more than once every 24 hours.

[DEFAULT]
skip = [ "$1" = update ] && ! hours_since "$1" 24

Now I can ask myrepos to update all of my tracked repositories. If it sees that it has already updated a repository within 24 hours, myrepos will skip the repository.

$ lmr update

To automate this I create a systemd service.

[Unit]
Description=Update library repositories

[Service]
Type=oneshot
ExecStart=/usr/bin/mr --config %h/library/src/myrepos.conf -j5 update

[Install]
WantedBy=multi-user.target

And a systemd timer to run the service every hour.

[Unit]
Description=Update library repositories timer

[Timer]
Unit=library-repos.service
OnCalendar=hourly
Persistent=True

[Install]
WantedBy=timers.target

I don’t enable this timer directly, but instead add it to my trusted_units file so that nmtrust will enable it only when I am on a trusted network.

$ echo "library-repos.timer,user:pigmonkey" >> /usr/local/etc/trusted_units

If I’m curious to see what has been recently active, I can ls -ltr ~/library/src. I find this more useful than GitHub stars or similar bookmarking.

I currently track 120 repositories. This is only 3.3 GB, which means I can incorporate it into my normal backup strategies without being concerned about the extra space.

The internet can be fickle, but it will be difficult for me to lose a repository again.

The USB Armory for PGP Key Management

I use a Yubikey Neo for day-to-day PGP operations. For managing the secret key itself, such as during renewal or key signing, I use a USB Armory with host adapter. In host mode, the Armory provides a trusted, open source platform that is compact and easily secured, making it ideal for key management.

Setting up the Armory is fairly straightforward. The Arch Linux ARM project provides prebuilt images. From my laptop, I follow their instructions to prepare the micro SD card, where /dev/sdX is the SD card.

$ dd if=/dev/zero of=/dev/sdX bs=1M count=8
$ fdisk /dev/sdX
# `o` to clear any partitions
# `n`, `p`, `1`, `2048`, `enter` to create a new primary partition in the first position with a first sector of 2048 and the default last sector
# `w` to write
$ mkfs.ext4 /dev/sdX1
$ mkdir /mnt/sdcard
$ mount /dev/sdX1 /mnt/sdcard

And then extract the image, doing whatever verification is necessary after downloading.

$ wget http://os.archlinuxarm.org/os/ArchLinuxARM-usbarmory-latest.tar.gz
$ bsdtar -xpf ArchLinuxARM-usbarmory-latest.tar.gz -C /mnt/sdcard
$ sync

Followed by installing the bootloader.

$ sudo dd if=/mnt/sdcard/boot/u-boot.imx of=/dev/sdX bs=512 seek=2 conv=fsync
$ sync

The bootloader must be tweaked to enable host mode.

$ sed -i '/#setenv otg_host/s/^#//' /mnt/sdcard/boot/boot.txt
$ cd /mnt/sdcard/boot
$ ./mkscr

For display I use a Plugable USB 2.0 UGA-165 adapter. To set up DisplayLink, the correct modules must be configured.

$ sed -i '/blacklist drm_kms_helper/s/^/#/g' /mnt/sdcard/etc/modprobe.d/no-drm.conf
$ echo "blacklist udlfb" >> /mnt/sdcard/etc/modprobe.d/no-drm.conf
$ echo udl > /mnt/sdcard/etc/modules-load.d/udl.conf

Finally, I copy over pass and ctmg so that I have them available on the Armory and unmount the SD card.

$ cp /usr/bin/pass /mnt/sdcard/bin/
$ cp /usr/bin/ctmg /mnt/sdcard/bin/
$ umount /mnt/sdcard

The SD card can then be inserted into the Armory. At no time during this process – or at any point in the future – is the Armory connected to a network. It is entirely air-gapped. As long as the image was not compromised and the Armory is stored securely, the platform should remain trusted.

Note that because the Armory is never on a network, and it has no internal battery, it does not keep time. Upon first boot, NTP should be disabled and the time and date set.

$ timedatectl set-ntp false
$ timedatectl set-time "yyyy-mm-dd hh:mm:ss" # UTC

On subsequent boots, the time and date should be set with timedatectl set-time before performing any cryptographic operations.

Cold Storage

This past spring I mentioned my cold storage setup: a number of encrypted 2.5” drives in external enclosures, stored inside a Pelican 1200 case, secured with Abloy Protec2 321 locks. Offline, secure, and infrequently accessed storage is an important component of any strategy for resilient data. The ease with which this can be managed with git-annex only increases my infatuation with the software.

I’ve been happy with the Seagate ST2000LM003 drives for this application. Unfortunately the enclosures I first purchased did not work out so well. I had two die within a few weeks. They’ve been replaced with the SIG JU-SA0Q12-S1. These claim to be compatible with drives up to 8TB (someday I’ll be able to buy 8TB 2.5” drives) and support USB 3.1. They’re also a bit thinner than the previous enclosures, so I can easily fit five in my box. The Seagate drives offer about 1.7 terabytes of usable space, giving this setup a total capacity of 8.5 terabytes.

Setting up git-annex to support this type of cold storage is fairly straightforward, but does necessitate some familiarity with how the program works. Personally, I prefer to do all my setup manually. I’m happy to let the assistant watch my repositories and manage them after the setup, and I’ll occasionally fire up the web app to see what the assistant daemon is doing, but I like the control and understanding provided by a manual setup. The power and flexibility of git-annex are deceptive. Using it solely through the simplified interface of the web app greatly limits what can be accomplished with it.

Encryption

Before even getting into git-annex, the drive should be encrypted with LUKS/dm-crypt. The need for this could be avoided by using something like gcrypt, but LUKS/dm-crypt is an ingrained habit and part of my workflow for all external drives. Assuming the drive is /dev/sdc, pass cryptsetup some sane defaults:

$ sudo cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 luksFormat /dev/sdc

With the drive encrypted, it can then be opened and formatted. I’ll give the drive a human-friendly label of themisto.

$ sudo cryptsetup luksOpen /dev/sdc themisto_crypt
$ sudo mkfs.ext4 -L themisto /dev/mapper/themisto_crypt

At this point the drive is ready. I close it and then mount it with udiskie to make sure everything is working. How the drive is mounted doesn’t matter, but I like udiskie because it can integrate with my password manager to get the drive passphrase.

$ sudo cryptsetup luksClose /dev/mapper/themisto_crypt
$ udiskie-mount -r /dev/sdc

Git-Annex

With the encryption handled, the drive should now be mounted at /media/themisto. For the first few steps, we’ll basically follow the git-annex walkthrough. Let’s assume that we are setting up this drive to be a repository of the annex ~/video. The first step is to go to the drive, clone the repository, and initialize the annex. When initializing the annex I prepend the name of the remote with satellite :. My cold storage drives are all named after satellites, and doing this allows me to easily identify them when looking at a list of remotes.

$ cd /media/themisto
$ git clone ~/video
$ cd video
$ git annex init "satellite : themisto"

Disk Reserve

Whenever dealing with a repository that is bigger (or may become bigger) than the drive it is being stored on, it is important to set a disk reserve. This tells git-annex to always keep some free space around. I generally like to set this to 1 GB, which is way larger than it needs to be.

$ git config annex.diskreserve "1 gb"

Adding Remotes

I’ll then tell this new repository where the original repository is located. In this case I’ll refer to the original using the name of my computer, nous.

$ git remote add nous ~/video

If other remotes already exist, now is a good time to add them. These could be special remotes or normal ones. For this example, let’s say that we have already completed this whole process for another cold storage drive called sinope, and that we have an s3 remote creatively named s3.

$ git remote add sinope /media/sinope/video
$ export AWS_ACCESS_KEY_ID="..."
$ export AWS_SECRET_ACCESS_KEY="..."
$ git annex enableremote s3

Trust

Trust is a critical component of how git-annex works. Any new annex will default to being semi-trusted, which means that when running operations within the annex on the main computer – say, dropping a file – git-annex will want to confirm that themisto has the files that it is supposed to have. In the case of themisto being a USB drive that is rarely connected, this is not very useful. I tell git-annex to trust my cold storage drives, which means that if git-annex has a record of a certain file being on the drive, it will be satisfied with that. This increases the risk of data loss, but for this application I feel it is appropriate.

$ git annex trust .

Preferred Content

The final step that needs to be taken on the new repository is to tell it what files it should want. This is done using preferred content. The standard groups that git-annex ships with cover most of the bases. Of interest for this application is the archive group, which wants all content except that which has already found its way to another archive. This is the behaviour I want, but I will duplicate it into a custom group called satellite. This keeps my cold storage drives as standalone things that do not influence any other remotes where I may want to use the default archive.

$ git annex groupwanted satellite "(not copies=satellite:1) or approxlackingcopies=1"
$ git annex group . satellite
$ git annex wanted . groupwanted

For other repositories, I may want to store the data on multiple cold storage drives. In that case I would create a redundantsatellite group that wants all content which is not already present in two other members of the group.

$ git annex groupwanted redundantsatellite "(not copies=redundantsatellite:2) or approxlackingcopies=1"
$ git annex group . redundantsatellite
$ git annex wanted . groupwanted

Syncing

With everything set up, the new repository is ready to sync and start ingesting content from the remotes it knows about!

$ git annex sync --content

However, the original repository also needs to know about the new remote.

$ cd ~/video
$ git remote add themisto /media/themisto/video
$ git annex sync

The same is the case for any other previously existing repository, such as sinope.

Redundant File Storage

As I’ve mentioned previously, I store just about everything that matters in git-annex (the only exception is code, which is stored directly in regular git). One of git-annex’s many killer features is special remotes. They make tenable this whole “cloud storage” thing that we do now.

A special remote allows me to store my files with a large number of service providers. It makes this easy to do by abstracting away the particulars of the provider, allowing me to interact with all of them in the same way. It makes this safe to do by providing encryption. These factors encourage redundancy, reducing my reliance on any one provider.

Recently I began playing with rclone. Rclone is a program that supports file syncing for a handful of cloud storage providers. That’s semi-interesting by itself but, more significantly, there is a git-annex special remote wrapper. That means any of the providers supported by rclone can be used as a special remote. I looked through all of rclone’s supported providers and decided there were a few that I had no reason not to use.

Hubic

Hubic is a storage provider from OVH with a data center in France. Their pricing is attractive. I’d happily pay €50 per year for 10TB of storage. Unfortunately they limit connections to 10 Mbit/s. In my experience they ended up being even slower than this. Slow enough that I don’t want to give them money, but there’s still no reason not to take advantage of their free 25 GB plan.

After signing up, I set up a new remote in rclone.

$ rclone config
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> hubic-annex
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 7
Hubic Client Id - leave blank normally.
client_id> 
Hubic Client Secret - leave blank normally.
client_secret> 
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = 
client_secret = 
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

With that set up, I went into my ~/documents annex and added the remote.

$ git annex initremote hubic type=external externaltype=rclone target=hubic-annex prefix=annex-documents chunk=50MiB encryption=shared rclone_layout=lower mac=HMACSHA512

I want git-annex to automatically send everything to Hubic, so I took advantage of standard groups and put the repository in the backup group.

$ git annex wanted hubic standard
$ git annex group hubic backup

Given Hubic’s slow speed, I don’t really want to download files from it unless I need to. This can be configured in git-annex by setting the cost of the remote. Local repositories default to 100 and remote repositories default to 200. I gave the Hubic remote a high cost so that it will only be used if no other remotes are available.

$ git config remote.hubic.annex-cost 500

If you would like to try Hubic, I have a referral code which gives us both an extra 5GB for free.

Backblaze B2

B2 is the cloud storage offering from backup company Backblaze. I don’t know anything about them, but at $0.005 per GB I like their pricing. A quick search of reviews shows that the main complaint about the service is that they offer no geographic redundancy, which is entirely irrelevant to me since I build my own redundancy with my half-dozen or so remotes per repository.

Signing up with Backblaze took a bit longer. They wanted a phone number for 2-factor authentication, I wanted to give them a credit card so that I could use more than the 10GB they offer for free, and I had to generate an application key to use with rclone. After that, the rclone setup was simple.

$ rclone config
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> b2-annex
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 3
Account ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint> 
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

With that, it was back to ~/documents to initialize the remote and send it all the things.

$ git annex initremote b2 type=external externaltype=rclone target=b2-annex prefix=annex-documents chunk=50MiB encryption=shared rclone_layout=lower mac=HMACSHA512
$ git annex wanted b2 standard
$ git annex group b2 backup

While I did not measure the speed with B2, it feels as fast as my S3 or rsync.net remotes, so I didn’t bother setting the cost.

Google Drive

While I do not regularly use Google services for personal things, I do have a Google account for Android stuff. Google Drive offers 15 GB of storage for free and rclone supports it, so why not take advantage?

$ rclone config
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> gdrive-annex
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 6
Google Application Client Id - leave blank normally.
client_id> 
Google Application Client Secret - leave blank normally.
client_secret> 
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id = 
client_secret = 
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

And again, to ~/documents.

$ git annex initremote gdrive type=external externaltype=rclone target=gdrive-annex prefix=annex-documents chunk=50MiB encryption=shared rclone_layout=lower mac=HMACSHA512
$ git annex wanted gdrive standard
$ git annex group gdrive backup

Rinse and repeat the process for other annexes. Revel in having simple, secure, and redundant storage.

I treat myself to a new laptop every three or four years.

A few weeks ago I bought a Lenovo Thinkpad X260, replacing the T430s that has been my daily driver since 2012. I’m a big fan of the simplicity, ruggedness, and modularity of Thinkpads. It used to be that one of the only downsides to Thinkpads was the terrible screens, but that has been addressed by the X260’s FHD display. The high resolution let me move from the 14” display of the T430s to the 12.5” display of the X260 without feeling like I’ve lost anything, but with an obvious gain in portability. The X260 is a great machine to put Linux on, which Spark helps me to do with no effort and a minimum expenditure of time.
