Documenting my backup setup as it changes over time.

2017-12-24 Borg Backup

The new laptop uses PureOS, a Debian variant, and thus will not use the Apple Time Machine backup disks. What else to use? I asked around on Mastodon:

What do you use to backup an entire GNU/Linux laptop to an external disk? Ideally it would be a bootable backup, of course, but that’s not mandatory. Déjà Dup Backup Tool seems to be designed for just user data. Duplicity still generates opaque files. Ideally, I’d use an encrypted external disk and just rsync every hour if the disk is mounted, and delete old backups when running out of space. Does this wrapper script already exist? Something like Time Machine for every Unix out there.

I mean that rsync creates linked trees that look like complete sets of backups for every time period, so you can delete old link trees and the actual file content will only get deleted when it is no longer referred to. You basically want a clever use of the --link-dest parameter, as illustrated in the linked article. That also solves the problem with --delete deleting files in your backup.
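Something like this is what I had in mind – a rough sketch with made-up paths and a “latest” symlink of my own:

#!/bin/sh
# Every run creates a new dated tree; files that have not changed are
# hard-linked into the previous tree, so each tree looks like a full backup.
NOW=$(date +%Y-%m-%d-%H%M)
rsync --archive --delete \
      --link-dest=/mnt/backup/latest \
      /home/ "/mnt/backup/$NOW/"
rm -f /mnt/backup/latest
ln -s "/mnt/backup/$NOW" /mnt/backup/latest
# Deleting an old tree only frees the blocks that no other tree links to.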

Radomir suggested Borgbackup and that is what I went with. I still don’t like the opaque file format, but I have to start somewhere.

Following the tutorial Automated backups to a local hard drive, I ran into only a small number of problems.

  1. you have to mkdir /mnt/backup/borg-backups before creating the repo
  2. you have to chmod +x the backup script before running it
  3. if you create your repo using borg init --encryption=repokey --progress /mnt/backup/borg-backups/backup.borg you will be asked for a passphrase, and you then need to export BORG_PASSPHRASE="*secret*" in the script, as indicated by the comment somewhere in the middle

When all that was done, it still wouldn’t run using systemctl start --no-block automatic-backup.

Here’s what it says when I check using journalctl -fu automatic-backup:

-- Logs begin at Sat 2017-12-23 14:10:29 CET. --
Dec 24 15:45:09 melanobombus[32376]: EOFError
Dec 24 15:45:09 melanobombus[32376]: Platform: Linux melanobombus 4.13.0-1-amd64 #1 SMP Debian 4.13.10-1 (2017-10-30) x86_64
Dec 24 15:45:09 melanobombus[32376]: Linux: PureOS 8 green
Dec 24 15:45:09 melanobombus[32376]: Borg: 1.1.3  Python: CPython 3.6.4rc1
Dec 24 15:45:09 melanobombus[32376]: PID: 32397  CWD: /
Dec 24 15:45:09 melanobombus[32376]: sys.argv: ['/usr/bin/borg', 'create', '--stats', '--one-file-system', '--compression', 'lz4', '--checkpoint-interval', '86400', '--exclude', '/root/.cache', '--exclude', '/var/cache', '--exclude', '/var/lib/docker/devicemapper', '/mnt/backup/borg-backups/backup.borg::2017-12-24-melanobombus-32376-system', '/', '/boot']
Dec 24 15:45:09 melanobombus[32376]: SSH_ORIGINAL_COMMAND: None
Dec 24 15:45:09 melanobombus systemd[1]: automatic-backup.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 24 15:45:09 melanobombus systemd[1]: automatic-backup.service: Failed with result 'exit-code'.
Dec 24 15:45:09 melanobombus systemd[1]: Failed to start automatic-backup.service.

But I was able to run it manually using /usr/bin/borg create --stats --one-file-system --compression lz4 --checkpoint-interval 86400 --exclude /root/.cache --exclude /var/cache --exclude /var/lib/docker/devicemapper /mnt/backup/borg-backups/backup.borg::2017-12-24-melanobombus-32376-system / /boot and /usr/bin/borg create --stats --one-file-system --compression lz4 --checkpoint-interval 86400 --exclude 'sh:/home/*/.cache' /mnt/backup/borg-backups/backup.borg::2017-12-24-melanobombus-32376-home /home/ so I’m not quite sure what the problem is.

Any ideas?

I just came back from the family event, ran the script as root, no problems. Then I started the backup service via systemctl, no problem. I guess it just works, for the moment?

In theory, plugging in the drive should now mount it automatically and once that happens, a new backup will be made.

This last part actually needs an explanation. I used the Disks application to format the external disk and mount it.

Screenshot of the Disks application

This resulted in a change to /etc/fstab:

LABEL=Backup /mnt/backup auto nosuid,nodev,nofail,noauto,x-gvfs-show 0 0

Thus, any disk labeled “Backup” will be mounted as /mnt/backup.


Let me quickly copy and paste the content of the various files in /etc/backups just in case the original documentation changes.

/etc/backups $ ls -l
total 5
-rw-r--r--   1 root           root      130 2017-12-24 15:28 40-backup.rules
-rw-r--r--   1 root           root        0 2017-12-24 15:30 autoeject-no
-rw-r--r--   1 root           root       53 2017-12-24 15:28 automatic-backup.service
-rw-r--r--   1 root           root       37 2017-12-24 15:33 backup.disks
-rwx------   1 root           root     2712 2017-12-24 15:46
-rwx------   1 root           root     2665 2017-12-24 15:29


40-backup.rules

You installed a symlink to this file using ln -s /etc/backups/40-backup.rules /etc/udev/rules.d/40-backup.rules.

ACTION=="add", SUBSYSTEM=="bdi", DEVPATH=="/devices/virtual/bdi/*", TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"


autoeject-no

This is an empty file for me to rename. If the file autoeject exists, the disk will be ejected after the backup is made. I don’t know whether I will use this feature. This file serves as a reminder. See the end of the backup script for details.


automatic-backup.service

You installed a symlink to this file using ln -s /etc/backups/automatic-backup.service /etc/systemd/system/automatic-backup.service.
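The service itself is tiny. A minimal sketch of what it contains – the script name is a placeholder for whatever the backup script in /etc/backups is actually called:

[Service]
Type=oneshot
# The path below is a placeholder; point it at the backup script in /etc/backups
ExecStart=/etc/backups/run.sh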



backup.disks

This file lists the UUIDs of the disks that are actual backup disks; all the others will be ignored by the backup script even if they are mounted as /mnt/backup. The UUIDs come from lsblk -o+uuid,label, so what you need to put here will differ from mine!
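The file itself is just a list of partition UUIDs, one per line. Using the disk that shows up in the journal further down, mine contains lines like this one:

156cf4df-aa58-421e-b3d0-583fe6fdff4a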


When I ran lsblk -o+uuid,label I also saw that the disk was /dev/sdb. To mount it for the first time: mount /dev/sdb /mnt/backup. Run mkdir /mnt/backup/borg-backups to create the directory. Run borg init --encryption=repokey --progress /mnt/backup/borg-backups/backup.borg to create the repo in that directory.
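In other words, the first-time setup of a new disk is just these three commands:

mount /dev/sdb /mnt/backup
mkdir /mnt/backup/borg-backups
borg init --encryption=repokey --progress /mnt/backup/borg-backups/backup.borg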

Don’t forget to search for BORG_PASSPHRASE and change it to whatever you used when you ran borg init --encryption=repokey --progress /mnt/backup/borg-backups/backup.borg.

Run sudo chmod 0700 on the backup script to hide the passphrase from everybody else and to keep it executable.

#!/bin/bash -ue

# The udev rule is not terribly accurate and may trigger our service before
# the kernel has finished probing partitions. Sleep for a bit to ensure
# the kernel is done.
# This can be avoided by using a more precise udev rule, e.g. matching
# a specific hardware path and partition.
sleep 5

# Script configuration

# The backup partition is mounted there
MOUNTPOINT=/mnt/backup

# This is the location of the Borg repository
TARGET=$MOUNTPOINT/borg-backups/backup.borg

# Archive name schema
DATE=$(date --iso-8601)-$(hostname)

# This is the file that will later contain UUIDs of registered backup drives
DISKS=/etc/backups/backup.disks

# Find whether the connected block device is a backup drive
for uuid in $(lsblk --noheadings --list --output uuid)
do
        if grep --quiet --fixed-strings $uuid $DISKS; then
                break
        fi
        uuid=
done

if [ ! $uuid ]; then
        echo "No backup disk found, exiting"
        exit 0
fi

echo "Disk $uuid is a backup disk"
partition_path=/dev/disk/by-uuid/$uuid
# Mount file system if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
# it was mounted somewhere else.
(mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
echo "Drive path: $drive"

# Create backups

# Options for borg create
BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"

# Set BORG_PASSPHRASE or BORG_PASSCOMMAND somewhere around here, using export,
# if encryption is used.
export BORG_PASSPHRASE="*secret*"

# No one can answer if Borg asks these questions, it is better to just fail quickly
# instead of hanging.
export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no

# Log Borg version
borg --version

echo "Starting backup for $DATE"

borg create $BORG_OPTS \
  --exclude /root/.cache \
  --exclude /var/cache \
  --exclude /var/lib/docker/devicemapper \
  $TARGET::$DATE-$$-system \
  / /boot

echo "Completed backup for $DATE"

borg prune                          \
    --list                          \
    --show-rc                       \
    --keep-daily    7               \
    --keep-weekly   4               \
    --keep-monthly  6               \
    $TARGET

# Just to be completely paranoid
sync

if [ -f /etc/backups/autoeject ]; then
        umount $MOUNTPOINT
        hdparm -Y $drive
fi

if [ -f /etc/backups/backup-suspend ]; then
        systemctl suspend
fi

Examining the backups

Listing the archives:

$ sudo borg list /mnt/backup/borg-backups/backup.borg
Enter passphrase for key /mnt/backup/borg-backups/backup.borg: 
2017-12-24-melanobombus-32376-home   Sun, 2017-12-24 16:10:27 [64279b2b27c17174ce8673e0eb8b1e9c8f16057300baf7edf1a1491facb87eba]
2017-12-25-melanobombus-3598-system  Mon, 2017-12-25 01:36:41 [2c28d0fa20d3e8cc818c21c3273abedd6c5dd113034ac88a0b08800aeb4215d5]
2017-12-26-melanobombus-3809-system  Tue, 2017-12-26 19:09:22 [f5df4b0dc7d280317591e599e3c715fcfaa89e260fbdf300237980314ad8d894]

Mounting an archive:

$ sudo mkdir /mnt/borg
$ sudo borg mount /mnt/backup/borg-backups/backup.borg::2017-12-26-melanobombus-3809-system /mnt/borg
Enter passphrase for key /mnt/backup/borg-backups/backup.borg: 
$ sudo ls /mnt/borg/etc/backups
40-backup.rules  autoeject-no  automatic-backup.service  backup.disks

Looking good!
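To unmount the archive again, and to get a single file back without mounting anything, something like this should do – borg extract writes into the current directory, and the path is just an example:

$ sudo borg umount /mnt/borg
$ cd /tmp
$ sudo borg extract /mnt/backup/borg-backups/backup.borg::2017-12-26-melanobombus-3809-system etc/fstab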


Handle exit codes? See Automatic backups.

Now that I have seen it all in action, perhaps Déjà Dup Backup Tool is close enough? After all, I still have opaque files now. :( And Déjà Dup is well integrated into the system...


Comments on 2017-12-24 Borg Backup

Moin Alex, nice post!

Some comments:

“you have to mkdir /mnt/backup/borg-backups before creating the repo”

borg creates the repo dir if it does not exist, but maybe it does not create missing parent dirs of it. Not sure if that would be an improvement if it did, esp. considering typos.

--checkpoint-interval 86400

be careful with a checkpoint interval that long. You will lose up to a day of “backup work” if the connection breaks down before finishing.

DATE=$(date --iso-8601)-$(hostname)

borg can expand {utcnow}-{hostname} internally.

borg --version

borg create ... --show-version ... (similar to --show-rc)

borg prune: maybe add --stats so it tells how much repo space it freed.

systemd issues: no idea

Cheers, Thomas

Thomas Waldmann 2017-12-29 01:08 UTC

Thank you for the comments!

I read up on the various options and changed the script to now run as follows:

borg create                         \
     --stats                        \
     --one-file-system              \
     --compression lz4              \
     --show-version                 \
     --exclude /root/.cache         \
     --exclude /var/cache           \
     $TARGET::{utcnow}-{hostname}   \
     / /boot

borg prune                          \
     --stats                        \
     --list                         \
     --show-rc                      \
     --keep-daily    7              \
     --keep-weekly   4              \
     --keep-monthly  6              \
     $TARGET

– Alex 2017-12-29 13:20 UTC

As a reminder to myself: what to do when you want to add another disk to your rotating disk schedule?

  1. I used the disk utility to format and partition the backup disk. Partitioning: GUID Partition Table. Volume: Ext4.
  2. mkdir /mnt/backup/borg-backups/
  3. borg init --encryption=repokey --progress /mnt/backup/borg-backups/backup.borg
  4. use the same passphrase as the one I put in the backup script in /etc/backups/
  5. use lsblk --list --output=uuid,mountpoint to find the new UUID
  6. add this UUID to /etc/backups/backup.disks

Unmount the disk and unplug it, then plug it again and look at the output of sudo journalctl -fu automatic-backup.

– Alex Schroeder 2018-02-08 07:06 UTC

Make sure you check the journal! Today I ran sudo journalctl -fu automatic-backup and saw:

Feb 16 14:14:18 melanobombus systemd[1]: Starting automatic-backup.service...
Feb 16 14:14:24 melanobombus[1728]: No backup disk found, exiting
Feb 16 14:14:24 melanobombus systemd[1]: Started automatic-backup.service.

I was confused but finally decided to just try again, running sudo umount /mnt/backup and unplugging the disk, plugging it back in again, and then it worked:

Feb 16 14:22:28 melanobombus systemd[1]: Starting automatic-backup.service...
Feb 16 14:22:33 melanobombus[3102]: Disk 156cf4df-aa58-421e-b3d0-583fe6fdff4a is a backup disk
Feb 16 14:22:33 melanobombus[3102]: /dev/sdb1 on /mnt/backup type ext4 (rw,nosuid,nodev,relatime,data=ordered,x-gvfs-show)
Feb 16 14:22:33 melanobombus[3102]: Drive path: /dev/sdb1


– Alex Schroeder 2018-02-16 13:24 UTC


2017-09-04 Backup

I bought three new 4T disk drives for my backup needs! Two of these will be used in rotation, one of them always at my wife’s office. I’d like to encrypt them. Do I use the Apple tools to do it? Maybe I should.

I think this is what I want to do:

  1. I want to backup my laptop’s internal drive, of course.
  2. Use one of the 4T disks as the new external disk, replacing the 1T disk I currently use (called “Extern”).
  3. I also want to replace the other external disk we use for media (called “Movies”)
  4. Backup my websites using rsync.
  5. Use Time Machine for the two other 4T disks. This means that eventually, as the first disk starts to fill up, a complete backup will no longer be possible. But since all my backups are currently on 1T disks, this should be possible for quite a while.

And these are the steps I need to do:

  1. Pick a nice long password.
  2. Use Disk Utility to erase the first 4T disk and create a Mac OS Extended (Case-sensitive, Journaled, Encrypted) partition. Let’s call it “Data”.
  3. Copy all the data from “Extern” to “Data” in archive mode. Can I use cp -a for this? I think I’m better off using what I know: sudo rsync --archive --itemize-changes /Volumes/Extern/ /Volumes/Data
  4. Copy all the data from “Movies” to “Data” in archive mode. sudo rsync --archive --itemize-changes /Volumes/Movies/ /Volumes/Data should merge these without problems, as far as I can tell from the top level directories.
  5. Fix the existing backupscript such that it downloads the sites and /etc to the new “Data” drive; remove the rsync invocations for the local drives.
  6. Use Disk Utility to erase the second 4T disk and create a Mac OS Extended (Journaled) partition. Let’s call it “Time Machine 1”. Tell Time Machine to use it, and make sure the backup is encrypted. Send it off site.
  7. Use Disk Utility to erase the third 4T disk and create a Mac OS Extended (Journaled) partition. Let’s call it “Time Machine 2”. Tell Time Machine to use it, and make sure the backup is encrypted, too.


“About This Mac” reports:

  1. macOS Sierra, Version 10.12.6
  2. MacBook Pro (13-inch, Mid 2010)

Disk Utility reports:

  1. Hitachi HTS545025B9SA02 Media (the internal disk, 250GB)
  2. TOSHIBA External USB 3.0 Media (“CANVIO for Desktop”, 4TB), twice


Comments on 2017-09-04 Backup

Many hours later, I copied the contents of my old “Extern” disk and my old “Movies” disk (a holdover from the old sneakernet days when people would visit one another with hard disks in order to share) to the new “Data” disk.

So on to the next step: I plugged in the second disk, used Disk Utility to rename it to “Backup” and used Time Machine to set it up as an encrypted backup. I made sure to look at Options and removed the exclusion of the external “Data” disk. I want it included, after all.

I do wonder how good this Apple disk and backup encryption is.

– Alex 2017-09-05 13:28 UTC

Wow. Many hours later and we have 440GB of an estimated 1.8TB written. Time Machine is slooow.

– Alex 2017-09-05 19:38 UTC

OK, today I learned: one full backup takes more than 24h. The next laptop definitely needs USB 3.

– Alex 2017-09-06 13:29 UTC

Ok, disk “Backup” is done. Time Machine said “Encrypting Backup: 7%.” What is this? I thought it was all encrypted?

Oh well, since I was able to unmount it, I just went ahead and plugged in the third disk, erased it, called it “Backup 2”, and told Time Machine to use it without discarding the first Backup disk. So now it will backup to both. This is good.

And now that I have a new set of disks, I should definitely check the disks. But before doing all that, I will have to prepare:

  1. install the SMART driver, check all three disks
  2. much later, uninstall the SMART driver, mount all the old disks and wipe them
  3. install the SMART driver again


– Alex 2017-09-06 14:44 UTC

OK, second backup done. When I ejected the disk it said “Encrypting Backup 6%”. I still wonder what that means.

Just to get a feeling for how things work, I decided to put the first backup disk back in and clicked “Backup Now” in the menu. To be honest, I thought Time Machine should detect the old backup disk immediately, notice that the last backup was older than one hour and immediately do another backup. Not so, unfortunately.

“Preparing Backup...”

– Alex 2017-09-07 12:36 UTC

I am happy to learn that this new backup is “490MB of 28.38GB” done.

– Alex 2017-09-07 12:39 UTC

Sadly, this is where it remains. Currently: 492MB. The estimate is: 2h remaining.

– Alex 2017-09-07 12:54 UTC

Ugh, status unchanged. This is not cool.

– Alex 2017-09-07 14:06 UTC

I was unsure of what to do and so I turned to the age old trick: I rebooted the system.

– Alex 2017-09-07 14:32 UTC

Rebooting with the drives connected left me with the grey apple screen and ventilators at 100%. I disconnected the new USB drives and held down the power button until it powered down. I am not liking this!

– Alex 2017-09-07 14:53 UTC

After rebooting and reconnecting the drives I was asked for the two passwords and the icon for the backup drive turned into the petrol colored backup icon. Good!

Picked “Backup Now” from the menu. Current status: “Preparing Backup...”

– Alex 2017-09-07 14:58 UTC

Status: “Encrypting Backup Disk: 9%”

– Alex 2017-09-07 19:04 UTC

After a longer trip abroad I returned to this backup and found this: “Encrypting Backup Disk… (16%)” – I googled for time machine status encrypting and found this: “What you describe is completely normal. It will take the better part of a day to finish encrypting 48 GB with a rotating hard disk drive.” [1] I have about 1.71 TB of data on this drive. This is a major pain.

And I still don’t understand what is happening. I have an encrypted disk (I get asked for a password when mounting it), and yet there seems to be a second layer of encryption that is applied later. This would seem to be ridiculous. The person maintaining the Time Machine FAQ says that this doesn’t happen. “If I use the encrypted disk AND choose the Time Machine encryption option, will everything be encrypted twice? No.” [2]

I just wonder how to explain what I’m seeing and I wonder whether I should just switch to Carbon Copy Cloner.

– Alex 2017-10-22 06:07 UTC

Well, 24h later it’s still encrypting the backup disk, now at 35%. In short, about 20% per day. This will not do. What happens when I get an OS update?

I feel like I should try a restart: If your disk is encrypting for an unbearably long time, cancel it, erase and encrypt the drive first, and then start the Time Machine backup.

– Alex 2017-10-23 05:23 UTC

I opened up Disk Utility, picked the disk, clicked Erase, and unmounted the disk, but it had failed to erase it. Then I checked the progress monitor and it said: Latest Backup to “Backup”: Today, 06:35 (i.e. 50 min ago).

No more “Encrypting…” I guess I can just keep unmounting the backup disk mid-encryption.

– Alex 2017-10-23 05:26 UTC

Uaaaaagh. Worst Case Scenario!

I switched backup drives, so now I have “Backup 2” at home. Plugged it in, provided password: it is refused. WTF!

– Alex 2017-10-24 21:38 UTC

Erasing disk using Disk Utility, picking Mac OS Extended (Case-sensitive, Journaled, Encrypted). Getting the message that the identity of backup disk “Backup 2” has changed since the previous backup. I answer Use This Disk.

– Alex 2017-10-24 21:46 UTC

Now it says it’s 4TB Unformatted and 4TB Backup 2. This partition map is confused.

– Alex 2017-10-24 21:48 UTC

Even erasing the disk from the command line doesn’t help. diskutil eraseDisk jhfsx "Backup 2" /dev/disk1 results in the same extra partition when using Disk Utility. The only thing is that the output on the command line looks good:

alex@Megabombus:~$ diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *250.1 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            249.2 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3

/dev/disk1 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *4.0 TB     disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
   2:                  Apple_HFS Backup 2                4.0 TB     disk1s2

I also saw something interesting in the diskutil manpage:

At this point, if no encryption was specified, all is done. Otherwise, the bytes-on-disk will begin to be encrypted in-place by CoreStorage automatically “in the background” while the PV/LVG/LVF/LV stack continues to be usable. Encryption progress may be monitored with diskutil coreStorage list.

When encryption is finished, a Disk passphrase will be required the next time the LV is ejected and re-attached. If the LV is hosting the boot volume, this passphrase requirement will thus occur at the next reboot.

Note that all on-disk data is not secured immediately; it is a deliberate process of encrypting all on-disk bytes while the CoreStorage driver keeps publishing the (usable) LVG/LV.

I guess there’s no helping it, then.

– Alex 2017-10-24 22:10 UTC

So now I erased the disk, accepted the strange partition display in Disk Utility, and told Time Machine to accept the disk even though its identity has changed (note that I erased it from the command line without providing a passphrase using diskutil corestorage convert <device> -stdinpassphrase). I wonder whether Time Machine will still encrypt it?

– Alex 2017-10-24 22:13 UTC

Hah, got a warning about backing up from an encrypted disk (Data) to an unencrypted disk (Backup 2). So now I ran diskutil corestorage convert "Backup 2" -stdinpassphrase and I’m doing another backup.

– Alex 2017-10-24 22:29 UTC

More Time Machine sadness: if you use hard links on your system in order to save space, you will be sad to learn that the files will get duplicated in your backups. [3]

– Alex 2017-10-25 15:07 UTC


2014-02-14 Backup

This got started on 2006-08-03 Backup using rsync. We’re using several external USB harddisks to keep backups. The most important part is that we have two external disks and we keep at least one of them at Claudia’s office. Just making sure that a simple fire in our flat cannot destroy all our data. I do backups using rsync. Note my Restore page.

Look at how the external drives are set up. The Extern disk is always connected and it’s where both I and Claudia keep files such as the iTunes library. That’s why it has the “noowners” flag set. The Backup disk, on the other hand, needs to preserve ownership.

alex@Megabombus:~$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk2s3 on /Volumes/Backup (hfs, local, nodev, nosuid, journaled)
/dev/disk1s2 on /Volumes/Extern (hfs, local, nodev, nosuid, journaled, noowners)

I run this from my main machine, Megabombus.


if [ ! -d /Volumes/Backup ]; then
    echo you need to mount the backup drive, first
    exit 1
fi

if [ -z "$1" -o "$1" == "Megabombus" ]; then
    echo Megabombus
    sudo rsync --archive --delete --delete-excluded \
	--itemize-changes \
	--exclude=.Spotlight-V100 --exclude=.DS_Store \
	--exclude="/Users/*/Library/Caches" \
	--exclude="/Users/*/.Trash" --exclude="/.Trashes" \
	--exclude="/Volumes" --exclude=".fseventsd" \
	--exclude="/Users/*/Library/Application Support/Wuala/Data/Temp" \
	/ /Volumes/Backup/Machines/Megabombus
else
    echo skipping Megabombus
fi

if [ -d /Volumes/Extern -a \( -z "$1" -o "$1" == "Extern" \) ]; then
    echo Extern
    sudo rsync \
	--itemize-changes \
	--partial --archive --delete --delete-excluded --verbose \
	--exclude=.Spotlight-V100 --exclude=.DS_Store \
	--exclude=/Extern/.Trashes --exclude=/Extern/.fseventsd \
	/Volumes/Extern /Volumes/Backup
else
    echo skipping Extern
fi

if [ -z "$1" -o "$1" == "Psithyrus" -o "$1" == "net" ]; then
    echo Psithyrus
    rsync --archive --verbose --compress --delete --delete-excluded \
	--itemize-changes \
	--exclude '/logs' \
	--exclude '/planet/rpg' \
	--exclude 'temp/' \
	--exclude 'pids/' \
	--exclude 'visitors.log' \
	--exclude 'referer/' \
	--exclude '.git/' \
	--iconv=UTF8-MAC,UTF-8
else
    echo skipping Psithyrus
fi

if [ -z "$1" -o "$1" == "Emacs Wiki" -o "$1" == "net" ]; then
    echo Emacs Wiki
    rsync --archive --verbose --compress --delete --delete-excluded \
	--itemize-changes \
	--exclude '/org.emacswiki/logs' \
	--exclude '/org.emacswiki/htdocs/*/visitors.log' \
	--exclude '/org.emacswiki/htdocs/*/pids/' \
	--exclude '/org.emacswiki/htdocs/*/temp/' \
	--exclude '/org.emacswiki/htdocs/*/referer' \
	--exclude '/org.emacswiki/htdocs/emacs/git' \
	--iconv=UTF8-MAC,UTF-8
else
    echo skipping Emacs Wiki
fi

if [ -z "$1" -o "$1" == "Raspberry Pi" -o "$1" == "net" ]; then
    echo Raspberry Pi
    if ping -q -c1 raspberrypi.local > /dev/null; then
      for ME in pi alex; do
        echo ... $ME
	mkdir -p /Volumes/Backup/Machines/raspberrypi.local/home/$ME
	rsync --archive --verbose --compress --delete --delete-excluded \
	    --iconv=UTF8-MAC,UTF-8 \
	    $ME@raspberrypi.local: \
	    /Volumes/Backup/Machines/raspberrypi.local/home/$ME
      done
    else
	notify Raspberry Pi cannot be reached
    fi
else
    echo skipping Raspberry Pi
fi

if [ -z "$1" -o "$1" == "Subterraneobombus" -o "$1" == "net" ]; then
    echo Subterraneobombus
    if ping -q -c1 subterraneobombus.local > /dev/null; then
	rsync --archive --verbose --compress --delete --delete-excluded \
	    --exclude '/tmp' \
	    --exclude '/dev' --exclude '/proc' --exclude '/sys' \
	    --exclude '/home/alex/.local/share/Trash' \
	    --exclude '/home/alex/.mozilla/firefox/*/Cache' \
	    --exclude '/home/alex/Videos' \
	    --iconv=UTF8-MAC,UTF-8 \
	    alex@subterraneobombus.local:/ \
	    /Volumes/Backup/Machines/Subterraneobombus
    else
	notify Subterraneobombus cannot be reached
    fi
else
    echo skipping Subterraneobombus
fi

notify Backup finished.
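
Called without an argument the script runs all of the backups; with an argument it runs just one of them. The file name here is made up, but the idea is:

./backup.sh            # everything
./backup.sh Extern     # just the Extern disk
./backup.sh net        # just the remote machines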



2012-07-20 rsync backups

I have a job that creates backups of my sites using rsync. My sites are in Germany and the USA, the backups are in Canada and Chile. The point was to protect myself against hosting services disappearing and my sites getting lost. Recently I was thinking about data corruption, however. As soon as the cronjob writes the corrupted data to the backups, there is no way to retrieve my data. (There is in fact a third backup: every few weeks I use rsync to copy the remote sites to one of a rotating set of mobile disks, one of which is always outside our apartment.)

There is in fact an option for rsync which will allow you to create copies of your file tree at certain intervals using hard links for the files that haven’t changed. I found a tutorial on how to do it: Time Machine for every Unix out there subtitled “Using rsync to mimic the behavior of Apple’s Time Machine feature.”

And that’s exactly what I did.

Update: I soon disabled it again because I was running out of disk space. :)



2008-06-17 Advanced Rsync Issues

So I’m trying to use rsync to backup stuff on my various servers out there to my newly acquired external disks. Just in case the hosting providers are having backup issues. ;)

The source system is a GNU/Linux system using ext3, the target is a Mac using HFS+. That is, the target uses a slightly modified Unicode normalization form D for its UTF-8 filenames instead of the normalization form C used on the source, and the target is not case-sensitive – it is case-preserving only. The mind recoils in fear!
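To make the difference concrete: the same “é” is one precomposed code point in NFC, but a letter plus a combining accent in the decomposed form HFS+ stores. A quick check, assuming an iconv that knows the UTF-8-MAC encoding (the one on the Mac does):

$ printf 'é' | xxd                                # NFC: c3 a9
$ printf 'é' | iconv -f UTF-8 -t UTF-8-MAC | xxd  # NFD: 65 cc 81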

The first part of the problem is addressed in the rsync FAQ rsync recopies the same files:

rsync --archive --verbose --compress  --delete --delete-excluded \
--exclude '/org.emacswiki/logs' \
--iconv=UTF-8,UTF8-MAC \


rsync: --iconv=UTF-8,UTF8-MAC: unknown option

Oh no, here we go again... :(

Update: Building rsync from source was painless. Yay!!

But now:

rsync: on remote machine: --iconv=UTF8-MAC: unknown option




2006-10-14 iPhoto Corruption

Claudia got back from her holidays in Crete. We connected the camera to the Mac Mini, switched it on, iPhoto started, we imported the pictures. Then we noticed: How weird, the new pictures came right in the middle of the archive. Somewhere in 2005. When I searched for “Kreta”, we got all the pictures from Crete 2006 and all the pictures from Beirut 2005. W00t!?

Some rearranging, investigating, restarting... And suddenly the complete archive only lists all the pictures up to Beirut and all later pictures were lost, unless you clicked down to the archive for 2006. Another restart, and iPhoto offered to reimport some new pictures. And it turned out that it reimported two more sets of the Crete pictures. When I rebuilt the library using the special key-combo while starting iPhoto, everything looked ok (and the Crete images were gone), but once I reimported them from a folder, things got mixed up again. Three pictures from older days suddenly got reassigned to the Crete folder.

I think I’m going to restore the iPhoto library from backup and try again.

This sucks!

Time passes. Restore from backup, everything looks ok. Import pictures from Crete folder, and there we go again!! AAAARARRRRGHH!! >{

So what we had before was a folder called Pictures/iPhoto Library/Data/2005/2005-02-22 Beirut. After importing the pictures from a folder called “Kreta 2006”, the old Beirut folder disappeared and all its pictures got moved into a new folder called Pictures/iPhoto Library/Data/2005/Kreta 2006. All the Crete pictures got moved into Pictures/iPhoto Library/Data/2005/Kreta 2006_2.

Well, at least they didn’t all end up in the same folder like last time. But it sure sucks like hell!

What are we supposed to do now?

I think what I will do is restore a library from the backup, but keep it under a different name. And we’ll delete the current library, so that iPhoto will start a new one. At least we can navigate both libraries. Neither library will be corrupted. And we could even merge them later, should that be necessary.



2006-10-12 VFAT and rsync

Gah. I decided to copy all my audio files to a small extra portable disk I have so that I could easily transfer stuff between home and office. It turns out that VFAT has some curious properties which cause some files to get retransferred again and again. Grrrr!

What should I do? In theory I could install ext2 drivers on both systems and use that, hehe. But honestly: There must be a better way!

I still remember my ext2 woes where OSX would kernel panic if a command-line tool read a Latin-1 encoded filename (the OSX Finder would just truncate right there).

I guess I could format it as UFS and use ufs2tools to access the files from the command line. Extra hassle... And nobody knows whether they’ll work for external USB drives, either! I think we’re stuck with a crappy VFAT filesystem for this kind of task.

man rsync suggests using --modify-window: In particular, when transferring to Windows FAT filesystems which cannot represent times with a 1 second resolution, --modify-window=1 is useful.
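So the copy should look something like this – the paths are placeholders for my actual ones, and --archive is left out because VFAT cannot store Unix ownership and permissions anyway:

rsync --recursive --times --verbose --modify-window=1 ~/Music/ /Volumes/PORTABLE/Music/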

And it works!!

sent 267517 bytes  received 20 bytes  59452.67 bytes/sec
total size is 38204530806  speedup is 142800.92


Comments on 2006-10-12 VFAT and rsync

I’ve seen this fortune on irc and thought about you :)

Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it ;) – Linus Torvalds, about his failing hard drive on linux.cs.helsinki.fi

– PierreGaston 2006-10-13 14:33 UTC

Haha! :)

– Alex


2006-08-03 Backup using rsync

Here’s how I back up my current home directory on the Mac Mini onto an external USB harddisk using rsync.

rsync --archive --verbose --delete --delete-excluded --exclude=/Library/Caches --exclude=/.Trash "/Users/alex/" "/Volumes/Media Backup/Alex Pyrobombus"

For my laptop:

rsync --archive --copy-unsafe-links --verbose --delete --delete-excluded --exclude=/Library/Caches --exclude=/apache2 --exclude=/.Trash "/Users/alex/" "/Volumes/Media Backup/Alex Alpinobombus"

I don’t have any music on my laptop... ;) I’m excluding apache2 because that is a symlink to /usr/local/apache2 and this causes a chgrp error when running rsync. I use --copy-unsafe-links because some of the links in Library/NeoOfficeJ-1.1 point outside of my home directory.

That reminds me, by the way, that I need to buy a second enclosure for the second internal 120G IDE drive I have left-over from my old Confusibombus machine. The old dead and empty hull is all that remains of my self-assembled Pentium 4 SlackWare. BumbleBees don’t last forever...

Update: I bought an “M9-DX Mini Pod” with one upstream USB 2.0 port (B type) and three downstream ports (A type), three FireWire 400 ports, a passive heat sink, and a thermal probe to regulate fan speed.

While backing up, I noticed that my use of the CPAN shell was wrong, and I reran o conf init and used “sudo make” and “sudo ./Build” instead of the defaults. Now I can run the CPAN shell directly without sudo, and it will sudo when installing. That (hopefully) means that my .cpan/build and .cpan/sources will have the correct ownerships: alex instead of root. ;)
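For future reference, the same change can also be made without going through the whole o conf init dialog again – something like this, if I remember the option names correctly:

cpan> o conf make_install_make_command 'sudo make'
cpan> o conf mbuild_install_build_command 'sudo ./Build'
cpan> o conf commit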

For Claudia’s account, we also need to back up the DVD stuff. There is no space for the DVD stuff on the system disk, so we have a disk called “Extern” for it.

  1. /Users/Claudia → /Volumes/Claudias Backup/Claudia Pyrobombus
  2. /Volumes/Extern/DVD → /Volumes/Claudias Backup/DVD

Here we go:

rsync --archive --copy-unsafe-links --verbose --delete --delete-excluded --exclude=/Library/Caches --exclude=/.Trash "/Users/claudia/" "/Volumes/Claudias Backup/Claudia Pyrobombus"

rsync --archive --copy-unsafe-links --verbose --delete --delete-excluded --exclude=/.Trash "/Volumes/Extern/DVD/" "/Volumes/Claudias Backup/DVD"



2006-07-11 Restoring an iPhoto Library from DVD

I burnt a copy of Claudia’s iPhoto library to a DVD, for backup purposes. I also looked at the files in the iPhoto Library folder and found that the movies (little 30s AVI files from my digital camera) were not to be found in the iPhoto Library. So I decided to try the following experiment: Renaming the existing library and restoring the library from DVD. After all, that’s my worst case scenario anyway.

I renamed the existing library, and when iPhoto started, I had to create a new one. That seems reasonable. When I inserted the DVD, I was able to browse it. Cool. But how to merge them, including originals?

After some googling “merging iphoto library” I decided I just wanted to drag the folder from the Finder into iPhoto. This worked, in a way: All the pictures got imported, but since I had about twice as many images after the import, and I spotted several duplicates, I’m assuming that it imported all the files from the Data, Originals, and Modified folders. No good! I deleted the new library folder again.

Then I tried the thing that seems like the most natural thing to do: I just copied the iPhoto Library folder from the DVD into my Pictures directory. Why merge a new library with an empty one, if I can just copy it? After restarting iPhoto it regenerates the thumbnail cache, which seems reasonable enough. The number of images is correct. And the AVI files are part of the library folder, too. Great!

Simpler than I thought. It’s not always the system’s fault. I wonder how I turned out to be so paranoid and irrational about software... 8-)

I also used rsync -av Pictures/iPhoto\ Library/ /Volumes/Extern/iPhoto\ Library/ to copy all the data to an external harddisk. Just to be safe. ;) I heard rumours about the resource fork being used, and wonder whether I could use this directory to restore the data... Even though I am lazy by nature, I’ve become paranoid, so I’ll try it.

I also wonder how useful it is to run rsync when copying data between local discs. It seems to me that this would be only beneficial if writing data takes significantly longer than reading data. That might be true. But I don’t feel like measuring it. Well, actually I do, because now I decided to rsync the entire Pictures folder. It contains the iPhoto Library and iChat Icons folders. So I created a new Pictures folder on the extern disk, moved the 1.9G iPhoto Library into it, and ran rsync -av Pictures/ /Volumes/Extern/Pictures/:

Pyrobombus:~ claudia$ rsync -av Pictures/ /Volumes/Extern/Pictures/
building file list ... done
iChat Icons -> /Library/Application Support/Apple/iChat Icons/
iPhoto Library/
iPhoto Library/.ipspot_update

sent 73505 bytes  received 60 bytes  147130.00 bytes/sec
total size is 2019142949  speedup is 27447.06

Nice! It was blazing fast. Thanks rsync!

Anyway, back to my test: I now renamed the iPhoto Library I had copied from DVD and copied the rsync copy from the external harddisk back into my Pictures folder. (Well, Claudia’s folder...) Restarted iPhoto. The number of images is correct. The thumbnail cache does not need to be rebuilt. It seems to work. Awesome! I picked an image I had rotated, and reverted it back to the original. It worked, too! Awesome!! :-D

You might be wondering why I’m doing all this if her pictures fit on a single DVD. Well, her MP3 collection does not... I have more backing up to do today!

Here’s what I’m using right now:

if [ -z "$1" ]; then
    echo Missing volume name
    echo Currently available:
    ls /Volumes/
    exit 1
fi

if [ ! -d "/Volumes/$1" ]; then
    echo Volume $1 does not exist
    echo Currently available:
    ls /Volumes/
    exit 1
fi

for d in Desktop Documents Movies Music Pictures; do
        rsync -av ~/$d/ "/Volumes/$1/$d/"
done




Gah, with three external disks for me and four external disks for Claudia (one of them dedicated to movie material from her camera), bringing the appropriate two from the office, running the backups (my laptop to two disks, my mini to two disks, Claudia’s mini and her movies to one disk) still takes about an hour. :(

AlexSchroeder 2006-09-29 07:54 UTC
