FAQ: Frequently Asked Questions

1  I do not want to compress any file
2  Where is the GUI?
3  I do not need that lateLinks stuff
4  Making a remote Backup with SSH (no NFS)
5  I like this blocked file stuff and want to use it for all files bigger than 50 MB
6  How do I make a full backup of my GNU/Linux machine?
7  How do I install storeBackup on a (Synology) NAS?
8  How to run storeBackup on Raspberry Pi
9  Can storeBackup run out of hard links?

*  *  *  *  *

FAQ 1  I do not want to compress any file

I do not want to compress any file in the backup. How can I configure this?

When configuring storeBackup.pl, set the option exceptSuffix to '.*', which is the pattern for ``match everything''.
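In the configuration file, this is a single line ('.*' is a regular expression that matches every suffix):

exceptSuffix = '.*'

With this setting, no file in the backup is compressed.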

*  *  *  *  *
FAQ 2  Where is the GUI?

Why does storeBackup not provide a GUI (graphical user interface)?

There are several reasons why storeBackup is command line driven:

*  *  *  *  *
FAQ 3  I do not need that lateLinks stuff

I only want to make my backup to an external USB drive and do not want to use this new option ``lateLinks''. How can I do this?

You do not have to concern yourself with this ``highly sophisticated option'' (or with storeBackupUpdateBackup.pl) if you do not use option lateLinks. Have a look at Example 1.

*  *  *  *  *
FAQ 4  Making a remote Backup with SSH (no NFS)

Under GNU/Linux, it is also possible to back up data over an SSH connection. This has the advantage that no separate network file system has to be configured (as is the case for NFS).

In order to mount the remote target directory over SSH, the sshfs program has to be used. It is shipped with most distributions, but it can also be obtained from http://fuse.sourceforge.net/sshfs.html.

The command to mount the remote directory /var/backup on the computer chronos as user ``backup'' to the target directory /mnt/target is:

# sshfs backup@chronos:/var/backup /mnt/target

Now storeBackup.pl has to be configured to place the backup in /mnt/target. After the backup, the target directory can be unmounted with fusermount -u /mnt/target.

SPEEDING UP A REMOTE BACKUP OVER SSHFS

sshfs uses an individual network request for each hard link that has to be set and for each file that has to be deleted. Since the latency of a network operation is generally several orders of magnitude higher than that of a local operation, backing up to a remote system can be very slow even if the network bandwidth is as high as that of a local hard disk.

For this reason, it is strongly recommended to use the lateLinks and doNotDelete options for remote backups. Using them allows the hard linking and deletion operations to be performed entirely on the remote system, which generally speeds up backups by a factor of 10 to 75, depending on the amount of changed data and the latency of the network.

The general procedure is as follows:

  1. Mount remote system:

    # sshfs backup@chronos:/var/backup /mnt/target
    

  2. Do the backup:

    # storeBackup.pl --backupDir /mnt/target --lateLinks \
         --doNotDelete [other options]
    

  3. Unmount the remote system:

    # fusermount -u /mnt/target
    

  4. Set hardlinks on the remote system:

    # ssh -T -l backup chronos \
        'storeBackupUpdateBackup.pl --backupDir /var/backup'
    

  5. Delete old backups on the remote system:

    # ssh -T -l backup chronos \
        "storeBackupDel.pl --backupDir /var/backup [other options]"
    

Note that this requires that storeBackup is also installed on the remote system.
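
The whole sequence can be wrapped in a small shell script. The following is only a sketch under the assumptions of the example above (host chronos, user backup, remote backup directory /var/backup, local mount point /mnt/target); add your other storeBackup.pl and storeBackupDel.pl options where indicated:

#! /bin/sh
# remote backup over sshfs with lateLinks, following steps 1-5 above
set -e    # abort if one of the steps fails

# 1. mount the remote backup directory
sshfs backup@chronos:/var/backup /mnt/target

# 2. run the backup; hard linking and deletion are postponed
storeBackup.pl --backupDir /mnt/target --lateLinks --doNotDelete  # [other options]

# 3. unmount the remote directory
fusermount -u /mnt/target

# 4. set the hard links on the remote system
ssh -T -l backup chronos 'storeBackupUpdateBackup.pl --backupDir /var/backup'

# 5. delete old backups on the remote system
ssh -T -l backup chronos 'storeBackupDel.pl --backupDir /var/backup'  # [other options]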

*  *  *  *  *
FAQ 5  I like this blocked file stuff and want to use it for all files bigger than 50 MB

To achieve the desired result, simply set:

checkBlocksSuffix = .*
checkBlocksMinSize = 50M

This configuration will use blocked files for all files with a size of 50 megabytes or more. If you want a size other than 50 megabytes, e.g. 800 kilobytes, set the value of checkBlocksMinSize to 800k.

Explanation for the experts: storeBackup.pl will generate an internal rule from the configuration above (50M = 50 * 1024 * 1024 = 52428800 bytes):

'$file =~ /.*$/' and '$size >= 52428800'

You can also directly use the following rule:

'$size >= &::SIZE("50M")'

to get the same result.

*  *  *  *  *
FAQ 6  How do I make a full backup of my GNU/Linux machine?

First of all, generate a configuration file:

storeBackup.pl -g completeLinux.conf

Open the configuration file with an editor of your choice and edit the following options:

sourceDir = /

Set sourceDir to /, so the whole file system will be saved.

backupDir = /media/drive

Here, I assume the hard disk attached for the backup is mounted at /media/drive. You have to change this if it is mounted elsewhere. Naturally, you can also save your backups e.g. on an NFS mount. If you do so, you can find an explanation of how to back up to a remote file system via NFS in the section about configuring NFS; in that case, you should also read the section about using option lateLinks. Next, configure the directories you do not want to back up. We have to include backupDir in this list to avoid recursion (with media excluded, /media/drive is covered).

exceptDirs = tmp var/tmp proc sys media

If there are other directories you do not want to save (e.g., NFS-mounted home directories), include them in this list.

Now let's say you also want to exclude the contents of all other directories called tmp or temp (upper or lower case) anywhere in the file system. So add:

exceptRule = '$file =~ m#/te?mp/#i'

To avoid backing up cached files, also exclude all directories with ``cache'' in their names (upper or lower case). Change the line above to:

exceptRule = '$file =~ m#/te?mp/#i' or '$file =~ m#cache.*/#i'

But now there is the risk that some important files are not saved because they are stored in a directory called /tmp/ or /temp/, or in a directory with e.g. Cache in its name.
Therefore, write all files excluded by rule exceptRule to a file, so you can check their names after the backup:

writeExcludeLog = yes

In every backup, there will be a file called .storeBackup.notSaved.bz2 listing all these files.
To copy all file types, especially block and character devices in /dev, set:

cpIsGnu = yes

For a full backup, you also have to save the boot sector. The following script assumes your system boots from drive sda; you may need to change this to match your system. Create the directory /backup and place the following script (pre.sh) in it:

#! /bin/sh

# keep the previous copy of the master boot record
rm -f /backup/MBR.prior
[ -f /backup/MBR.copy ] && mv /backup/MBR.copy /backup/MBR.prior
# save the boot loader (the first 512 bytes of /dev/sda)
dd if=/dev/sda of=/backup/MBR.copy bs=512 count=1 > /dev/null 2>&1

# copy back with:
# dd if=/backup/MBR.copy of=/dev/sda bs=512 count=1

Set the permissions:

chmod 755 /backup/pre.sh

To call the script, set precommand in the configuration file:

precommand = /backup/pre.sh

To see that something is happening during the backup, set:

progressReport = 2000
printDepth = yes

Look at the keep* options, set appropriate values, and set logFile to something useful for you.

Also set the other options to values that fit your needs.

As always, the first backup will take some time because all the MD5 sums have to be calculated and, especially, the files have to be compressed. Subsequent backups will be much faster.
After making your backup, you should check which files were not saved because of option exceptRule.
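
Putting it all together, the relevant part of completeLinux.conf from this answer looks like this (the keep* options and logFile are left out because you have to choose values that fit your situation):

sourceDir = /
backupDir = /media/drive
exceptDirs = tmp var/tmp proc sys media
exceptRule = '$file =~ m#/te?mp/#i' or '$file =~ m#cache.*/#i'
writeExcludeLog = yes
cpIsGnu = yes
precommand = /backup/pre.sh
progressReport = 2000
printDepth = yes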

*  *  *  *  *
FAQ 7  How do I install storeBackup on a (Synology) NAS?

The following procedure leads to success:

Preparation:

Installation:

Now storeBackup should run on your NAS box.

*  *  *  *  *
FAQ 8  How to run storeBackup on Raspberry Pi

I have received reports that storeBackup runs on the Raspberry Pi (raspbmc and Raspbian GNU/Linux 7). You should take care of the following:

  • Set noCompress to 1 (more than one compression job does not make sense on a single slow CPU).
  • Use option saveRAM and make sure there is enough space in your temporary directory. It seems best to create a dedicated directory for temporary files and to point storeBackup.pl to it via the option tmpdir. At least on raspbmc, the temporary space (/tmp) seems to be so small that even without the option saveRAM (which makes storeBackup store some hash tables on disk) storeBackup.pl crashes with a very strange error message.

It is important to avoid swapping, because on the Raspberry Pi swapping goes to something very slow (e.g. the SD card). Naturally, whether you swap depends on how much RAM your Raspberry Pi has and on the data you are saving.
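
A minimal sketch of these settings in the configuration file; /home/pi/sbtmp is only a hypothetical example for the dedicated temporary directory (create it before the first backup):

noCompress = 1
saveRAM = yes
tmpdir = /home/pi/sbtmp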

As always, the first backup is very slow, but the following ones are (pretty) fast, naturally limited by the slow CPU (compared to those used in mainstream PCs) and the slow media being written to.

*  *  *  *  *
FAQ 9  Can storeBackup run out of hard links?

I remember from playing with deduplication (``anti dupe'') tools that they normally did not hard link zero-byte or ``small'' files.

Do we have problems with the ``hard link count'', i.e. too many hard links? I did a test on btrfs (rsyncing an ext4 storeBackup series) and it aborted with ``too many hardlinks''. I know I should not use rsync for this; had I used storeBackup to back up to a file system with a lower maximum hard link count, storeBackup would have started a new file and restarted hard linking against it.

Each file needs at least one inode, so hard linking zero-byte files saves, at the very least, lots of inodes. This may have no effect if the file system (ext) has enough inodes statically reserved; it really saves some memory if the file system allocates inodes dynamically (reiserfs, btrfs); and it can prevent you from running out of inodes on ext file systems.

Setting a new hard link should also be faster than creating a new inode.

For storeBackup, handling of hard links is no problem. It tries to create a hard link, and if this is not successful, it creates a new file and hard links against that one in the future. Running out of hard links simply means creating one new identical file. Because of this simple and stupid algorithm, the number of hard links a file system supports is nothing storeBackup has to care about. This behavior is different from typical Unix tools like cp, tar or rsync: if you copy a directory tree with lots of hard links to a file system which does not support enough hard links, you will get errors (see also the explanation of the program linkToDirs.pl delivered with storeBackup, which bypasses this limitation in the way described above).
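
The idea behind this fallback can be shown as a minimal shell sketch (an illustration only, not storeBackup's actual Perl code; existing_copy and file_in_backup are hypothetical names):

# try to hard link against an existing identical file; if the
# file system refuses (e.g. its link count limit is reached),
# store a new identical copy and link against that one in the future
if ! ln existing_copy file_in_backup 2>/dev/null
then
    cp existing_copy file_in_backup
fi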

In case of btrfs, I would not use it for backups at the moment because I think it is not stable enough (2014). But anyway, I made some tests, and its behavior regarding hard links seems to be very different from other file systems: it has a very limited number of hard links in one directory. I ran out of hard links with all files in one directory, but was able to create additional hard links to the same inode from another directory. Anyway, because of storeBackup's simple and stupid algorithm, it can handle btrfs efficiently as well.
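
If you want to see the limit on your own file system, a throwaway test like the following creates hard links until the file system refuses; run it in an empty scratch directory, and be aware that it may create many thousands of directory entries (stat -c %h is the GNU coreutils way to print a file's link count):

#! /bin/sh
# create hard links to one file until the file system refuses
touch original
i=1
while ln original "link_$i" 2>/dev/null
do
    i=$((i + 1))
done
echo "the file system gave up at $(stat -c %h original) links"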

Finally, I think there is no reason not to hard link zero-byte files.

Heinz-Josef Claes 2014-04-20