-mount (--one-file-system) like option for rdiff-backup?
Thu, 27 Jun 2002 18:05:41 -0700
>>>>> "DG" == dean gaudet <email@example.com>
>>>>> wrote the following on Thu, 27 Jun 2002 17:11:22 -0700 (PDT)
DG> --include-filelist goes some of the way -- but if you include a
DG> directory in --include-filelist then rdiff-backup will recurse.
Are you sure? The man page says:
Each line in a filelist is interpreted similarly to the
way extended shell patterns are, with a few exceptions:
1. Globbing patterns like *, **, ?, and [...] are not expanded.
2. Include patterns do not match files in a directory
that is included. So /usr/local in an include file
will not match /usr/local/doc.
3. Lines starting with "+ " are interpreted as include
directives, even if found in a filelist referenced
by --exclude-filelist. Similarly, lines starting
with "- " exclude files even if they are found
within an include filelist.
So that is not the intention at least...
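To make exception 2 above concrete, here is a hypothetical filelist (the paths and destination are made up for illustration) using the "+ " and "- " directives from the excerpt; since including /usr/local would not imply its contents, /usr/local/doc gets its own line:

```shell
# Build a hypothetical include filelist; per exception 2 above,
# "+ /usr/local" alone would not match /usr/local/doc.
cat > mylist.txt <<'EOF'
+ /usr/local
- /usr/local/tmp
/usr/local/doc
EOF
# The backup would then be invoked along these lines (not run here):
#   rdiff-backup --include-filelist mylist.txt --exclude '**' / /mnt/backup
```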
DG> also -- for some file selection problems you really need access
DG> to the complete file list on both sides of the link. it seems
DG> hard to move this outside of rdiff-backup.
DG> so what i was considering is some sort of "unexpected large
DG> delta" detection prior to transmitting across the link. the
DG> most simple form of this would be "this 200MB+ file/directory is
DG> not yet on the mirror, it won't be backed up until it's
DG> whitelisted somewhere".
Hmm, I can see how large differences in files could be detected, but
it would be harder to detect a difference of 200MB spread out over a
directory. Currently rdiff-backup just makes one pass, and it would
be more complicated to have it look ahead into directories on both
ends to see if the total difference would add up to 200MB, and if so
not to back up the directory in the first place. Then again, it might
not be so hard to just quit backing up the rest once 200MB of total
changes are found... It would be like giving users a quota not on
total disk space, but on how much they can change their directory.
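The "quit once the quota is hit" variant could be sketched roughly as follows. This is a hypothetical pre-pass, not actual rdiff-backup code; the 200MB threshold and the choice to count only files missing from the mirror are assumptions for illustration:

```python
import os

QUOTA = 200 * 1024 * 1024  # hypothetical 200MB change quota


def changed_bytes(source, mirror, quota=QUOTA):
    """Walk source once, summing the sizes of files not present in
    mirror; stop early as soon as the quota is exceeded."""
    total = 0
    for root, dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        for name in files:
            mirrored = os.path.join(mirror, rel, name)
            if not os.path.exists(mirrored):
                total += os.path.getsize(os.path.join(root, name))
                if total > quota:
                    # quota exceeded: caller could skip this directory
                    return total
    return total
```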
But even if this were painless to implement, it raises a lot of
interface issues. Do we measure changes on how many/how large new
files are? Or how many bytes get added to files? Once a directory's
change quota gets exceeded, do we stop processing the directory
altogether, or do we still allow file changes/deletions? Do we
specify the quota by directory, with different directories having
different quotas, etc.?
Just today I read an interesting article on interface design; the
author makes a strong case that there is a real cost to adding too many
preferences, so that's why I'm hesitant to add something like this.
And I'm sure people can think of more esoteric include/exclude
rules... Maybe if it could be brought under something general but
conceptually simple, like some kind of plugin system... I don't know.
DG> i don't really have an answer... but i keep thinking about
DG> multiple processes sharing data via pipes, so that we could
DG> replace some components with locally modified pieces to get new
Sorry, could you elaborate on this?