Posted: Tue Oct 04, 2011 8:45 pm
I have a feature request: considering FBackup already keeps a file index (and presumably one containing checksums), would it be possible to match large files based on their checksum (if a file is new and another one of identical size appears on the deletion list)?
i.e.:
- sync job starts
- large file appears 'deleted' at the source location
- another large file appears somewhere else, and has *exactly* the same byte size (which can of course be checked quickly)
- generate a checksum for the 'new' source file and compare it to the stored checksum of the 'deleted' target file
- if the files turn out to be identical, just change the file path on the target location instead of copying mega- or gigabytes
Right now, if I change a directory name even slightly, FBackup deletes and re-syncs everything. That takes quite a long time and is unnecessary, since the files themselves haven't changed.
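To make the idea concrete, here's a rough sketch of the matching logic in Python. Everything here is made up for illustration (the index format, the function names, the choice of SHA-256); it's not FBackup code, just the shape of what I'm imagining:

[code]
import hashlib
import os
import shutil

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so even multi-gigabyte files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_renames(deleted, added, index, target_root):
    """deleted: relative paths that vanished from the source since last run.
    added: (absolute_path, relative_path) pairs that are new this run.
    index: stored {relative_path: (size, checksum)} from the previous run.
    Renames matching files on the target instead of re-copying them;
    returns the added files that still genuinely need copying."""
    # Group deleted entries by size: an exact size match is a cheap pre-filter.
    by_size = {}
    for rel in deleted:
        size, checksum = index[rel]
        by_size.setdefault(size, []).append((rel, checksum))

    still_to_copy = []
    for new_path, new_rel in added:
        size = os.path.getsize(new_path)
        candidates = by_size.get(size, [])
        if candidates:
            # Only pay for the expensive checksum when the size matched.
            digest = sha256_of(new_path)
            match = next((c for c in candidates if c[1] == digest), None)
            if match:
                old_rel, _ = match
                # Identical content: just move the file on the target.
                shutil.move(os.path.join(target_root, old_rel),
                            os.path.join(target_root, new_rel))
                candidates.remove(match)
                continue
        still_to_copy.append((new_path, new_rel))
    return still_to_copy
[/code]

The point of the size pre-filter is that the expensive step (hashing the new file) only runs when there's actually a deletion candidate of the exact same byte count, so the common case stays as fast as it is now.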