Well, a lot of people still find dump a useful tool. It's easy to use and it's fast. In fact, it's really fast. tar and just about every other backup tool access the filesystem through the directory structure. The on-disk layout of the filesystem does not match its directory structure, so the result is a lot of time spent seeking. dump opens the underlying device and reads the data in its native order.
I ran a primitive benchmark just now:
- sync the filesystem (an ext3 filesystem on an encrypted volume)
- flush the page and dentry caches (echo 3 > /proc/sys/vm/drop_caches)
- run the backup:
  - full backup with tar
  - incremental backup with tar
  - full backup with dump
  - incremental backup with dump
| Command | Time |
|---------|------|
| `tar cf - /home/perbu` | 37m 55s |
| `tar --after-date 2008-11-01 -cf - /home/perbu` | 3m 59s |
| `dump -f - /dev/vg0/perbu` | 13m 22s |
| `dump -f - -T 'Fri Nov 01 00:00:00 2008 +0100' /dev/vg0/perbu` | 2m 22s |
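For reference, the whole run boils down to something like the script below. This is a sketch, not the exact script I used: it assumes bash, it throws the archives away with a redirect to /dev/null (I only care about read speed here), and it has to run as root for the cache flushing to work.

```bash
#!/bin/bash
# Crude benchmark sketch - run as root; adjust paths, dates and the LV name.

drop_caches() {
    sync                                  # flush dirty data to disk
    echo 3 > /proc/sys/vm/drop_caches     # drop page, dentry and inode caches
}

drop_caches
time tar cf - /home/perbu > /dev/null                                           # full tar

drop_caches
time tar --after-date 2008-11-01 -cf - /home/perbu > /dev/null                  # incremental tar

drop_caches
time dump -f - /dev/vg0/perbu > /dev/null                                       # full dump

drop_caches
time dump -f - -T 'Fri Nov 01 00:00:00 2008 +0100' /dev/vg0/perbu > /dev/null   # incremental dump
```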
I would guess that on an SSD the results would be more or less the same, as the seek times are more or less zero. If someone gets me an SSD I'll make a post about it. :-)
However, there is a price for this performance. If your filesystem is very active, there might be changes that have not yet been flushed to disk, and that data might not make it into the backup. To be 100% sure everything is backed up, you might want to take a snapshot of the device and dump the snapshot instead. For a personal computer, however, the risk is negligible.
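Since the volume in my case sits on LVM (/dev/vg0/perbu), a snapshot-based dump could look roughly like this. It's only a sketch: the snapshot name, the 1G of copy-on-write space and the output file are made-up examples, not my actual setup.

```bash
# Create a point-in-time snapshot of the volume. 1G of CoW space is an
# arbitrary example; it only has to hold the changes made during the dump.
lvcreate --snapshot --size 1G --name perbu-snap /dev/vg0/perbu

# Dump the frozen snapshot instead of the live volume.
dump -0 -f /backup/perbu.dump /dev/vg0/perbu-snap

# Throw the snapshot away once the backup is done.
lvremove -f /dev/vg0/perbu-snap
```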