BTRFS corrupted file on progressive snapshots

Tags: en tutorial linux

case

I have a corrupted file across 10 progressive (incremental) snapshots on the remote end, all read-only and managed by btrbk. The original file on the sending disk is intact. I would like to tell BTRFS to use the intact file instead of the corrupted one, without breaking the chain of progressive backups.
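
For reference, this is roughly how the problem shows up (all paths and snapshot names below are placeholders, not the real ones): reading the copy inside a remote snapshot fails with an I/O error, the original on the source checksums fine, and the checksum failures are visible in the kernel log.

# on the backup target: reading the corrupted copy fails
md5sum /mnt/backup/<subvol-name>.20230101/path/to/file.ext    # Input/output error

# on the source: the original reads fine
md5sum /mnt/data/<subvol-name>/path/to/file.ext

# the checksum failures are logged by the kernel
dmesg | grep -i 'csum failed'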

trial 1

I tried creating a rw snapshot of the corrupted read-only snapshot, hoping that once the file was overwritten with an intact copy (having the same hash), the fix would propagate to all the snapshots. After copying the intact file into the rw snapshot, I checked it could be read in full via cp /new/file.ext /dev/null and it copied fine, but… a later md5sum of the 2 files, the one in the rw snapshot and the one in the ro snapshot, revealed that the corrupted one was still different, and indeed was still giving back an I/O error.
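
In hindsight this could not work: BTRFS is copy-on-write, so writing into the rw snapshot allocates new extents for the file, while the read-only snapshots keep referencing the old, corrupted extents; nothing gets deduplicated automatically just because the content hash matches. With placeholder paths, the attempt looked roughly like this:

# writable snapshot of the corrupted read-only snapshot
btrfs subvolume snapshot /mnt/backup/<subvol-name>.20230101 /mnt/backup/fixup

# overwrite the corrupted file with an intact copy brought over from the source
cp /path/to/intact/file.ext /mnt/backup/fixup/path/to/file.ext

# the new copy reads back in full...
cp /mnt/backup/fixup/path/to/file.ext /dev/null

# ...but the copy in the read-only snapshot is still corrupted
md5sum /mnt/backup/fixup/path/to/file.ext
md5sum /mnt/backup/<subvol-name>.20230101/path/to/file.ext    # Input/output error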

trial 2

I considered making all the snapshots rw, reflinking the previously copied file into all of them, and then flipping them back to ro. This might have worked, but I was worried it could also break the subsequent progressive backups… anyway, for the sake of science, I tried with this script:

for sub in <subvol-name>*; do
  btrfs property set -ts "$sub" ro false
done

but, luckily for me, I got this safeguard error back:

ERROR: cannot flip ro->rw with received_uuid set, use force option -f if you really want unset the read-only status. The value of received_uuid is used for incremental send, consider making a snapshot instead. Read more at btrfs-subvolume(8) and Subvolume flags.
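
For the record, had I forced it with -f, the whole plan would have looked roughly like the sketch below (placeholder paths again, and it assumes an intact copy already sits on the backup filesystem, since reflinks only work within a single filesystem). Clearing the read-only status of received subvolumes is exactly the kind of change that can break the following incremental send/receive, which is why I stopped here.

for sub in <subvol-name>*; do
  btrfs property set -f -ts "$sub" ro false                 # -f bypasses the safeguard above
  cp --reflink=always /path/to/intact/file.ext "$sub"/path/to/file.ext
  btrfs property set -ts "$sub" ro true                     # flip it back to read-only
done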

what I ended up doing

I ended up following the advice of luxagen on GitHub, in a btrbk issue:

It is possible to fix the file by changing the backup subvolume to read/write using btrfs property set, then fix the broken file, then switch it back again. While this may work (I know of people successfully doing this), I would not recommend it as it can completely break subsequent incremental send/receive without notice.

If I run a single non-incremental backup, that should force the good copy to be transferred to the target, correct?

Correct. This is the safest way, but of course it will also fill your target disk the most. A “middle way” would be to pinpoint the last backup containing no errors, and if the corresponding snapshot (uuid matching received_uuid) still exists on the source you can send-receive using this one as parent.
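
Translated into plain btrfs commands (btrbk normally drives this for you; the host name, mount points and snapshot names below are placeholders), the two options look like this:

# full, non-incremental send: safest, but takes the most space on the target
btrfs send /mnt/data/<subvol-name>.20230201 | ssh backuphost 'btrfs receive /mnt/backup/'

# "middle way": incremental send using the last known-good snapshot as parent;
# the parent must still exist on the source, and its uuid must match the
# received_uuid of the corresponding snapshot on the target
btrfs send -p /mnt/data/<subvol-name>.20230101 /mnt/data/<subvol-name>.20230201 \
    | ssh backuphost 'btrfs receive /mnt/backup/'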

Since the 2 files could be lost without problem, I opted for deleting them both and then sending a progressive snapshot without the offending files. Had the 2 files been important, I figured the safest solution, while staying in progressive territory, would have been to overwrite the file at the source, create a snapshot, and send it progressively, with the consequence of having that file occupy double the space until the older snapshots containing it get deleted.
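
In practice that boiled down to something like this (placeholder paths; the new snapshot and the incremental transfer were then done by btrbk on its next run, so the exact invocation depends on your configuration):

# drop the offending files on the source (repeat for the second file)...
rm /mnt/data/<subvol-name>/path/to/file.ext

# ...then let btrbk take a new snapshot and send it incrementally as usual
btrbk run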

