Restic: tree e5457a72: file "XX": metadata size (8770356) and sum of blob sizes (8770808) do not match

Created on 22 May 2018  ·  5 Comments  ·  Source: restic/restic

Hi,

Just updated to restic 0.9 and found that restic now reports errors for my existing repo (which was fine with 0.8.3 and has not been changed since the upgrade):

error for tree e5457a72:
  tree e5457a72: file "mail.err": metadata size (1085858) and sum of blob sizes (1085966) do not match
  tree e5457a72: file "mail.info": metadata size (8770356) and sum of blob sizes (8770808) do not match
  tree e5457a72: file "mail.log": metadata size (8770356) and sum of blob sizes (8770808) do not match
  tree e5457a72: file "mail.warn": metadata size (1091226) and sum of blob sizes (1091334) do not match

The files were probably modified while the backup was running.

Output of restic version

restic 0.9.0 compiled with go1.10.2 on linux/amd64

How did you run restic exactly?

restic check

What backend/server/service did you use to store the repository?

local directory and rest-server

Expected behavior

I understand that restic can't guarantee the backup is consistent at the application level (for example, that several interdependent files have matching content).

But I'm pretty sure that restic should keep its own repository consistent in such a case (repository metadata should match repository data). If something is appended to a file after stat is called but before the whole file is read, restic should either read only up to the expected file size, or update the metadata with the number of bytes that were actually read.

PS. This is probably fixed in 0.9, so it won't happen for new snapshots.
But unfortunately restic rebuild-index doesn't repair the existing errors.

0.9.0 bug


All 5 comments

I was about to open exactly the same issue as you @dionorgua, you beat me to it by 11 min. :+1:

I have the same errors in my repo (see below), and share the feeling that these should not be errors; restic should handle them gracefully. There's nothing backup software can do about files that change under it, besides warning that it happened during the backup process. But later on, when check is run, should these really be errors, or even be reported again?

Here are the results of a v0.9.0 check run on a repo for which v0.8.3 doesn't report any errors:

check snapshots, trees and blobs
error for tree c1c7286d:
  tree c1c7286d: file "panacea.dat": metadata size (5975885) and sum of blob sizes (5975910) do not match
error for tree 5908dec5:
  tree 5908dec5: file "panacea.dat": metadata size (5425341) and sum of blob sizes (5425366) do not match
Fatal: repository contains errors

I agree, this should either be a warning or not shown at all. There's nothing users can (or need to) do.

It's a bug in the old archiver code, which wrote the wrong size to the repo when the file was appended to while restic was reading it. The new archiver doesn't do that, and all the other functions will work just fine (they just use the correct size, the sum of the file chunks).
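The fix described in this comment amounts to recording the sum of the chunk sizes actually written, rather than the stat result. A minimal sketch of that idea (the `nodeSize` helper and the chunk values are hypothetical, not restic's actual chunker code):

```go
package main

import "fmt"

// nodeSize returns the size to store in the tree metadata: the sum of the
// blob (chunk) lengths actually written. This always matches the archived
// data, even if the file grew while it was being read.
// (Illustrative sketch only; not restic's implementation.)
func nodeSize(chunks [][]byte) uint64 {
	var sum uint64
	for _, c := range chunks {
		sum += uint64(len(c))
	}
	return sum
}

func main() {
	// Hypothetical chunks produced while archiving a file that grew from
	// a stat-reported 10 bytes to 14 bytes during the read.
	chunks := [][]byte{[]byte("hello "), []byte("world"), []byte("!!!")}
	statSize := uint64(10)

	stored := nodeSize(chunks)
	fmt.Println(stored)             // 14: sum of blob sizes
	fmt.Println(stored == statSize) // false: the old behavior stored statSize
}
```

Storing the sum of the blob sizes means check never sees a mismatch, because the metadata is derived from the same data the blobs contain.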

How does one deal with this issue? It's causing my weekly automatic checks to fail on several of my backup repos.

Here is the output of my restic check -- I'm running it via the restic Docker image.

...
Digest: sha256:9c851e0ba8a9c20ef853ee507af14c4d87c33661c25136262e97506a1cdc7a57
Status: Image is up to date for restic/restic:latest
ID        Date                 Host              Tags        Directory
----------------------------------------------------------------------
8eb0175e  2018-02-28 21:01:38  internal-cluster              /fisheye
a7848682  2018-03-31 09:28:09  internal-cluster              /fisheye
8ad27273  2018-04-30 09:28:09  internal-cluster              /fisheye
97d2e914  2018-05-31 09:28:12  internal-cluster              /fisheye
96ba1cc7  2018-06-30 09:28:13  internal-cluster              /fisheye
23ef9a4b  2018-07-08 09:28:11  internal-cluster              /fisheye
76f8e70a  2018-07-09 09:28:12  internal-cluster              /fisheye
74d46da4  2018-07-10 09:28:14  internal-cluster              /fisheye
a893de2c  2018-07-11 09:28:12  internal-cluster              /fisheye
7dbeb6c0  2018-07-12 09:28:13  internal-cluster              /fisheye
8df2f318  2018-07-13 09:28:11  internal-cluster              /fisheye
e7321bf1  2018-07-14 09:28:13  internal-cluster              /fisheye
----------------------------------------------------------------------
12 snapshots
+ restic check
+ sudo -E docker run --rm -e AWS_ACCESS_KEY_ID=**** -e AWS_SECRET_ACCESS_KEY=**** -e RESTIC_PASSWORD=**** -v /mnt/efs/fisheye:/fisheye:ro -h internal-cluster --user root restic/restic -r s3:s3.amazonaws.com/redacted/restic/fisheye check
using temporary cache in /tmp/restic-check-cache-069761908
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree d93db471:
  tree d93db471: file "atlassian-fisheye-2018-07-13.log": metadata size (52139444) and sum of blob sizes (52165018) do not match
error for tree 8d1b1f5f:
  tree 8d1b1f5f: file "atlassian-fisheye-2018-04-30.log": metadata size (53418588) and sum of blob sizes (53426968) do not match
Fatal: repository contains errors

That is a good question! All my repos except one are in error now... I'm thinking about disabling the check stage altogether, but then again, it was and will be a good way to detect possible future regressions. Hard to tell what is the best way going forward...

@fd0, this is quite a serious bug; is there any workaround to make the repos pass check again?

Sorry for that, I've disabled the check in #1887. You can cherry-pick the commit if you like.

