...
That is definitely a good question. To get the answer, bzip2 has a neat option: -t.
```shell
bzip2 -t archive.tar.bz2
```
This will tell you whether your bzipped archive is fine or not.
If it's fine, well, enjoy your day. Otherwise, read on, we'll recover it.
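bzip2 -t also reports corruption through its exit status, which is handy in scripts. A quick self-contained sketch (demo.txt is just a throwaway file created for the example):

```shell
echo "hello" > demo.txt
bzip2 -k demo.txt                # -k keeps the original; produces demo.txt.bz2
if bzip2 -t demo.txt.bz2; then
    echo "archive is fine"
else
    echo "archive is corrupted -- time to recover it"
fi
```

On a good archive, bzip2 -t prints nothing and exits 0; on a damaged one it exits non-zero.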
...
cd into recovery.
Here, we'll use the magic bzip2recover command. Hey, but what's that bzip2recover command? Hmmm.
Bzip2-compressed files are divided into blocks (each block being 100k, 200k, ..., 900k bytes big,
depending on what compression option you used - the default is 900k).
What bzip2recover does is split a bzip2 archive into many smaller
bzip2 archives (one per block, actually). That's why it generates soooo many small files.
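The block size is picked at compression time by the digit flag, -1 (100k) through -9 (900k). A small sketch (sample.bin is a throwaway file of random - hence barely compressible - data):

```shell
dd if=/dev/urandom of=sample.bin bs=1024 count=300 2>/dev/null
bzip2 -1 -k sample.bin                 # 100 kB blocks: less data at risk per block
mv sample.bin.bz2 sample_small.bz2
bzip2 -9 -k sample.bin                 # 900 kB blocks: the default
mv sample.bin.bz2 sample_big.bz2
bzip2 -tv sample_small.bz2 sample_big.bz2   # both should test "ok"
```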
So, here we go:
```shell
bzip2recover archive.tar.bz2
```
...
No, I'm not. I'm not the one with a corrupted archive <evil laugh>.
Seriously, now that we have divided the archive into smaller parts, we'll be able to "isolate" the corrupted parts.
To do so, we'll use bzip2 -t, as we did before, but this time on every small archive file.
Here we go:
```shell
bzip2 -tv rec*.bz2 > testoutput.log 2>&1
```
...
Ok, now, we will search for any corrupted small archive through the log file.
```shell
grep '[^ok]$' testoutput.log
```
(this actually parses the output of bzip2 -t to extract only the files which don't end with a plain "ok"
- guess what, corrupted files don't produce that kind of friendly output)
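Here's what that filtering looks like on a hand-made log (the messages are illustrative - the exact bzip2 wording can vary between versions; quoting the pattern keeps the shell from expanding it):

```shell
# Fake testoutput.log -- not verbatim bzip2 output, just its general shape
cat > testoutput.log <<'EOF'
  rec00001archive.tar.bz2: ok
  rec00002archive.tar.bz2: data integrity (CRC) error in data
  rec00003archive.tar.bz2: ok
EOF
grep '[^ok]$' testoutput.log   # keeps lines whose last character is neither 'o' nor 'k'
```

A stricter variant is grep -v ': ok$' testoutput.log, which drops exactly the lines ending in ': ok'.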
Ouch, I've got corrupted blocks. What should I do with them?
...
Ok, cd into recovery1.
Here, we have the beginning of a tar file: nothing is corrupted, but the tar file is not complete.
Right. That makes things easy.
We will just bunzip all the small archives into one recovery1.tar file:
```shell
bzip2 -dc rec*.bz2 > recovery1.tar
```
Let's have a look at the resulting .tar file:
```shell
tar tf recovery1.tar
```
Wow! We're getting a list of files, and an error. Not perfect, but better than nothing!
We have here all the files that were in the original archive.tar.bz2, up to the first corrupted block.
We're done for recovery1 !
...
recovery2!! cd ../recovery2
Hmmmm, trying the same method as above fails. Why that? Because tar sux. Yes, it does.
It does not manage to find a correct header right at the start of the file, and so it fails.
Creepy, huh? But we are smarter than tar, and there's not much that a little Perl magic can't solve.
First, let's decompress our small bzip2 archives into a "failing" tar:
```shell
bzip2 -dc rec*.bz2 > recovery2_failing.tar
```
As I told you right before, a tar tf recovery2_failing.tar would.... fail.
What we need to fix it is to have our recovery2_failing.tar
start at the beginning of a clean tar header block.
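Some background on what a "clean header" means (assuming a POSIX/GNU-style tar, which stamps the magic string "ustar" 257 bytes into every 512-byte header block):

```shell
echo "hello" > member.txt
tar cf demo.tar member.txt
dd if=demo.tar bs=1 skip=257 count=5 2>/dev/null   # prints: ustar
echo
```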
A simple but efficient Perl script will help us make our way out: findtarheader.pl
```perl
#!/usr/bin/perl -w
# Reconstructed sketch: scan a (possibly truncated) tar file for header
# blocks via the "ustar" magic that POSIX/GNU tar stores at byte 257.
die "No tar file given on command line" if $#ARGV != 0;
my $tarfile = $ARGV[0];
open(IN, $tarfile) or die "Could not open `$tarfile': $!";
binmode(IN);
my $data = do { local $/; <IN> };   # slurp the whole file
close(IN) or warn "Error closing `$tarfile': $!";
my $pos = -1;
while (($pos = index($data, "ustar", $pos + 1)) >= 0) {
    print $pos - 257 + 1, "\n" if $pos >= 257;   # 1-based offset, for tail -c +N
}
```
Yeah, copy/paste it and save it as findtarheader.pl. Then chmod +x it.
Now, to find the first clean tar header on recovery2_failing.tar, do the following:
```shell
./findtarheader.pl recovery2_failing.tar
```
This will generate quite a bunch of output. The only interesting line here is the first one. You can then do:
```shell
./findtarheader.pl recovery2_failing.tar | head -n 1
```
...
To do so, do the following :
```shell
tail -c +17185 recovery2_failing.tar > recovery2_working.tar
```
This command copies everything from recovery2_failing.tar, starting at byte 17185, into recovery2_working.tar.
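In case the +N form of tail looks odd: it means "start output at byte N, counting from 1". A tiny sketch:

```shell
printf 'ABCDEFGH' > bytes.bin
tail -c +4 bytes.bin     # prints DEFGH: byte 4 through the end
echo
```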
Great, now we have a "recovery2_working.tar" tar file, which WORKS !
```shell
tar tf recovery2_working.tar
```
...
Addendum: Expected Minimal Data Loss
Best case (minimal loss): no file has its tar header inside a corrupted block, so only the contents of the corrupted blocks themselves are lost.
Worst case (maximal loss): each corrupted block contains the header of a big file. The whole block is lost, plus that entire file (hypothetically, an unlimited amount of data can be lost - the file behind a missing header could be a 100GB file...).
...
Block Size | Minimal loss for N corrupted blocks
---|---
100 kB | 100 x N kB
200 kB | 200 x N kB
300 kB | 300 x N kB
400 kB | 400 x N kB
500 kB | 500 x N kB
600 kB | 600 x N kB
700 kB | 700 x N kB
800 kB | 800 x N kB
900 kB | 900 x N kB
Please note that, statistically, with a block size of B kB and a large number N of corrupted blocks, if the average
file size is M kB, the expected data loss is around one block size plus half an average file size per corrupted block:
```
Estimated average data loss from corruption: (B + (M+1)/2) x N kB
```
On a tar file in which the average file size is 200 kB, bzipped with 900 kB blocks and 10 faulty blocks, the expected data
loss is around (900 + 100.5) x 10 = 10005 kB, roughly 10 MB.
Same thing with 100 kB blocks: (100 + 100.5) x 10 = 2005 kB, roughly 2 MB.
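To estimate your own case, the formula drops straight into awk (used here as a plain calculator; B, M and N are the symbols from the formula above):

```shell
awk 'BEGIN { B=900; M=200; N=10; printf "%d kB\n", (B + (M+1)/2) * N }'   # 900 kB blocks
awk 'BEGIN { B=100; M=200; N=10; printf "%d kB\n", (B + (M+1)/2) * N }'   # 100 kB blocks
```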
This should be considered when deciding how to build a bzip2 archive: the smaller the block size, the faster the compression, the worse the compression ratio, and the less data is lost in case of corruption.