Quote:
Originally Posted by pludi
Which is, usually, something you do not want to happen. Personally, I'd rather have a system that's a bit slower if a disk goes bad, but keeps on working, than losing files and not being able to use the system until I've got a replacement disk. But that might just be me.
I agree that the requirements look a bit weird. The important data are backed up on an external NAS, so it would (theoretically) "be acceptable to lose the data on the failed drive". But if one can afford to lose the data on the drive, why put it there in the first place?
Quote:
Originally Posted by Loic Domaigne
However, I failed to restore the file system if /dev/vda6 gets damaged. I used an alternate superblock for fsck (one located on vdb2 or vdb3), but to no avail.[...]
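For reference, what I tried was along these lines (just a sketch; /dev/vg0/lv_data stands in for the actual LV name, and the backup superblock location depends on the filesystem's block size, 32768 being the usual first backup for 4k blocks):
Code:
# list the backup superblock locations without creating anything (dry run)
mke2fs -n /dev/vg0/lv_data

# repair the filesystem using one of the reported backup superblocks
fsck.ext3 -b 32768 /dev/vg0/lv_data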
Quote:
Originally Posted by pludi
That might be because of the striping done by LVM. The volume manager doesn't fill up the first disk, then the second, and so on, but first fills the first stripe (by default 4 MB) on the first disk, the first stripe on the second disk, and so on. With that scheme it might very well be that all superblocks end up on the same disk.
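As a side note on the striping: how a logical volume is actually laid out across the physical volumes can be checked directly, and striping has to be requested explicitly at creation time. A quick sketch (vg0 and the LV names are only placeholders for my test setup):
Code:
# show how each LV is laid out: segment type (linear/striped), stripe count, underlying PVs
lvs --segments -o +devices

# create an explicitly striped LV across 2 PVs (-I gives the stripe size in KB)
lvcreate -L 10G -i 2 -I 64 -n lv_striped vg0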
Actually, the restore operation succeeded. What failed: I lost the filenames and the directory contents; the files were restored into lost+found. The reason for this behaviour is quite simple: directory contents are stored in ordinary data blocks, and if such a block is gone (as was the case in my scenario), that information is lost forever. One more reason for RAID+LVM.
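This is easy to see with debugfs: a directory's entries live in plain data blocks like any file's data, and when such a block is unreadable, fsck can only reattach the orphaned inodes under lost+found, named by inode number. A small illustration (the paths are just examples from my test):
Code:
# show which data blocks hold the entries of a given directory
debugfs -R 'blocks /some/dir' /dev/vg0/lv_data

# after the repair, the recovered files sit in lost+found,
# named after their inode numbers (e.g. #12345), not their original names
ls /mnt/restored/lost+found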
Thank you everyone, and especially pludi. I gathered enough information to make a sound proposal for the storage system!