Linux Storage system: looking for advices
Post 302385275 by Loic Domaigne on Thursday 7th of January 2010 04:15:48 PM
Good Evening,

Quote:
Originally Posted by pludi
No, it's not possible with LVM alone. LVM is designed to simplify the management of multiple, different devices by grouping them together. A bonus is a slight speed improvement. If you lose one drive with LVM, the data on it is gone for good too, but it's easy to extend the size.
Since I am not very fluent in LVM, I set up a virtual KVM/Qemu guest with Linux in order to play with LVM and investigate possible failure scenarios. Enclosed are the results of my experiments.

I have the following setup: a volume group containing the physical volumes /dev/vda6, /dev/vdb2 and /dev/vdb3. There is only one logical volume, which spans the entire volume group. ext3 is used as the filesystem.
Code:
+--------------------------------------------------+
|                    ext3                          |
+--------------------------------------------------+
+--------------------------------------------------+
|                    mylv                          | logical volume = 100%VG
+--------------------------------------------------+
+--------------------------------------------------+
|                    myvg                          | volume group = vda6, vdb2, vdb3 
+--------------------------------------------------+
+-------------+     +-------------+    +-----------+
| /dev/vda6   |     | /dev/vdb2   |    | /dev/vdb3 |
+-------------+     +-------------+    +-----------+
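For completeness, the test layout can be created roughly like this inside the guest (a sketch only; device and volume names are the ones from the diagram, and options may vary slightly between distributions):

Code:
pvcreate /dev/vda6 /dev/vdb2 /dev/vdb3       # initialise the physical volumes
vgcreate myvg /dev/vda6 /dev/vdb2 /dev/vdb3  # group them into one volume group
lvcreate -n mylv -l 100%VG myvg              # one LV spanning the whole VG
mkfs.ext3 /dev/myvg/mylv                     # create the ext3 filesystem
mount /dev/myvg/mylv /mnt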

It is possible to save the current volume group metadata using vgcfgbackup.
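For reference, something like the following (by default LVM writes the backup to /etc/lvm/backup/<vgname>; myvg is the volume group from the diagram above, and the explicit file name is just an example):

Code:
vgcfgbackup myvg                       # metadata backup goes to /etc/lvm/backup/myvg
vgcfgbackup -f /root/myvg.backup myvg  # or write it to a file of your choice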
I then failed /dev/vdb3, /dev/vdb2 and /dev/vda6 in turn (zeroing the partition with dd). For vdb2 and vdb3, I could restore the ext3 filesystem as follows (a sketch of the commands is given below):
- recreate the physical volume on the failed partition, giving it its original UUID;
- restore the volume group metadata using vgcfgrestore;
- repair the ext3 filesystem using fsck.
As expected, only the files stored on the failed partition were missing after the restore operation.
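Roughly, for one failed PV (vdb3) the commands look like this; the UUID placeholder must be replaced with the original PV UUID recorded in the metadata backup, and the paths and names are those of my test setup:

Code:
# recreate the PV with its original UUID, taken from /etc/lvm/backup/myvg
pvcreate --uuid "<original-uuid-of-vdb3>" --restorefile /etc/lvm/backup/myvg /dev/vdb3

# restore the volume group metadata and reactivate the VG
vgcfgrestore -f /etc/lvm/backup/myvg myvg
vgchange -ay myvg

# repair the filesystem on the logical volume
fsck.ext3 -y /dev/myvg/mylv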

However, I failed to restore the filesystem when /dev/vda6 was damaged. I pointed fsck at an alternate superblock (one located on vdb2 or vdb3), but to no avail. The information about the data stored on vdb2 and vdb3 was lost (the files can still be found in lost+found, but their names are gone). So far I haven't managed to recover the filesystem when the first disk (i.e. the one holding the primary superblock) fails. I still need to investigate what's wrong.
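The alternate-superblock attempt was along these lines (32768 is just the usual first backup superblock for a 4k block size; the actual locations depend on the filesystem geometry):

Code:
# list the backup superblock locations (only works while the primary superblock is readable)
dumpe2fs /dev/myvg/mylv | grep -i superblock

# retry fsck using a backup superblock and the matching block size
fsck.ext3 -b 32768 -B 4096 /dev/myvg/mylv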

What do you think about these recovery possibilities? No question, however, that RAID+LVM looks safer.

Quote:
Originally Posted by DukeNuke2
How about OpenSolaris with ZFS? There are many tutorials on ZFS and how to build a home NAS system...
That could definitely be worth a try. But I'll first investigate the system I am most knowledgeable about.