07-06-2012
There are two major concepts in LVM to grasp:

Striping: deliberately creating a Logical Volume across partitions on multiple physical discs. This can give a dramatic performance improvement.

Mirroring: keeping one (or preferably more) mirror copies of your critical Logical Volumes on totally different physical disc drives.
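The two concepts map directly onto lvcreate options. A minimal sketch, assuming a volume group named vg01 that spans at least two physical volumes (the volume group and LV names here are hypothetical):

```shell
# Striped LV: data interleaved across 2 physical volumes (-i 2)
# in 64 KiB chunks (-I 64) for better throughput.
lvcreate --type striped -i 2 -I 64 -L 10G -n lv_fast vg01

# Mirrored LV: one extra copy (-m 1) kept on a different physical volume,
# so the LV survives the loss of a single disc drive.
lvcreate --type raid1 -m 1 -L 10G -n lv_safe vg01
```

Note that a striped LV with no mirror actually increases your exposure: losing any one of the discs loses the whole LV.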
Disc drives do fail. With careful design for resilience you can keep running with a failed disc drive and replace a hot-pull disc without interruption to service. Taking this concept further, you can fit multiple hot-spare disc drives and configure LVM to replace a failed drive automatically.
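On LVM2 the automatic-replacement behaviour is a configuration choice; the snippet below is a sketch assuming RAID-type LVs and spare capacity in the volume group (vg01/lv_safe is a hypothetical name):

```shell
# In /etc/lvm/lvm.conf, the activation section controls what the
# dmeventd monitoring daemon does when a mirror leg fails:
#   activation {
#       raid_fault_policy = "allocate"   # rebuild onto spare space automatically
#   }
# With the default "warn" policy you repair by hand after swapping the
# hot-pull drive:
lvconvert --repair vg01/lv_safe
```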
Given the choice between performance and resilience, I would choose resilience every time.
Don't forget to check your server for failed disc drives at least once a day.
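That daily check can be scripted. A minimal sketch, assuming LVM2 (whose lvs command reports an empty health status for a healthy LV and strings like "partial" or "refresh needed" otherwise); suitable for a cron job:

```shell
# Flag any Logical Volume whose health status field is non-empty.
# --noheadings suppresses the header line; a healthy LV yields one
# field (the name), an unhealthy one yields two or more.
lvs --noheadings -o lv_name,lv_health_status | awk 'NF > 1 {print "PROBLEM:", $1, $2}'
```

Wire the output into mail or your monitoring system so a silent failure does not sit unnoticed.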
PS. Checking resilient disc arrays is equally important, because a failure inside a hardware array will not be visible to your UNIX system.