I want to remove hdisk1 from volume group diskpool_4 and migrate the PV contents from hdisk1 to hdisk2, but I am facing problems. What is the quickest way to migratepv and remove hdisk1?
Code:
# lspv | grep diskpool_4
hdisk1 00c7780e2e21ec86 diskpool_4 active
hdisk2 00c7780ea5bd16bb diskpool_4 active
# lsvg -l diskpool_4
diskpool_4:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
clodba jfs 720 1440 2 open/stale N/A
# lsvg diskpool_4
VOLUME GROUP: diskpool_4 VG IDENTIFIER: 00c7780e00004c000000013c210bf284
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 2234 (571904 megabytes)
MAX LVs: 1024 FREE PPs: 794 (203264 megabytes)
LVs: 1 USED PPs: 1440 (368640 megabytes)
OPEN LVs: 1 QUORUM: 2 (Enabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 1 STALE PPs: 1
ACTIVE PVs: 2 AUTO ON: yes
MAX PPs per VG: 1048576 MAX PVs: 1024
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
MIRROR POOL STRICT: off
PV RESTRICTION: none INFINITE RETRY: no
DISK BLOCK SIZE: 512 CRITICAL VG: no
#
# lsvg -p diskpool_4
diskpool_4:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk2 active 1117 396 224..79..00..00..93
hdisk1 active 1117 398 184..00..01..00..213
# lspv -l hdisk1
hdisk1:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 719 719 40..223..222..223..11 N/A
# lspv -l hdisk2
hdisk2:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 720 721 00..144..223..223..131 N/A
# migratepv -l clodba hdisk1 hdisk2
0516-076 lmigratelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.
0516-812 migratepv: Warning, migratepv did not completely succeed;
all physical partitions have not been moved off the PV.
# syncvg -v diskpool_4
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume clodba.
0516-932 /etc/syncvg: Unable to synchronize volume group diskpool_4.
First, before anything else, back up your data and make sure it is recoverable in a usable way.
Two tested copies at least.
Looking at this part ....
Code:
# lspv -l hdisk1
hdisk1:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 719 719 40..223..222..223..11 N/A
# lspv -l hdisk2
hdisk2:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 720 721 00..144..223..223..131 N/A
.... suggests the logical volume is mirrored, but you must have (at least) one LP that maps to two PPs that are both on hdisk2. That is enough of a problem that the mirroring is probably not worth having: lose hdisk2 and you lose logical volume integrity, the filesystem and the data. The filesystem may be recoverable, but the data loss is unpredictable.
You may be able to turn on relocation with a chlv (or chvg) -b .... command and force the re-sync. What does the full output of lslv -m clodba give you? Sadly it will be quite long given the number of LPs. You will probably see that copy 1 is entirely on hdisk1 except for one PP. You might manage to force the removal of the PPs from hdisk1 with rmlvcopy clodba 1 hdisk1; if that works, hdisk1 will be empty (check with lspv -l hdisk1) and hdisk2 should be reduced to just 720 PPs in use.
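Rather than eyeballing all 720 lines of lslv -m, you could let awk count them. A small sketch (assuming the usual "LP  PP1 PV1  PP2 PV2  PP3 PV3" column layout of lslv -m output; the helper function name is mine):

```shell
# Summarize how many PPs of each mirror copy sit on each disk.
summarize_copies() {
    awk 'NR > 2 {
            copy1[$3]++                  # PV holding the first copy of this LP
            if (NF >= 5) copy2[$5]++     # PV holding the second copy, if any
        }
        END {
            for (d in copy1) print "copy1", d, copy1[d]
            for (d in copy2) print "copy2", d, copy2[d]
        }' | sort
}
# On the live system you would run:  lslv -m clodba | summarize_copies
```

Any LP whose two copies share a disk shows up as the odd disk out in the copy1/copy2 totals.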
For the future I would consider setting the volume group mirror pool strictness (is that the -M flag?) to force the PPs for each LP onto separate PVs, and even to keep LV copies from being mixed across volumes. Personally, I'm more of an anorak than that and force creation/extension to use the PPs I set, but that's just me.
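If you do go that route, a hedged sketch of the per-LV equivalent (from memory; check the man pages for your AIX level, and run it only once the copies are clean):

```shell
# Hypothetical hardening step: strict allocation makes the LVM place each
# copy of an LP on a different PV, so two copies can never share a disk again.
chlv -s y clodba
lslv clodba | grep 'SEPARATE PV'   # expect "EACH LP COPY ON A SEPARATE PV ?: yes"
```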
# migratepv -l clodba hdisk1 hdisk2
0516-076 lmigratelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.
0516-812 migratepv: Warning, migratepv did not completely succeed;
all physical partitions have not been moved off the PV.
Checking again
Code:
# lsvg -l diskpool_4
diskpool_4:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
0516-1147 : Warning - logical volume clodba may be partially mirrored.
clodba jfs 720 1439 3 open/stale N/A
# lsvg -p diskpool_4
diskpool_4:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk2 active 1117 397 224..00..00..00..173
hdisk1 active 1117 398 184..00..01..00..213
# lspv -l hdisk1
hdisk1:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 719 719 40..223..222..223..11 N/A
# lspv -l hdisk2
hdisk2:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
clodba 720 720 00..223..223..223..51 N/A
# syncvg -v diskpool_4
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume clodba.
0516-932 /etc/syncvg: Unable to synchronize volume group diskpool_4.
# chlv -b y clodba
0516-012 lchangelv: Logical volume must be closed. If the logical
volume contains a filesystem, the umount command will close
the LV device.
0516-704 chlv: Unable to change logical volume clodba.
#
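That chlv failure is expected while the LV is open: whatever has /dev/rclodba open (there is no filesystem, so presumably the database) has to come down first. A quick way to confirm the state from lsvg -l output before retrying chlv -b y (a sketch, assuming the column layout shown earlier in this thread):

```shell
# Print the LV STATE field for one LV; "closed/..." means chlv can proceed.
lv_state() {   # usage: lsvg -l diskpool_4 | lv_state clodba
    awk -v lv="$1" '$1 == lv { print $(NF - 1) }'
}
```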
and then
Code:
# rmlvcopy clodba 1 hdisk1
0516-1939 lquerypv: PV identifier not found in VGDA.
0516-304 getlvodm: Unable to find device id 0000000000000000 in the Device
Configuration Database.
0516-848 rmlvcopy: Failure on physical volume 0000000000000000, it may be missing
or removed.
# lspv
hdisk0 00c7780e79838606 rootvg active
hdisk1 00c7780e2e21ec86 diskpool_4 active
hdisk2 00c7780ea5bd16bb diskpool_4 active
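That all-zero PV identifier in the rmlvcopy error suggests the ODM has lost track of the VG after the half-finished migratepv. A hedged sketch of the usual ODM resync (an assumption on my part; take a fresh backup before touching anything):

```shell
# Rebuild the ODM's view of diskpool_4 from the on-disk VGDA, then retry.
synclvodm -v diskpool_4
rmlvcopy clodba 1 hdisk1
lspv -l hdisk1        # should come back empty if the copy removal worked
```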
Last edited by filosophizer; 12-06-2018 at 03:58 PM..