Post 303038037 by Necronomic, 08-22-2019, 04:14 AM
AIX - stale partition

Hi everybody,
I have a problem with LVM mirroring on my AIX 6.1 / PowerHA 6.1 cluster. After a SAN pathing problem in one of our datacenters, one of my LVs is in the stale state.

Code:
 # lsvg cpsdata2vg
VOLUME GROUP:       cpsdata2vg               VG IDENTIFIER:  00fb518c00004c0000000169445f4c2c
VG STATE:           active                   PP SIZE:        1024 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      6142 (6289408 megabytes)
MAX LVs:            256                      FREE PPs:       441 (451584 megabytes)
LVs:                2                        USED PPs:       5701 (5837824 megabytes)
OPEN LVs:           2                        QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          1                        STALE PPs:      108
ACTIVE PVs:         1                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            1                        Active Nodes:       2 3 4
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no

 # lspv
...
hdisk36         00fb518c4457e71a                    cpsdata2vg      concurrent
hdisk37         00fb518c4457f895                    cpsdata2vg      concurrent

 # lsvg -l cpsdata2vg
cpsdata2vg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
cpsabcd2lv          jfs2       2850    5700    2    open/stale    /cpsabcd2
loglv00             jfs2log    1       1       1    open/syncd    N/A

 # lsvg -p cpsdata2vg
cpsdata2vg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk36           active            3071        220         00..00..00..00..220
hdisk37           missing           3071        221         00..01..00..00..220


Normally I fix this with the varyonvg command for non-concurrent volume groups, but I can't find a documented procedure for Enhanced-Capable concurrent VGs. The sketch below shows what I usually run in the non-concurrent case.
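(A sketch only; the hdisk and VG names are the ones from the output above, and the path-recovery step depends on the SAN setup.)

Code:
 # rescan devices so hdisk37 comes back once the SAN paths are restored
 cfgmgr
 lspath -l hdisk37        # all paths should show "Enabled"
 varyonvg cpsdata2vg      # re-varyon of an active VG reactivates the missing PV
                          # and by default resynchronizes the stale partitions
 # alternatively, if the PV is already back to "active":
 syncvg -v cpsdata2vg     # resync all stale partitions in the VG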
I guess the right way to solve this is varyonvg as well, with the '-c' parameter for a concurrent varyon, roughly as in the sketch below.
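(This is only my untested guess for the Enhanced-Capable concurrent VG, using the names from the output above.)

Code:
 # guess: concurrent re-varyon on the node(s) that lost hdisk37
 varyonvg -c cpsdata2vg
 # or maybe only a resync is needed, without touching the varyon state:
 syncvg -v cpsdata2vg
 # then verify:
 lsvg -p cpsdata2vg       # hdisk37 should be "active" again
 lsvg -l cpsdata2vg       # cpsabcd2lv should go back to open/syncd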

Does anyone have experience with this procedure? I can't unmount this filesystem right now, so I'd like to know whether I can run the resync without downtime or any risk to the filesystem.

Thank you.
 
