HACMP resource group State not STABLE


 
# 1  
Old 01-25-2011
HACMP resource group State not STABLE

Hi,

Not sure if this is the correct forum to post this on but maybe a mod could move it if not.

When trying to move an HACMP resource group between LPARs on AIX, I receive the following:

Code:
State not STABLE/RP_RUNNING or ibcasts Join for node 2 rejected,
Clearing in join protocol flag
Attempting to recover resource group from error
"Resource group not found in client configuration"
echo '+BrokerMB02rg:clvaryonvg prmb02vg[808]' LC_ALL=C
0516-052 varyonvg: Volume group cannot be varied on without a quorum.
More physical volumes in the group must be active. Run diagnostics on inactive PVs.

When these errors occur, the resource group then starts back up on the original node.

I have checked the LVs; quorum is disabled and there are no stale PVs.
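
For reference, the checks I ran looked roughly like this (prmb02vg is the VG named in the error above):

Code:
lsvg prmb02vg | grep -i quorum    # shows "QUORUM: 1 (Disabled)" when quorum is off
lsvg -p prmb02vg                  # lists the PVs of the VG and their state (active/missing)
lsvg -l prmb02vg                  # lists the LVs and whether any are stale (open/stale)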

Any help or pointers in the right direction would be much appreciated.

Thanks,

Matt

# 2  
Old 01-25-2011
Hi,

I would try to export and re-import the volume group in question on the inactive node. Make sure you keep the correct VG major number (you can import using the -V flag).
If it finds all PVs, you should just sync the cluster config and then try again. If it has problems during the import, you can take it from there. I had similar issues after migrating storage across disks - duplicate PVIDs on some disks - and the cluster did not like that much.

Hope that helps,
regards
zxmaus
# 3  
Old 01-26-2011
Thanks for the response. If I carry out the following, you will have to bear with me as I've not been using HACMP for long:
1. Disable monitoring on the resource group, then shut down the apps using the filesystems.
2. exportfs -u /dirname
3. exportfs /dirname
When you say sync the cluster config, is this something that can be done via a command, or will it happen automatically if the re-import is successful?

Thanks

Matt
# 4  
Old 01-26-2011
Hello
On the inactive node nothing should be mounted from that volume group, so all you need to do is:
1. lspv - copy the output somewhere so you know which disks belong to the volume group.
2. exportvg yourvolumegroupname
3. importvg -V <VG major number> -Ry yourvolumegroupname <hdisk name of any disk belonging to this volume group>. You can look up the major number on the active node with ls -ali /dev | grep yourvolumegroupname - the major and minor numbers are listed beside the name. This import should hopefully happen without any errors; if you get an error, post it here so we can follow up on it.
4. Cluster synchronization is part of the HACMP menus in smitty - look under Extended Configuration. A rough sketch of the whole sequence is below.
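
Put together, it would look something like this - prmb02vg, hdisk4 and the major number 45 are only placeholders, take the real values from your own lspv and ls -ali /dev output:

Code:
# on the ACTIVE node: note the major number of the VG device
ls -ali /dev | grep prmb02vg

# on the INACTIVE node: record the disks, then export and re-import the VG
lspv                                  # note which hdisks belong to prmb02vg
exportvg prmb02vg
importvg -V 45 -Ry prmb02vg hdisk4    # 45 = major number noted above, hdisk4 = any disk of the VG
lsvg -p prmb02vg                      # verify all PVs were found and are active

Afterwards synchronize the cluster; on our version that is under smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization.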

Kind regards
zxmaus
# 5  
Old 01-26-2011
Thanks for the help - I will have to give this a try at the weekend as it's production.
# 6  
Old 01-27-2011
Check the LUNs' reserve policy (lsattr -El hdiskx).

It should be set to no_reserve; if not, set it:

Code:
chdev -l hdiskx -a reserve_policy=no_reserve
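
To check all disks of the resource group's VG in one go, something like this should work (prmb02vg is just an example name; on some disk drivers the attribute is called reserve_lock instead of reserve_policy):

Code:
# print the reserve_policy of every hdisk that belongs to the VG (example name prmb02vg)
for pv in $(lspv | awk '$3 == "prmb02vg" {print $1}'); do
    echo "$pv: $(lsattr -El $pv -a reserve_policy -F value)"
done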

It's always a good idea to try out manually what the cluster is doing.
Normally there is no need to set the VG online yourself, since it should be online in concurrent passive mode as soon as the cluster starts.

When the resource group moves, the VG is set to concurrent active on one node and concurrent passive on the other node.

Is the VG concurrent capable? (lsvg vgname)

Code:
VOLUME GROUP: xxxvg                 VG IDENTIFIER:  xxxxxx
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      67478 (8637184 megabytes)
MAX LVs:            256                      FREE PPs:       1078 (137984 megabytes)
LVs:                31                       USED PPs:       66400 (8499200 megabytes)
OPEN LVs:           31                       QUORUM:         1 (Disabled)
TOTAL PVs:          34                       VG DESCRIPTORS: 34
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         34                       AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent                               
Node ID:            1                        Active Nodes:       2 
MAX PPs per VG:     131072                   MAX PVs:        1024                                                                                                                                                                            
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no                                                                                                                                                                              
HOT SPARE:          no                       BB POLICY:      relocatable

It should look like this; the VG state may be either active or passive.
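
To check just that field you can grep the lsvg output; if the VG turns out not to be enhanced concurrent capable it can be converted, but on a cluster do that in a maintenance window (vgname is a placeholder):

Code:
lsvg vgname | grep -iE 'concurrent|vg mode'   # no output usually means the VG is not enhanced concurrent capable
# chvg -C vgname converts a VG to enhanced concurrent capable - check your AIX/HACMP level
# first and prefer doing it through C-SPOC / with the VG varied off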
# 7  
Old 01-27-2011
Hi,

I have checked the LUNs' reserve policy and they are all set to no_reserve.

When checking the VGs there is no field for Concurrent or VG Mode, so maybe this is a different release of HACMP?

Thanks,

Matt