Not sure if this is the correct forum to post this on but maybe a mod could move it if not.
When trying to move a HACMP resource group between LPARs on AIX, I receive the following:
State not STABLE/RP_RUNNING or ibcasts Join for node 2 rejected,
Clearing in join protocol flag
Attempting to recover resource group from error
"Resource group not found in client configuration"
+BrokerMB02rg:clvaryonvg prmb02vg[808] LC_ALL=C
0516-052 varyonvg: Volume group cannot be varied on without a quorum.
More physical volumes in the group must be active. Run diagnostics on inactive PVs.
When these errors occur, the resource group then starts back up on the original node.
I have checked the LVs; quorum is disabled and there are no stale PVs.
Any help or pointers in the right direction would be much appreciated.
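For reference, here is one way to confirm the quorum setting and PV state on each node (the VG name prmb02vg is taken from the log above; adjust to your environment):

```shell
# Show VG attributes; "QUORUM: 1" means quorum is disabled,
# and "STALE PVs"/"STALE PPs" should be 0
lsvg prmb02vg

# Per-disk view: every PV should show "active", none "missing" or "removed"
lsvg -p prmb02vg

# Per-LV view, to spot any stale logical volume copies
lsvg -l prmb02vg
```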
I would try to export and re-import the volume group in question on the inactive node. Make sure you keep the correct VG major number (you can import using the -V flag).
If it finds all PVs you should just sync the cluster config and then try again. If it has problems during the import you can take it from there. I had similar issues after migrating storage across disks (duplicate PVIDs on some disks), and the cluster did not like that much.
Thanks for the response. So if I carry out the following (you will have to bear with me, as I've not been using HACMP for long):
1. Disable monitoring on the resource group, then shut down the apps using the filesystems.
2. exportfs -u /dirname
3. exportfs /dirname
When you say sync the cluster config, is this something that can be done via a command, or will it happen automatically if the re-import is successful?
Hello
On the inactive node nothing should be mounted from that volume group, so all you need to do is:
1. lspv (copy the output somewhere so you know which disks belong to the volume group)
2. exportvg yourvolumegroupname
3. importvg -V <VG major number> -Ry <volumegroupname> <hdisk belonging to this volume group> (match the hdisk to the volume group by its PVID in your lspv output). You can look up the major number on the active node with ls -ali /dev | grep volumegroupname; the major and minor numbers are listed beside the name. This import should hopefully complete without any errors; if you get an error, post it here so we can follow up on it.
4. Cluster synchronization is part of the HACMP menus in smitty; look under Extended Configuration ...
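Putting the steps above together, the sequence on the inactive node might look like this sketch (prmb02vg, hdisk4, and the major number 45 are placeholders; read the real values from your lspv output and from the active node):

```shell
# 1. Record the PVID-to-hdisk mapping before touching anything
lspv > /tmp/lspv.before

# On the ACTIVE node: find the VG major number
# (it is the first number shown beside the device name)
ls -ali /dev | grep prmb02vg

# 2. On the inactive node: remove the stale VG definition from the ODM
exportvg prmb02vg

# 3. Re-import with the same major number (-V); -R restores the LV
#    device-file ownership and permissions, -y names the VG
importvg -V 45 -Ry prmb02vg hdisk4

# If the import varied the VG on, vary it off again and keep
# auto-varyon disabled so the cluster controls activation
varyoffvg prmb02vg
chvg -a n prmb02vg
```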
Check the LUNs' reserve policy (lsattr -El hdiskX).
It should be set to no_reserve; if not, set it.
It's always a good idea to try out manually what the cluster is doing.
Normally there is no need to vary the VG on yourself, since it should be online in concurrent passive mode as soon as the cluster starts.
When the resource group moves, the VG is set to concurrent active on one node and concurrent passive on the other node.
Is the VG concurrent-capable? (lsvg vgname)
It should look like this; the VG state may be active or passive.
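For reference, on an enhanced-concurrent-capable VG the lsvg output includes lines along these lines (values illustrative, VG name is a placeholder):

```shell
lsvg prmb02vg
# VOLUME GROUP:  prmb02vg        VG STATE:        active
# Concurrent:    Enhanced-Capable    Auto-Concurrent: Disabled
# VG Mode:       Concurrent
```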