01-25-2011
Hi,
I would try to export and reimport the volume group in question on the inactive node. Make sure you keep the correct VG major number (you can import using the -V flag).
If it finds all PVs, you should just sync the cluster config and then try again. If it has problems during the import, you can take it from there. I had similar issues after migrating storage across disks - duplicate PVIDs on some disks - the cluster did not like that much.
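Roughly, the sequence would look like the sketch below. The VG name (datavg), hdisk number, and major number (55) are just placeholders for illustration - substitute your own, and check the major number on the active node first so both nodes match:

```shell
# On the ACTIVE node: find the major number the VG currently uses
ls -l /dev/datavg              # major number appears in the device listing
lvlstmajor                     # lists free major numbers, as a cross-check

# On the INACTIVE node: export the stale definition and reimport
varyoffvg datavg               # only if it happens to be varied on
exportvg datavg                # removes the VG definition from the ODM
importvg -V 55 -y datavg hdisk4   # reimport with the matching major number
varyonvg datavg                # verify PVs and filesystems look correct
varyoffvg datavg               # leave it offline; the cluster varies it on

# Then synchronize the cluster configuration (e.g. via smitty hacmp,
# Extended Configuration -> Verification and Synchronization)
```

Leave the VG varied off on the standby side afterwards, since HACMP expects to activate it itself during a failover.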
Hope that helps,
regards
zxmaus