Operating Systems > AIX: HACMP resource group State not STABLE
Post 302490839 by zxmaus on Tuesday 25th of January 2011 08:59:53 PM
Hi,

I would try to export and reimport the volume group in question on the inactive node. Make sure you keep the correct VG major number (you can set it on import with the -V flag).
If the import finds all PVs, you should just sync the cluster config and then try again. If it has problems during the import, you can take it from there. I had similar issues after migrating storage across disks - duplicate PVIDs on some disks - and the cluster did not like that much.
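
Roughly the sequence I have in mind, just as a sketch - datavg, hdisk2 and major number 55 below are made-up examples, so substitute your own VG name, disk and a major number that is free on both nodes:

    # on the active node: note the VG major number
    # (it shows up where the file size normally is, e.g. "55, 0")
    ls -l /dev/datavg

    # on the inactive node: confirm that major number is still free there
    lvlstmajor

    # drop the stale definition and reimport with the same major number
    exportvg datavg
    importvg -V 55 -y datavg hdisk2

    # hand control back to the cluster: no auto-varyon, leave it varied off
    chvg -a n datavg
    varyoffvg datavg

    # quick check for the duplicate-PVID problem I mentioned
    # (sorting on the PVID column puts any duplicates next to each other)
    lspv | sort -k2

Keeping the same major number on both nodes matters in particular if the resource group NFS-exports any of the filesystems. Once the import looks clean, verify and synchronize the cluster from the active node (on our HACMP level that is under smitty hacmp -> Extended Configuration) before you retry the resource group.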

Hope that helps,
regards
zxmaus
 

8 More Discussions You Might Find Interesting

1. High Performance Computing

Sun Cluster resource group can't fail over

I have recently set up a 4-node cluster running Sun Cluster 3.2 and have installed 4 zones on each node. When installing the zones I had to install the zone on all nodes, then on the last node do a zlogin -C <zonename>; this worked OK. Then I tried to switch the zone to node a... (14 Replies)
Discussion started by: lesliek

2. Solaris

Solaris + VCS, move a resource to another group

Hi, I am running Solaris 10 + VCS 5 in one of our environments. Recently one of my colleagues configured all resources in a single service group (i.e. one service group which has all the resources). Usually we create separate service groups for hardware and for applications. For e.g. SYS_HW_GRP will... (0 Replies)
Discussion started by: mpics66

3. AIX

Resource Group Monitoring

Hi, I have a requirement to monitor the HACMP resource groups. At present in my environment, if the resource groups fail over from the preferred node to the secondary node we don't get a notification. Can someone help me in creating a script? I have more than one RG online (max 4 resource groups in... (2 Replies)
Discussion started by: srnagu

4. AIX

Adding a Volume Group to an HACMP Resource Group?

Hi, I have a 2-node cluster which is working in active/passive mode (i.e. node #1 is running, and when it goes down node #2 takes over). Now there's a requirement that we need a mount point, say /test, that should be available on active node #1, and when node #1 goes down and node #2 takes... (6 Replies)
Discussion started by: aixromeo

5. AIX

HACMP, NFS cross-mount problem. Cannot move resource group

Hi, I'm new to HACMP. Currently I set up a cluster with NFS cross-mounts following this guide: kristijan.org, NFS cross-mounts in PowerHA/HACMP. My cluster has two nodes: erp01 and erp02. I'm using NFSv4, and the filesystem for NFS is /sapnfs. The cluster starts without problems, but I cannot move the RG (with... (3 Replies)
Discussion started by: giobuon

6. AIX

Adding existing VG to powerHA Resource group

Hello. I am running AIX 6.1 and PowerHA 6.1. I have an active/active (Prod/Dev) cluster; each side will fail over to the other. On my prod side I have an active volume group with a file system. The VG is imported on both nodes and active (varied on, file system mounted) on the prod... (3 Replies)
Discussion started by: mhenryj

7. AIX

PowerHA 6.1: bring resource group online issue

Hi all, I have the following in hacmp.out when bringing the resource group online. The volume groups themselves are Enhanced-Capable, and on each node I can vary on the VGs and mount the filesystems. +main1_rg_01:cl_activate_vgs STATUS=0 +main1_rg_01:cl_activate_vgs typeset -li STATUS +main1_rg_01:cl_activate_vgs... (2 Replies)
Discussion started by: OdilPVC

8. AIX

Sysmirror 7.1.3 Resource Group NFS mounts

Hello all, I'm working to fix a two-node SysMirror cluster that uses NFS mounts from a NetApp appliance as the data repository. Currently all the NFS mounts/unmounts are called from the application controller scripts, and since file collection isn't currently working (one fight at a time... (3 Replies)
Discussion started by: ZekesGarage
CRM_MON(8)

NAME
       crm_mon - monitor the cluster's status

SYNOPSIS
       crm_mon [-V] -d -p filename -h filename
       crm_mon [-V] [-1|-n|-r] -h filename
       crm_mon [-V] [-n|-r] -X filename
       crm_mon [-V] [-n|-r] -c|-1
       crm_mon [-V] -i interval
       crm_mon -?

DESCRIPTION
       The crm_mon command allows you to monitor your cluster's status and configuration. Its output includes the number of nodes, uname, uuid, status, the resources configured in your cluster, and the current status of each. The output of crm_mon can be displayed at the console or printed into an HTML file. When provided with a cluster configuration file without the status section, crm_mon creates an overview of nodes and resources as specified in the file.

OPTIONS
       --help, -?
              Provide help.

       --verbose, -V
              Increase the debug output.

       --interval seconds, -i seconds
              Determine the update frequency. If -i is not specified, the default of 15 seconds is assumed.

       --group-by-node, -n
              Group resources by node.

       --inactive, -r
              Display inactive resources.

       --as-console, -c
              Display the cluster status on the console.

       --one-shot, -1
              Display the cluster status once on the console then exit (does not use ncurses).

       --as-html filename, -h filename
              Write the cluster's status to the specified file.

       --daemonize, -d
              Run in the background as a daemon.

       --pid-file filename, -p filename
              Specify the daemon's pid file.

       --xml-file filename, -X filename
              Specify an XML file containing a cluster configuration and create an overview of the cluster's configuration.

EXAMPLES
       Display your cluster's status and get an updated listing every 15 seconds:
              crm_mon

       Display your cluster's status and get an updated listing after an interval specified by -i. If -i is not given, the default refresh interval of 15 seconds is assumed:
              crm_mon -i interval[s]

       Display your cluster's status on the console:
              crm_mon -c

       Display your cluster's status on the console just once, then exit:
              crm_mon -1

       Display your cluster's status and group resources by node:
              crm_mon -n

       Display your cluster's status, group resources by node, and include inactive resources in the list:
              crm_mon -n -r

       Write your cluster's status to an HTML file:
              crm_mon -h filename

       Run crm_mon as a daemon in the background, specify the daemon's pid file for easier control of the daemon process, and create HTML output. This allows you to continuously create HTML output that can be easily processed by other monitoring applications:
              crm_mon -d -p filename -h filename

       Display the cluster configuration laid out in an existing cluster configuration file (filename), group the resources by node, and include inactive resources. This command can be used for dry runs of a cluster configuration before rolling it out to a live cluster:
              crm_mon -r -n -X filename

FILES
       /var/lib/heartbeat/crm/cib.xml - the CIB (minus status section) on disk. Editing this file directly is strongly discouraged.

AUTHOR
       crm_mon was written by Andrew Beekhof.

07/05/2010                                                          CRM_MON(8)