09-11-2011
Thanks for this. That should offer some more options to make the required work a little easier. I agree with what you say about the hdisk numbering. If I remove the disk heartbeat networks (temporarily, of course), I think that is the only dependency HACMP has on actual hdisk numbers. The HACMP resource groups only contain the VG names, so it should be possible to clean up the hdisk numbering on one side of the cluster at a time (while HACMP is stopped there) and then add the heartbeat networks back with the new numbering in place. For what it's worth, here is a minimal sketch of the per-node cleanup I have in mind (hdisk4 is a placeholder, and it assumes HACMP is stopped on that node and the disk heartbeat networks have already been removed from the topology):
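# Sketch only - hdisk4 is a hypothetical example; adapt to your disks.
# Assumes HACMP is stopped on this node and the disk HB networks are
# removed from the topology, so nothing references hdisk numbers here.

lspv                  # record the PVID-to-hdisk mapping before touching anything
rmdev -dl hdisk4      # delete the misnumbered device definition from the ODM only;
                      # the data on the LUN itself is untouched
cfgmgr                # rediscover the disks so they enumerate in the desired order
lspv                  # confirm the PVIDs now map to the expected hdisk numbers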
cmruncl(1m)
NAME
cmruncl - run a high availability cluster
SYNOPSIS
cmruncl [-f] [-v] [-n node_name...] [-t | -w none]
DESCRIPTION
cmruncl causes all nodes in a configured cluster, or all nodes specified, to start their cluster daemons and form a new cluster.
To start a cluster, a user must either be superuser (UID=0) or have an access policy of FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl(1m).
This command should only be run when the cluster is not active on any of the configured nodes. This command verifies the network configuration before causing the nodes to start their cluster daemons. If a cluster is already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.
If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes in the cluster must be available for the cluster to start unless a subset of nodes is specified.
Options
cmruncl supports the following options:
-f Force cluster startup without the warning message and continuation prompt that are printed with the -n option.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages.
The -w option is not required with the -t option as -t does not validate network connectivity, but assumes that all the
nodes can meet any external dependencies such as EMS resources, package subnets, and storage.
-n node_name...
Start the cluster daemon on the specified subset of node(s).
-w none By default, network probing is performed to check that the network connectivity is the same as when the cluster was configured. Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing. The option should only be used if the current network configuration is known to be correct from a recent check.
RETURN VALUE
cmruncl returns the following value:
0 Successful completion.
1 Command failed.
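A minimal scripting sketch, assuming a POSIX shell, that acts on these return values:
if cmruncl -v; then
    echo "cluster started"
else
    # Non-zero exit: per the DESCRIPTION above, if a partial cluster is
    # already running, cmrunnode is the tool to join the remaining nodes.
    echo "cmruncl failed" >&2
    exit 1
fi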
EXAMPLES
Run the cluster daemon:
cmruncl
Run the cluster daemons on node1 and node2:
cmruncl -n node1 -n node2
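Two further illustrative invocations, combining the options described above (node names are placeholders):
Assess placement without changing the state of any node or package:
cmruncl -t
Start node1 and node2 verbosely, skipping the network probe (only when a recent check has confirmed the network configuration):
cmruncl -v -w none -n node1 -n node2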
AUTHOR
cmruncl was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).