12-20-2019
Could this be that whatever supplies the common disk used for the heartbeat failed? In that case, neither node would be able to keep updating the shared disk, and the usual response is to terminate all services to avoid getting in the way, i.e. to panic/abort. We've had an Oracle RAC database cluster do this before. Not pretty, but it is the best course of action to avoid damage.
Robin
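The failure mode described above can be sketched in a few lines of shell. This is an illustrative toy, not HACMP or Serviceguard code: HB_DEV is a stand-in path (a real cluster heartbeats against a raw shared disk/LUN), and exit 1 stands in for the node panic/abort.

```shell
#!/bin/sh
# Toy disk-heartbeat writer: if the shared device stops accepting
# writes, terminate rather than risk split-brain damage. HB_DEV is
# a stand-in path; real clusters write to a raw shared disk/LUN.
HB_DEV=${HB_DEV:-/tmp/hb_block}

write_heartbeat() {
    # A real implementation writes a timestamped sector with O_DIRECT;
    # plain redirection stands in for that here.
    date +%s > "$HB_DEV" 2>/dev/null
}

if write_heartbeat; then
    echo "heartbeat written to $HB_DEV"
else
    echo "heartbeat write failed: halting to avoid damage" >&2
    exit 1
fi
```

The point of the sketch is the asymmetry Robin describes: a node that cannot prove it still owns the heartbeat medium assumes the worst and takes itself out of the picture.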
cmruncl(1m)
NAME
cmruncl - run a high availability cluster
SYNOPSIS
cmruncl [-f] [-v] [-n node_name...] [-t | -w none]
DESCRIPTION
cmruncl causes all nodes in a configured cluster or all nodes specified to start their cluster daemons and form a new cluster.
To start a cluster, a user must either be superuser (UID=0) or have an access policy of FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl(1m).
This command should only be run when the cluster is not active on any of the configured nodes. This command verifies the network configuration before causing the nodes to start their cluster daemons. If a cluster is already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.
If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes in the cluster must be
available for the cluster to start unless a subset of nodes is specified.
Options
cmruncl supports the following options:
-f Force cluster startup without the warning message and continuation prompt that are printed with the -n option.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages.
The -w option is not required with the -t option as -t does not validate network connectivity, but assumes that all the
nodes can meet any external dependencies such as EMS resources, package subnets, and storage.
-n node_name...
Start the cluster daemon on the specified subset of node(s).
-w none By default, network probing is performed to check that the network connectivity is the same as when the cluster was configured. Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing. The option should only be used if the network configuration is known to be correct from a recent check.
RETURN VALUE
cmruncl returns the following value:
0 Successful completion.
1 Command failed.
EXAMPLES
Run the cluster daemon:
cmruncl
Run the cluster daemons on node1 and node2:
cmruncl -n node1 -n node2
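The examples above can be combined with the documented return values (0 = success, 1 = failure) in a startup wrapper. This is a hedged sketch, not an official Serviceguard script: cmruncl only exists on an HP-UX Serviceguard node, so the sketch installs a stub function when the real command is absent, and node1/node2 are placeholder node names.

```shell
#!/bin/sh
# Start the cluster on two nodes and act on cmruncl's documented
# exit codes (0 = success, 1 = failure). When run off-cluster, a
# stub stands in for the real command so the flow can be exercised.
if ! command -v cmruncl >/dev/null 2>&1; then
    cmruncl() { echo "stub: would start cluster daemons on: $*"; return 0; }
fi

if cmruncl -v -n node1 -n node2; then
    CLUSTER_STATE=started
    echo "cluster startup succeeded"
else
    CLUSTER_STATE=failed
    echo "cmruncl failed; inspect syslog on each node" >&2
fi
```

Checking the exit status this way matters because, per the DESCRIPTION, cmruncl must not be re-run against a partially running cluster; on failure the right follow-up is cmviewcl and cmrunnode, not a blind retry.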
AUTHOR
cmruncl was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software