09-23-2010
All logging from the cluster is kept in the /var/adm/messages files, so if those files have been overwritten, there is no way for you to see why the cluster did the failover unless your application did some logging of its own.
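If the messages files are still on disk, a quick case-sensitive search for cluster entries can show what happened around the failover. A minimal sketch; the sample log lines below are invented for illustration so the example is self-contained, and real Sun Cluster entries in /var/adm/messages will look different:

```shell
# Sketch: search syslog output for cluster-related entries, roughly what
#   grep -i cluster /var/adm/messages*
# would do on a live node. A made-up sample log is used here.
log=$(mktemp)
cat > "$log" <<'EOF'
Oct  1 02:14:07 node1 Cluster.RGM.rgmd: resource group rg1 on node1 change to RG_OFFLINE
Oct  1 02:14:09 node2 Cluster.RGM.rgmd: resource group rg1 on node2 change to RG_ONLINE
Oct  1 02:14:10 node2 sshd: accepted connection
EOF
# Count lines mentioning the cluster framework
grep -c 'Cluster' "$log"   # prints 2
```

Because /var/adm/messages is rotated (messages.0, messages.1, ...), searching the rotated copies as well is worth doing before concluding the evidence is gone.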
10 More Discussions You Might Find Interesting
1. High Performance Computing
I have recently set up a 4-node cluster running Sun Cluster 3.2,
and I have installed 4 zones on each node. When installing the zones I had to install the zone on all nodes, then on the last node do a zlogin -C <zonename>.
This worked OK.
Then I tried to switch the zone to node a, the... (14 Replies)
Discussion started by: lesliek
2. HP-UX
I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not come up automatically on the other node.
I have to manually do a "vgchange -a y vgname" on the node before the package... (5 Replies)
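With Serviceguard, volume-group activation is normally handled by the package control script rather than done by hand: the VG has to be listed there for the package to run vgchange on startup. A minimal, hypothetical sketch of the relevant control-script entries; the script path, VG name, and mount point are placeholders, not taken from the post:

```shell
# Excerpt from a legacy Serviceguard package control script
# (e.g. /etc/cmcluster/pkg1/pkg1.cntl -- path is illustrative).
# Each VG listed here is activated by the script when the package
# starts on a node, and deactivated when the package halts.
VG[0]="vgname"

# Logical volume / filesystem entries the package mounts after activation:
LV[0]="/dev/vgname/lvol1"
FS[0]="/mnt/app"
FS_TYPE[0]="vxfs"
```

If the VG was added after the package was first configured, it is also worth checking that the updated control script was distributed to all nodes and that the VG is marked cluster-aware (vgchange -c y vgname), since a VG missing from the script will never be activated automatically on failover.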
Discussion started by: Wotan31
3. High Performance Computing
Dear All,
Can anyone explain the pros and cons of Sun Cluster versus Veritas Cluster?
Any comparison chart is highly appreciated.
Regards,
RAA (4 Replies)
Discussion started by: RAA
4. High Performance Computing
I have just completed a first RTFM of "Veritas Cluster Server Management Console Implementation Guide" 5.1, with a view to assessing it to possibly make our working lives easier.
Unfortunately, at my organisation, getting a test installation would be worse than pulling teeth, so I can't just go... (2 Replies)
Discussion started by: Beast Of Bodmin
5. Solaris
Yesterday my customer told me to expect a vcs upgrade to happen in the future. He also plans to stop using HDS and move to EMC.
I am thinking about how to migrate to a Sun Cluster setup instead.
My plan is as follows: leave the existing VCS intact as a fallback plan.
Then install and build Sun Cluster on... (5 Replies)
Discussion started by: sparcguy
6. Gentoo
How do I fail over the cluster on GNU/Linux, and with which command?
My Linux version:
2008 x86_64 x86_64 x86_64 GNU/Linux
What prerequisites do we need to take care of during failover, if any?
Regards (3 Replies)
Discussion started by: sidharthmellam
7. Solaris
Hello experts -
I am planning to install a Sun Cluster 4.0 zone cluster failover. A few basic questions:
(1) Where should I install the cluster software binaries? (The global zone, or the container zone where I am planning to install the zone failover?)
(2) Or should I perform the installation on... (0 Replies)
Discussion started by: NVA
8. Solaris
Dear Experts,
Is it possible for Solaris Cluster to fail over to the second node based on scan rate?
I need documentation on whether Solaris Cluster can do this.
Thank You in Advance
Edy (3 Replies)
Discussion started by: edydsuranta
9. Red Hat
Hi Guys,
I am not very familiar with clusters, but I have a few questions; can someone provide an overview? It would be very helpful for me.
How can I perform a cluster failover test to see that all the services are failing back to the other node? If it is using Veritas Cluster, then what kind of... (2 Replies)
Discussion started by: munna529
10. Solaris
Hi
I am new to this forum and to Oracle DBA work. I would like to know whether we can add Oracle ASM in failover mode in Sun Cluster 3.3 or 4.0, meaning that if Oracle is running along with ASM on node1 and this node goes down due to a hardware issue, then both Oracle and ASM must move to... (1 Reply)
Discussion started by: hb00
cmruncl(1m)
NAME
cmruncl - run a high availability cluster
SYNOPSIS
cmruncl [-f] [-v] [-n node_name...] [-t | -w none]
DESCRIPTION
cmruncl causes all nodes in a configured cluster or all nodes specified to start their cluster daemons and form a new cluster.
To start a cluster, a user must either be superuser (UID=0), or have an access policy of FULL_ADMIN allowed in the cluster configuration
file. See access policy in cmquerycl(1m).
This command should only be run when the cluster is not active on any of the configured nodes. This command verifies the network
configuration before causing the nodes to start their cluster daemons. If a cluster is already running on a subset of the nodes, the
cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.
If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes in the cluster must be
available for the cluster to start unless a subset of nodes is specified.
Options
cmruncl supports the following options:
-f Force cluster startup without warning message and continuation prompt that are printed with the -n option.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages.
The -w option is not required with the -t option as -t does not validate network connectivity, but assumes that all the
nodes can meet any external dependencies such as EMS resources, package subnets, and storage.
-n node_name...
Start the cluster daemon on the specified subset of node(s).
-w none By default network probing is performed to check that the network connectivity is the same as when the cluster was
configured. Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing.
The option should only be used if this network configuration is known to be correct from a recent check.
RETURN VALUE
cmruncl returns the following value:
0 Successful completion.
1 Command failed.
EXAMPLES
Run the cluster daemon:
cmruncl
Run the cluster daemons on node1 and node2:
cmruncl -n node1 -n node2
AUTHOR
cmruncl was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software