Operating Systems > Solaris: SC3.2 issue - cluster transport configuration not right - resulting fail
Post 302382093 by fugitive on Tuesday, 22 December 2009, 06:13 AM
Are you trying to configure the cluster on virtual machines? If yes, I can point you to the answer.
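For anyone hitting the same transport problem, a quick way to confirm whether the private interconnect actually came up is to check the transport status on each node. This is only a minimal sketch, assuming a standard Sun Cluster 3.2 install under /usr/cluster/bin and hypothetical node names node1/node2:

        # Overall view of cluster node membership
        /usr/cluster/bin/clnode status

        # Status of the private interconnect (cluster transport) paths
        /usr/cluster/bin/clinterconnect status

        # Older-style command that also reports transport path state
        /usr/cluster/bin/scstat -W

If the transport paths show as faulted on virtual machines, the usual suspect is the extra virtual adapters not sitting on their own isolated virtual networks.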
 

10 More Discussions You Might Find Interesting

1. HP-UX

MC/SG Fail to join cluster node

Hi, please advise: I have a two-node cluster configured with MC/SG. Application and DB are running on Node 1, while Node 2 is standby. All the volume group devices are part of the cluster environment. There is only one package running on Node 1. Node 2 is having a problem to... (1 Reply)
Discussion started by: rauphelhunter
1 Reply
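Not a substitute for the replies in that thread, but the usual Serviceguard (MC/SG) sequence for getting a standby node to join a running cluster is short enough to sketch here, assuming hypothetical node names node1 and node2 and that the cluster is already active on node1:

        # On node1: confirm the cluster is up and which nodes have joined
        cmviewcl -v

        # Ask node2 to start its cluster daemon and join the running cluster
        cmrunnode -v node2

        # Verify that node2 now shows as a running cluster member
        cmviewcl -v

If cmrunnode fails, /var/adm/syslog/syslog.log on the joining node is normally the first place to look.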

2. AIX

HACMP 5.4.1 Two-Node-Cluster-Configuration-Assistant fails

This post is just a follow-up for thread https://www.unix.com/aix/115548-hacmp-5-4-aix-5300-10-not-working.html: there was a bug in the clcomdES that would cause the Two-Node-Cluster-Configuration-Assistant to fail even with a correct TCP/IP adapter setup. That affected HACMP 5.4.1 in combination... (0 Replies)
Discussion started by: shockneck
0 Replies

3. Solaris

Cluster interconnect, adapter configuration

Hello, I have a problem with scinstall. I found information that I shouldn't configure the public network before using scinstall. Each node's configuration: 4 nodes, 3 network adapters per node: 1 for the public adapter and 2 for the cluster interconnect. Two switches. After scinstall the first node is... (1 Reply)
Discussion started by: time0ut
1 Reply

4. Solaris

In cluster configuration ora* VGs are not controlled by VCS

Need someone to explain "In cluster configuration ora* VGs are not controlled by VCS". Actually, I am not aware of this. Thanks, Rama (0 Replies)
Discussion started by: ramareddi16
0 Replies

5. Solaris

Sun Cluster configuration issue

I am using VMware Workstation 7 on a Windows XP host. I am trying to configure a two-node Sun Cluster with Solaris 10 x86 guest OSes. I have added one extra virtual LAN adapter on my VMware with another subnet (that I would like to use for Sun Cluster private communication). I have... (0 Replies)
Discussion started by: sanjee
0 Replies

6. UNIX for Advanced & Expert Users

Veritas Cluster automatic fail-back option on Solaris

Hi - please help me to understand the Veritas Cluster failover capability. We configured the Oracle database file system on Veritas Cluster File System and it is automatically failing over from node 1 to node 2. Does Veritas Cluster software have any option to fail back from node 2 to node 1... (6 Replies)
Discussion started by: Mansoor8810
6 Replies
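For readers who land here with the same fail-back question: by default VCS does not move a service group back automatically once the original node recovers; the usual practice is a manual switch. A minimal sketch, assuming a hypothetical service group name oradb_grp and nodes node1/node2:

        # Show where the service group is currently online
        hagrp -state oradb_grp

        # Fail the group back from node2 to node1 once node1 is healthy
        hagrp -switch oradb_grp -to node1

        # Inspect the group's failover-related attribute
        hagrp -display oradb_grp -attribute AutoFailOver

AutoFailOver controls whether VCS fails the group over on a fault at all; it does not make VCS fail back on its own.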

7. Red Hat

Cluster configuration file

Hi everybody, I would like to know the meaning of the following parameters in the cluster.conf file in Linux: <device name="blade_fence" blade="2" option="off"/> and <clusternode name="Blade1-int" votes="1" nodeid="1">. What is meant by "option=off" and "votes=1"? Thanks (1 Reply)
Discussion started by: mastansaheb
1 Reply
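Briefly, and as a hedged reading of that question: votes="1" is the number of quorum votes the node contributes to the cluster, and option="off" is the action the fence agent takes against the blade (power it off rather than reboot it). Two commands that show the effective values on a RHEL 5 cluster node:

        # Votes, expected votes and quorum state as cman sees them
        cman_tool status

        # Cluster membership and service status
        clustat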

8. UNIX and Linux Applications

Configuration of Linux cluster management on Red Hat 5.x server

Hi experts, I have a question regarding Linux cluster management on a Red Hat 5.x server. When I try to install 'luci' or 'ricci' on one of our Linux servers it gives me the error below:- yum install luci Loaded plugins: katello, product-id, rhnplugin, security, subscription-manager Updating... (0 Replies)
Discussion started by: Amey Joshi
0 Replies

9. Emergency UNIX and Linux Support

HP-UX: Help to Change network configuration from APA manual mode (2Gbps) to simple fail over (1Gbps)

Hello HP-UX experts, Server = rx8640 node partition; OS = HP-UX 11.23; arch = IA64; Network switch = Foundry 16-port switch (1Gbps). Existing configuration: tough to explain as it is very messy (see below for the link to a zip of network-related files). 2 x 2Gbps aggregates configured some time... (1 Reply)
Discussion started by: prvnrk
1 Reply

10. Red Hat

Cluster form fail

Why does my cluster form but fail after a few minutes, or why do my multicast communications stop working after a short amount of time? (1 Reply)
Discussion started by: gema.utama
1 Reply
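The classic cause of multicast dying a few minutes after a cluster forms is IGMP snooping on the switch with no IGMP querier, so group membership times out. A hedged way to confirm it with omping, run at the same time on every node (hypothetical names node1 and node2):

        # Each node must be able to resolve the other nodes' names
        omping -c 60 node1 node2

        # Watch the output: if the multicast replies stop after a few minutes
        # while the unicast replies keep flowing, suspect the switch's IGMP
        # snooping/querier settings rather than the cluster software.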
cmruncl(1m)

NAME
       cmruncl - run a high availability cluster

SYNOPSIS
       cmruncl [-f] [-v] [-n node_name...] [-t | -w none]

DESCRIPTION
       cmruncl causes all nodes in a configured cluster, or all nodes specified, to start their
       cluster daemons and form a new cluster.

       To start a cluster, a user must either be superuser (UID=0) or have an access policy of
       FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl(1m).

       This command should only be run when the cluster is not active on any of the configured
       nodes. This command verifies the network configuration before causing the nodes to start
       their cluster daemons. If a cluster is already running on a subset of the nodes, the
       cmrunnode command should be used to start the remaining nodes and force them to join the
       existing cluster.

       If node_name is not specified, the cluster daemons will be started on all the nodes in
       the cluster. All nodes in the cluster must be available for the cluster to start unless a
       subset of nodes is specified.

   Options
       cmruncl supports the following options:

       -f                Force cluster startup without the warning message and continuation
                         prompt that are printed with the -n option.

       -v                Verbose output will be displayed.

       -t                Test only. Provide an assessment of the package placement without
                         affecting the current state of the nodes or packages. The -w option is
                         not required with the -t option, as -t does not validate network
                         connectivity but assumes that all the nodes can meet any external
                         dependencies such as EMS resources, package subnets, and storage.

       -n node_name...   Start the cluster daemon on the specified subset of node(s).

       -w none           By default, network probing is performed to check that the network
                         connectivity is the same as when the cluster was configured. Any
                         anomalies are reported before the cluster daemons are started. The
                         -w none option disables this probing. The option should only be used if
                         the network configuration is known to be correct from a recent check.

RETURN VALUE
       cmruncl returns the following values:

       0    Successful completion.
       1    Command failed.

EXAMPLES
       Run the cluster daemon:

              cmruncl

       Run the cluster daemons on node1 and node2:

              cmruncl -n node1 -n node2

AUTHOR
       cmruncl was developed by HP.

SEE ALSO
       cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).

Requires Optional Serviceguard Software                                             cmruncl(1m)
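To tie the man page back to the cluster-startup threads above, a short hedged usage sketch with hypothetical node names node1 and node2:

        # Dry run: assess package placement without changing node or package state
        cmruncl -t -n node1 -n node2

        # Start the cluster daemons on both nodes with verbose output
        # (with -n you get a confirmation prompt unless -f is also given)
        cmruncl -v -n node1 -n node2

        # Confirm that the cluster formed and packages started where expected
        cmviewcl -v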