12-22-2010
garskoci,
Yes, that was my plan: run one Sun Cluster and keep the other VCS cluster running in parallel. The purpose is safety, keeping the old VCS configuration in case the Sun Cluster setup is a bust. But I have to agree with you that this may create many problems, including new unforeseen ones, and is riskier than anticipated.
I think your approach of having two single-node clusters is a better idea and less complex.
thanx for your advice (=
Last edited by sparcguy; 12-22-2010 at 08:08 PM..
cmruncl(1m)
NAME
cmruncl - run a high availability cluster
SYNOPSIS
cmruncl [-f] [-v] [-n node_name...] [-t | -w none]
DESCRIPTION
cmruncl causes all nodes in a configured cluster or all nodes specified to start their cluster daemons and form a new cluster.
To start a cluster, a user must either be superuser (UID=0), or have an access policy of FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl(1m).
This command should only be run when the cluster is not active on any of the configured nodes. This command verifies the network configuration before causing the nodes to start their cluster daemons. If a cluster is already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.
If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes in the cluster must be
available for the cluster to start unless a subset of nodes is specified.
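The choice between cmruncl and cmrunnode described above can be sketched as a small shell helper. This is an illustration, not from the man page itself: the start_or_join function name is our own, and detecting an active cluster by grepping cmviewcl output for "up" is an assumption; check your cmviewcl output format before relying on it.

```shell
# start_or_join: sketch of the rule above. If cluster daemons are already
# active somewhere, the extra node must join with cmrunnode; if nothing
# is running, cmruncl forms a new cluster on the named node.
start_or_join() {
    node=$1
    if cmviewcl 2>/dev/null | grep -q up; then
        cmrunnode "$node"       # a cluster is active: join it
    else
        cmruncl -n "$node"      # no active cluster: form a new one
    fi
}
```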
Options
cmruncl supports the following options:
-f Force cluster startup without the warning message and continuation prompt that are printed with the -n option.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages.
The -w option is not required with the -t option as -t does not validate network connectivity, but assumes that all the
nodes can meet any external dependencies such as EMS resources, package subnets, and storage.
-n node_name...
Start the cluster daemon on the specified subset of node(s).
-w none By default, network probing is performed to check that the network connectivity is the same as when the cluster was configured. Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing. The option should only be used if this network configuration is known to be correct from a recent check.
RETURN VALUE
cmruncl returns the following value:
0 Successful completion.
1 Command failed.
EXAMPLES
Run the cluster daemon:
cmruncl
Run the cluster daemons on node1 and node2:
cmruncl -n node1 -n node2
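In a script, the return values above can drive the next step. A hedged sketch: the start_cluster wrapper name is our own, and only the documented exit codes (0 for success, 1 for failure) are assumed.

```shell
# start_cluster: run cmruncl and act on its documented exit status
# (0 = successful completion, 1 = command failed).
start_cluster() {
    cmruncl -v "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "cluster started"
    else
        echo "cmruncl failed (exit $status)" >&2
    fi
    return "$status"
}
```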
AUTHOR
cmruncl was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software cmruncl(1m)