SC3.2 issue - cluster transport configuration not right, resulting in failure

I am trying to set up a two-host cluster. The trouble is with the cluster transport configuration.

I'm using e1000g2 and e1000g3 for the cluster transport. global0 and global1 are my two nodes, and I am running scinstall from global1.

What I think I should be expecting is this:

Code:
The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g3  switch2  global0:e1000g3

But what I am actually getting is this:

Code:
 The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g2  switch1  global0:e1000g3

I think it is because of the failure above that the cluster does not work.

I've attached the scinstall logfile. Is there any other info I need to add?

I've tried all sorts of combinations. I am using link-based IPMP on e1000g0 and e1000g1 on both of the servers.
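
For reference, here is a minimal sketch of how the physical wiring of the transport adapters could be double-checked (standard Solaris 10 commands; the adapter names are the ones from the setup above, and the two cluster commands at the end are only meaningful after an install attempt):

Code:
# Link state and speed of the intended transport adapters, run on each node
dladm show-dev e1000g2
dladm show-dev e1000g3

# The transport adapters should not already be plumbed when scinstall runs
ifconfig -a

# Watch traffic on each adapter to see which switch/segment it is really cabled to
snoop -d e1000g2
snoop -d e1000g3

# After an install attempt, see what the cluster thinks the transport paths are
scstat -W
clinterconnect status

If autodiscovery keeps pairing both of global0's adapters through switch1, scinstall's custom mode also lets the transport adapters and switches be named explicitly instead of relying on autodiscovery.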
frustin
 

10 More Discussions You Might Find Interesting

1. HP-UX

MC/SG Fail to join cluster node

Hi, please advise: I have a two-node cluster server configured with MC/SG. The application and DB are running on Node 1, while Node 2 is standby. All the volume group devices are part of the cluster environment. There is only one package running on Node 1. Node 2 is having the problem to... (1 Reply)
Discussion started by: rauphelhunter
1 Reply

2. AIX

HACMP 5.4.1 Two-Node-Cluster-Configuration-Assistant fails

This post is just a follow-up for thread https://www.unix.com/aix/115548-hacmp-5-4-aix-5300-10-not-working.html: there was a bug in the clcomdES that would cause the Two-Node-Cluster-Configuration-Assistant to fail even with a correct TCP/IP adapter setup. That affected HACMP 5.4.1 in combination... (0 Replies)
Discussion started by: shockneck
0 Replies

3. Solaris

Cluster interconnect, adapter configuration

Hello, I have a problem with scinstall. I found information that I shouldn't configure the public network before using scinstall. Each node configuration: 4 nodes, 3 network adapters on each node, 1 for the public network and 2 for the cluster interconnect. Two switches. After scinstall the first node is... (1 Reply)
Discussion started by: time0ut
1 Reply

4. Solaris

In cluster configuration ora* VGs are not controlled by VCS

Need someone to explain "In cluster configuration ora* VGs are not controlled by VCS". Actually, I am not aware of this. Thanks, Rama (0 Replies)
Discussion started by: ramareddi16
0 Replies

5. Solaris

Sun Cluster configuration issue

I am using VMware Workstation 7 on a Windows XP host. I am trying to configure a two-node Sun Cluster with Solaris 10 x86 guest OSes. I have added one extra virtual LAN adapter in VMware on another subnet (which I would like to use for Sun Cluster private communication). I have... (0 Replies)
Discussion started by: sanjee
0 Replies

6. UNIX for Advanced & Expert Users

Veritas Cluster automatic fail-back option on Solaris

Hi - please help me to understand the Veritas Cluster fail-over capability. We configured an Oracle database file system on the Veritas cluster file system and it automatically fails over from node 1 to node 2. Does the Veritas Cluster software have any option to fail back from node 2 to node 1... (6 Replies)
Discussion started by: Mansoor8810
6 Replies

7. Red Hat

Cluster configuration file

Hi everybody, I would like to know the meaning of the following parameters in the cluster.conf file in Linux: <device name="blade_fence" blade="2" option="off" and <clusternode name="Blade1-int" votes="1" nodeid="1">. What do "option=off" and "votes=1" mean? Thanks (1 Reply)
Discussion started by: mastansaheb
1 Reply

8. UNIX and Linux Applications

Configuration of Linux cluster management on Red Hat 5.x server

Hi experts, I have a question regarding Linux cluster management on a Red Hat 5.x server. When I try to install 'luci' or 'ricci' on one of our Linux servers it gives me the error below: yum install luci Loaded plugins: katello, product-id, rhnplugin, security, subscription-manager Updating... (0 Replies)
Discussion started by: Amey Joshi
0 Replies

9. Emergency UNIX and Linux Support

HP-UX: Help to Change network configuration from APA manual mode (2Gbps) to simple fail over (1Gbps)

Hello HP-UX experts. Server = rx8640 node partition, OS = HP-UX 11.23, arch = IA64, network switch = Foundry 16-port switch (1Gbps). Existing configuration: tough to explain as it is very messy (see below for the link to a zip of the network-related files). 2 x 2Gbps aggregates configured some time... (1 Reply)
Discussion started by: prvnrk
1 Reply

10. Red Hat

Cluster form fail

Why does my cluster form but fail after a few minutes, or why do my multicast communications stop working after a short amount of time? (1 Reply)
Discussion started by: gema.utama
1 Reply
cmdeleteconf(1m)														  cmdeleteconf(1m)

NAME
       cmdeleteconf - Delete either the cluster or the package configuration

SYNOPSIS
       cmdeleteconf [-f] [-v] [-c cluster_name] [[-p package_name]...]

DESCRIPTION
       cmdeleteconf deletes either the entire cluster configuration, including all its packages, or only the specified package
       configuration. If neither cluster_name nor package_name is specified, cmdeleteconf will delete the local cluster's configuration
       and all its packages. If the local node's cluster configuration is outdated, cmdeleteconf without any argument will only delete
       the local node's configuration. If only the package_name is specified, the configuration of package_name in the local cluster is
       deleted. If both cluster_name and package_name are specified, the package must be configured in the cluster_name, and only the
       package package_name will be deleted. cmdeleteconf with only cluster_name specified will delete the entire cluster configuration
       on all the nodes in the cluster, regardless of the configuration version. The local cluster is the cluster that the node running
       the cmdeleteconf command belongs to.

       Only a superuser, whose effective user ID is zero (see id(1) and su(1)), can delete the configuration.

       To delete the cluster configuration, halt the cluster first. To delete a package configuration you must halt the package first,
       but you do not need to halt the cluster (it may remain up or be brought down). To delete the package VxVM-CVM-pkg (HP-UX only),
       you must first delete all packages with STORAGE_GROUP defined.

       While deleting the cluster, if any of the cluster nodes are powered down, the user can choose to continue deleting the
       configuration. In this case, the cluster configuration on the down node will remain in place and, therefore, be out of sync with
       the rest of the cluster. If the powered-down node ever comes up, the user should execute the cmdeleteconf command with no
       argument on that node to clean up the configuration before doing any other Serviceguard command.

   Options
       cmdeleteconf supports the following options:

       -f                Force the deletion of either the cluster configuration or the package configuration.

       -v                Verbose output will be displayed.

       -c cluster_name   Name of the cluster to delete. The cluster must be halted already, if intending to delete the cluster.

       -p package_name   Name of an existing package to delete from the cluster. The package must be halted already. There should not
                         be any packages in the cluster with STORAGE_GROUP defined before having a package_name of VxVM-CVM-pkg
                         (HP-UX only).

RETURN VALUE
       Upon completion, cmdeleteconf returns one of the following values:

       0    Successful completion.
       1    Command failed.

EXAMPLES
       The high availability environment contains the cluster clusterA and a package pkg1.

       To delete package pkg1 in clusterA, do the following:

              cmdeleteconf -f -c clusterA -p pkg1

       To delete the cluster clusterA and all its packages, do the following:

              cmdeleteconf -f -c clusterA

AUTHOR
       cmdeleteconf was developed by HP.

SEE ALSO
       cmcheckconf(1m), cmapplyconf(1m), cmgetconf(1m), cmmakepkg(1m), cmquerycl(1m).

                                  Requires Optional Serviceguard Software                                  cmdeleteconf(1m)
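
As a quick illustration of the halt-before-delete ordering described above, here is a minimal sketch using standard Serviceguard commands (the cluster and package names clusterA and pkg1 are taken from the EXAMPLES section; adapt them to your environment):

Code:
cmviewcl                              # check the current cluster and package state
cmhaltpkg pkg1                        # a package must be halted before its configuration is deleted
cmdeleteconf -f -c clusterA -p pkg1   # delete only pkg1 from clusterA

cmhaltcl -f                           # the cluster must be halted before deleting the whole configuration
cmdeleteconf -f -c clusterA           # delete clusterA and all of its packages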