SC3.2 issue - cluster transport configuration not right - resulting fail

# 1
12-19-2009

I am trying to set up a two-host cluster. The trouble is with the cluster transport configuration.

I'm using e1000g2 and e1000g3 for the cluster transport. global0 and global1 are my two nodes, and I am running scinstall from global1.

What I think I should be expecting is this:

Code:
The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g3  switch2  global0:e1000g3

but what I am actually getting is this:

Code:
The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g2  switch1  global0:e1000g3

I think it is because this discovery comes out wrong that the cluster does not work; a rough manual check of the cabling is sketched below.
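For reference, this is the kind of check I can run by hand to confirm each transport adapter really sits on its own switch (the 192.168.x test addresses are just placeholders):

Code:
# On global0: temporarily plumb test addresses on the transport adapters
ifconfig e1000g2 plumb 192.168.1.10 netmask 255.255.255.0 up
ifconfig e1000g3 plumb 192.168.2.10 netmask 255.255.255.0 up

# On global1: same subnets, different host part
ifconfig e1000g2 plumb 192.168.1.11 netmask 255.255.255.0 up
ifconfig e1000g3 plumb 192.168.2.11 netmask 255.255.255.0 up

# From global1, each peer address should answer only over its own adapter.
# If 192.168.1.10 also answers via e1000g3 (watch with snoop -d e1000g3),
# the cabling or switch VLANs are crossed, which would explain the
# mis-discovered topology above.
ping 192.168.1.10
ping 192.168.2.10

# Unplumb again before re-running scinstall - autodiscovery expects the
# transport adapters to be unconfigured so it can probe them itself.
ifconfig e1000g2 unplumb
ifconfig e1000g3 unplumb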

I've attached the scinstall log file. Is there any other info I need to add?

I've tried all sorts of combinations. I am using link-based IPMP on e1000g0 and e1000g1 on both of the servers.
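For completeness, the public-side IPMP on each server is the usual link-based setup, along these lines (the address here is a placeholder; only the group membership matters):

Code:
# /etc/hostname.e1000g0 - link-based IPMP, so no test addresses
192.168.10.20 netmask + broadcast + group ipmp0 up

# /etc/hostname.e1000g1 - second adapter in the same IPMP group
group ipmp0 up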
frustin
 

10 More Discussions You Might Find Interesting

1. Red Hat

Cluster form fail

Why does my cluster form but then fail after a few minutes, and why do my multicast communications stop working after a short time? (1 Reply)
Discussion started by: gema.utama

2. Emergency UNIX and Linux Support

HP-UX: Help to change network configuration from APA manual mode (2Gbps) to simple failover (1Gbps)

Hello HP-UX experts, Server = rx8640 Node partition OS = HP-UX 11.23 arch = IA64 Network switch = Foundry 16 port switch (1Gbps) Existing configuration: Tough to explain as it is very messy (see below for the link of zip of network related files). 2 x 2Gbps aggregates configured some time... (1 Reply)
Discussion started by: prvnrk

3. UNIX and Linux Applications

Configuration of Linux cluster management on Red Hat 5.x server

Hi Experts, I have a question regarding Linux cluster management on Red Hat 5.x. When I try to install 'luci' or 'ricci' on one of our Linux servers it gives me the error below: yum install luci Loaded plugins: katello, product-id, rhnplugin, security, subscription-manager Updating... (0 Replies)
Discussion started by: Amey Joshi

4. Red Hat

Cluster configuration file

Hi everybody, I would like to know the meaning of the following parameters in the cluster.conf file in Linux: <device name="blade_fence" blade="2" option="off"/> and <clusternode name="Blade1-int" votes="1" nodeid="1">. What do "option=off" and "votes=1" mean? Thanks (1 Reply) - a sketch of these fragments in context appears after this list.
Discussion started by: mastansaheb

5. UNIX for Advanced & Expert Users

Veritas Cluster automatic fail-back option on Solaris

Hi - Please help me to understand the Veritas Cluster fail-over capability. We configured an Oracle database file system on the Veritas cluster file system and it is automatically failing over from node 1 to node 2. Does the Veritas cluster software have any option to fail back from node 2 to node 1... (6 Replies)
Discussion started by: Mansoor8810

6. Solaris

Sun Cluster configuration issue

I am using VMware Workstation 7 on a Windows XP host. I am trying to configure a two-node Sun Cluster on Solaris 10 x86 guest OSes. I have added one extra virtual LAN adapter in VMware on another subnet (which I would like to use for the Sun Cluster private communication). I have... (0 Replies)
Discussion started by: sanjee

7. Solaris

In cluster configuration ora* VGs are not controlled by VCS

Need someone to explain "In cluster configuration ora* VGs are not controlled by VCS". Actually, I am not aware of this. Thanks, Rama (0 Replies)
Discussion started by: ramareddi16

8. Solaris

Cluster interconnect, adapter configuration

Hello, I have a problem with scinstall. I found information that I shouldn't configure the public network before using scinstall. Each node's configuration: 4 nodes, 3 network adapters per node - 1 for the public network and 2 for the cluster interconnect. Two switches. After scinstall the first node is... (1 Reply)
Discussion started by: time0ut

9. AIX

HACMP 5.4.1 Two-Node-Cluster-Configuration-Assistant fails

This post is just a follow-up for thread https://www.unix.com/aix/115548-hacmp-5-4-aix-5300-10-not-working.html: there was a bug in clcomdES that would cause the Two-Node-Cluster-Configuration-Assistant to fail even with a correct TCP/IP adapter setup. That affected HACMP 5.4.1 in combination... (0 Replies)
Discussion started by: shockneck

10. HP-UX

MC/SG Fail to join cluster node

Hi, Please advise: I have a two-node cluster configured with MC/SG. Application and DB are running on node 1, while node 2 is standby. All the volume group devices are part of the cluster environment. There is only one package running on node 1. Node 2 is having a problem to... (1 Reply)
Discussion started by: rauphelhunter
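On discussion 4 above: a hedged sketch of how those cluster.conf fragments typically fit together - everything except the two quoted elements is an assumption about the surrounding structure. votes="1" means the node contributes one vote to the cluster quorum count; option="off" tells the fence agent to power the blade off rather than reboot it.

Code:
<clusternode name="Blade1-int" votes="1" nodeid="1">
  <fence>
    <method name="1">
      <!-- option="off": fence by powering the blade off -->
      <device name="blade_fence" blade="2" option="off"/>
    </method>
  </fence>
</clusternode>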
PACEMAKER(8)                  System Administration Utilities                  PACEMAKER(8)

NAME
       Pacemaker - Part of the Pacemaker cluster resource manager

SYNOPSIS
       crm_node command [options]

DESCRIPTION
       crm_node - Tool for displaying low-level node information

OPTIONS
       -?, --help
              This text

       -$, --version
              Version information

       -V, --verbose
              Increase debug output

       -Q, --quiet
              Essential output only

   Stack:
       -A, --openais
              Only try connecting to an OpenAIS-based cluster

       -H, --heartbeat
              Only try connecting to a Heartbeat-based cluster

   Commands:
       -n, --name
              Display the name used by the cluster for this node

       -N, --name-for-id=value
              Display the name used by the cluster for the node with the specified id

       -e, --epoch
              Display the epoch during which this node joined the cluster

       -q, --quorum
              Display a 1 if our partition has quorum, 0 if not

       -l, --list
              Display all known members (past and present) of this cluster (not available
              for Heartbeat clusters)

       -p, --partition
              Display the members of this partition

       -i, --cluster-id
              Display this node's cluster id

       -R, --remove=value
              (Advanced) Remove the (stopped) node with the specified name from Pacemaker's
              configuration and caches. In the case of Heartbeat, CMAN and Corosync 2.0,
              this requires that the node has already been removed from the underlying
              cluster.

   Additional Options:
       -f, --force

AUTHOR
       Written by Andrew Beekhof

REPORTING BUGS
       Report bugs to pacemaker@oss.clusterlabs.org

Pacemaker 1.1.10-29.el7                  June 2014                  PACEMAKER(8)
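A few example invocations of crm_node, per the options above (the node name "oldnode" is a placeholder):

Code:
# Show the name the cluster uses for this node
crm_node --name

# Print 1 if this partition has quorum, 0 otherwise
crm_node --quorum

# List all known members (past and present) of the cluster
crm_node --list

# (Advanced) Remove a stopped node from Pacemaker's configuration and caches
crm_node --remove=oldnode --force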