SC3.2 issue - cluster transport configuration not right - resulting fail


 
# 1  
Old 12-19-2009

I am trying to set up a two-host cluster. The trouble is with the cluster transport configuration.

I'm using e1000g2 and e1000g3 for the cluster transport. global0 and global1 are my two nodes, and I am running scinstall from global1.

What I think I should be expecting is this:

Code:
The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g3  switch2  global0:e1000g3

but what I am actually getting is this:

Code:
The following connections were discovered:
global1:e1000g2  switch1  global0:e1000g2
global1:e1000g2  switch1  global0:e1000g3

I think it is because of the incorrect discovery above that the cluster does not work.

I've attached the scinstall logfile. Is there any other info I need to add?

I've tried all sorts of combinations. I am using link-based IPMP on e1000g0 and e1000g1 on both servers.
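
In case it helps, here is a rough sketch of the checks I run on both nodes to confirm that e1000g2 and e1000g3 link to separate switches and carry no IP configuration before rerunning scinstall (this assumes Solaris 10 with dladm and the standard Sun Cluster 3.2 status commands; the interface names are the ones from my setup above):

Code:
# On each node: check link state and speed of the intended transport NICs
dladm show-dev | grep e1000g
# Make sure neither transport NIC already has an IP configuration
ifconfig -a
ls /etc/hostname.e1000g2 /etc/hostname.e1000g3
# After the cluster forms, confirm that two independent private paths exist
scstat -W          # or: clinterconnect status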
frustin
 

scdpm(1M)						  System Administration Commands						 scdpm(1M)

NAME
     scdpm - manage disk path monitoring daemon

SYNOPSIS
     scdpm [-a] {node | all}

     scdpm -f filename

     scdpm -m {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

     scdpm -n {node | all}

     scdpm -p [-F] {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

     scdpm -u {[node | all][:/dev/did/rdsk/]dN | [/dev/rdsk/]cNtXdY | all}

DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

     The scdpm command manages the disk path monitoring daemon in a cluster. You use this command to monitor and unmonitor disk paths. You can also use this command to display the status of disk paths or nodes. All of the accessible disk paths in the cluster or on a specific node are printed on the standard output. You must run this command on a cluster node that is online and in cluster mode.

     You can specify either a global disk name or a UNIX path name when you monitor a new disk path. Additionally, you can force the daemon to reread the entire disk configuration.

     You can use this command only in the global zone.

OPTIONS
     The following options are supported:

     -a            Enables the automatic rebooting of a node when all monitored disk paths fail, provided that the following conditions are met:

                   o  All monitored disk paths on the node fail.

                   o  At least one of the disks is accessible from a different node in the cluster.

                   You can use this option only in the global zone.

                   Rebooting the node restarts all resource and device groups that are mastered on that node on another node. If all monitored disk paths on a node remain inaccessible after the node automatically reboots, the node does not automatically reboot again. However, if any monitored disk paths become available after the node reboots but then all monitored disk paths again fail, the node automatically reboots again.

                   You need solaris.cluster.device.admin role-based access control (RBAC) authorization to use this option. See rbac(5).

     -F            If you specify the -F option with the -p option, scdpm also prints the faulty disk paths in the cluster. The -p option prints the current status of a node or a specified disk path from all the nodes that are attached to the storage.

     -f filename   Reads a list of disk paths to monitor or unmonitor in filename. You can use this option only in the global zone. The following example shows the contents of filename.

                        u schost-1:/dev/did/rdsk/d5
                        m schost-2:all

                   Each line in the file must specify whether to monitor or unmonitor the disk path, the node name, and the disk path name. You specify the m option for monitor and the u option for unmonitor. You must insert a space between the command and the node name. You must also insert a colon (:) between the node name and the disk path name.

                   You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -m            Monitors the new disk path that is specified by node:diskpath. You can use this option only in the global zone.

                   You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -n            Disables the automatic rebooting of a node when all monitored disk paths fail. You can use this option only in the global zone. If all monitored disk paths on the node fail, the node is not rebooted.

                   You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

     -p            Prints the current status of a node or a specified disk path from all the nodes that are attached to the storage. You can use this option only in the global zone. If you also specify the -F option, scdpm prints the faulty disk paths in the cluster.

                   Valid status values for a disk path are Ok, Fail, Unmonitored, or Unknown. The valid status value for a node is Reboot_on_disk_failure. See the description of the -a and the -n options for more information about the Reboot_on_disk_failure status.

                   You need solaris.cluster.device.read RBAC authorization to use this option. See rbac(5).

     -u            Unmonitors a disk path. The daemon on each node stops monitoring the specified path. You can use this option only in the global zone.

                   You need solaris.cluster.device.admin RBAC authorization to use this option. See rbac(5).

EXAMPLES
     Example 1 Monitoring All Disk Paths in the Cluster Infrastructure

     The following command forces the daemon to monitor all disk paths in the cluster infrastructure.

          # scdpm -m all

     Example 2 Monitoring a New Disk Path

     The following command monitors a new disk path. All nodes monitor /dev/did/dsk/d3 where this path is valid.

          # scdpm -m /dev/did/dsk/d3

     Example 3 Monitoring New Disk Paths on a Single Node

     The following command monitors new paths on a single node. The daemon on the schost-2 node monitors paths to the /dev/did/dsk/d4 and /dev/did/dsk/d5 disks.

          # scdpm -m schost-2:d4 -m schost-2:d5

     Example 4 Printing All Disk Paths and Their Status

     The following command prints all disk paths in the cluster and their status.

          # scdpm -p
          schost-1:reboot_on_disk_failure    enabled
          schost-2:reboot_on_disk_failure    disabled
          schost-1:/dev/did/dsk/d4           Ok
          schost-1:/dev/did/dsk/d3           Ok
          schost-2:/dev/did/dsk/d4           Fail
          schost-2:/dev/did/dsk/d3           Ok
          schost-2:/dev/did/dsk/d5           Unmonitored
          schost-2:/dev/did/dsk/d6           Ok

     Example 5 Printing All Failed Disk Paths

     The following command prints all of the failed disk paths on the schost-2 node.

          # scdpm -p -F all
          schost-2:/dev/did/dsk/d4           Fail

     Example 6 Printing the Status of All Disk Paths From a Single Node

     The following command prints the disk path and the status of all disks that are monitored on the schost-2 node.

          # scdpm -p schost-2:all
          schost-2:reboot_on_disk_failure    disabled
          schost-2:/dev/did/dsk/d4           Fail
          schost-2:/dev/did/dsk/d3           Ok
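     A further sketch (illustrative only, not an official example) of driving scdpm from a file with the -f option described above; the /var/tmp/dpm.list path and the schost-* node names are hypothetical.

          # cat /var/tmp/dpm.list
          u schost-1:/dev/did/rdsk/d5
          m schost-2:all
          # scdpm -f /var/tmp/dpm.list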
EXIT STATUS
     The following exit values are returned:

     0    The command completed successfully.

     1    The command failed completely.

     2    The command failed partially.

     Note - The disk path is represented by a node name and a disk name. The node name must be the host name or all. The disk name must be the global disk name, a UNIX path name, or all. The disk name can be either the full global path name or the disk name: /dev/did/dsk/d3 or d3. The disk name can also be the full UNIX path name: /dev/rdsk/c0t0d0s0.

     Disk path status changes are logged with the syslogd LOG_INFO facility level. All failures are logged with the LOG_ERR facility level.
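     As an illustrative sketch (not an official example), a Bourne shell script could act on these exit codes as follows; the command line and messages are arbitrary.

          # Query all monitored disk paths and branch on the documented exit codes.
          scdpm -p all
          case $? in
              0) echo "disk path status query succeeded" ;;
              1) echo "scdpm failed completely" ;;
              2) echo "scdpm failed partially" ;;
          esac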
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     | Availability                | SUNWsczu                    |
     +-----------------------------+-----------------------------+
     | Stability                   | Evolving                    |
     +-----------------------------+-----------------------------+

SEE ALSO
     Intro(1CL), cldevice(1CL), clnode(1CL), attributes(5)

     Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                                               22 Jun 2006                                                        scdpm(1M)