Operating Systems > Solaris

Cluster interconnect, adapter configuration
Post 302550771 by time0ut, 08-27-2011 06:11:57 AM

Hello,

I have a problem with scinstall. I found information saying that I shouldn't configure the public network before running scinstall.

Configuration of each node: 4 nodes, 3 network adapters per node (1 for the public network and 2 for the cluster interconnect), plus two switches.

After scinstall the first node is rebooted, but "ifconfig -a" on the rebooted node shows that the interconnect adapter has the IP address 0.0.0.0 and the virtual adapter is down.

I have tried everything: rm /etc/hostname.*, changing privileges, and configuring the IP addresses by hand (which then gave me the error "The * adapter is using for public network").

ipnodes and hosts are configured for the first public adapter and for ssh communication.
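
For reference, here is a rough sketch of checks that show what the cluster framework thinks the transport looks like after the reboot. The expectations in the comments are assumptions based on a default Sun Cluster 3.2 install; adapter names and the 172.16.x.x private range may differ on your setup.

    # ifconfig -a                              # transport adapters should carry private addresses (by default 172.16.x.x), not 0.0.0.0
    # ls /etc/hostname.*                       # only the public adapter should have a hostname.* file; the transport adapters must not
    # /usr/cluster/bin/clinterconnect status   # per-path state: Path online, waiting, or faulted
    # /usr/cluster/bin/clinterconnect show +   # cables, adapters, and switches as the cluster sees them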

,time0ut
 

10 More Discussions You Might Find Interesting

1. High Performance Computing

Normal (not crossover) cable for Sun cluster interconnect

Hi gurus, has anybody tried a normal cable (I mean not crossover) for the Sun cluster interconnect? What do you think, does it support this? Not long ago I tried Veritas cluster and its interconnects worked great with a normal cable. I wonder what Sun cluster will say to it :)) (3 Replies)
Discussion started by: samar

2. Solaris

SC3.2 issue - cluster transport configuration not right - resulting fail

I am trying to set up a two-host cluster. The trouble is with the cluster transport configuration. I'm using e1000g2 and e1000g3 for the cluster transport. global0 and global1 are my two nodes, and I am running scinstall from global1. What I think I should be expecting is this: The following... (19 Replies)
Discussion started by: frustin

3. AIX

HACMP 5.4.1 Two-Node-Cluster-Configuration-Assistant fails

This post is just a follow-up to thread https://www.unix.com/aix/115548-hacmp-5-4-aix-5300-10-not-working.html: there was a bug in clcomdES that would cause the Two-Node-Cluster-Configuration-Assistant to fail even with a correct TCP/IP adapter setup. That affected HACMP 5.4.1 in combination... (0 Replies)
Discussion started by: shockneck

4. Solaris

In cluster configuration ora* VGs are not controlled by VCS

Need someone to explain "In cluster configuration ora* VGs are not controlled by VCS". Actually, I am not aware of this. Thanks, Rama (0 Replies)
Discussion started by: ramareddi16

5. Solaris

Sun Cluster configuration issue

I am using VMware Workstation 7 on a Windows XP host. I am trying to configure a 2-node Sun Cluster based on Solaris 10 x86 guest OSes. I have added one extra virtual LAN adapter on my VMware with another subnet (that I would like to use for the Sun Cluster private communication). I have... (0 Replies)
Discussion started by: sanjee

6. IP Networking

How to interconnect two Asterisk Servers with a SIP trunk Internationally

Is it possible to set up an Asterisk box in, for example, Colombia S.A. and another in the USA, set up trunks between the boxes so they can speak to each other via SIP or IAX, create extensions, and forward any incoming call on that local... (0 Replies)
Discussion started by: metallica1973

7. Red Hat

Cluster configuration file

Hi everybody, I would like to know the meaning of the following parameters in the cluster.conf file in Linux: <device name="blade_fence" blade="2" option="off" and <clusternode name="Blade1-int" votes="1" nodeid="1">. What is meant by "option=off" and "votes=1"? Thanks (1 Reply)
Discussion started by: mastansaheb

8. UNIX and Linux Applications

Configuration of Linux cluster management on Red Hat 5.x server

Hi experts, I have a question regarding Linux cluster management on a Red Hat 5.x server. When I try to install 'luci' or 'ricci' on one of our Linux servers it gives me the error below: yum install luci Loaded plugins: katello, product-id, rhnplugin, security, subscription-manager Updating... (0 Replies)
Discussion started by: Amey Joshi

9. High Performance Computing

Encrypting interconnect

Hi, I've got a question regarding the interconnect of compute nodes. In our company we are running a simulation cluster which is administered by the simulation department. Now our central IT requires us to encrypt the interconnect of the compute nodes. Does anybody in that business encrypt... (3 Replies)
Discussion started by: fiberkill

10. AIX

Misconfiguration detected Adapter interface name en3 Adapter offset 0

Hi, We had a hardware problem with an IBM System p5 server running AIX 5.2. We restored the last backup we had from a tape, but the server does not boot up as expected. The server tries to mount some directories from a storage array but could not communicate with them; we checked the FC and everything is... (12 Replies)
Discussion started by: trevian3969
clinterconnect(1CL)					 Sun Cluster Maintenance Commands				       clinterconnect(1CL)

NAME
clinterconnect, clintr - manage the Sun Cluster interconnect

SYNOPSIS
    /usr/cluster/bin/clinterconnect -V
    /usr/cluster/bin/clinterconnect [subcommand] -?
    /usr/cluster/bin/clinterconnect subcommand [options] -v [endpoint[,endpoint] ...]
    /usr/cluster/bin/clinterconnect add [-d] endpoint[,endpoint] ...
    /usr/cluster/bin/clinterconnect add -i {- | clconfigfile} [-d] [-n node[,...]] {+ | endpoint[,endpoint] ...}
    /usr/cluster/bin/clinterconnect disable [-n node[,...]] {+ | endpoint[,endpoint] ...}
    /usr/cluster/bin/clinterconnect enable [-n node[,...]] {+ | endpoint[,endpoint] ...}
    /usr/cluster/bin/clinterconnect export [-o {- | configfile}] [-n node[,...]] [+ | endpoint[,endpoint] ...]
    /usr/cluster/bin/clinterconnect remove [-l] endpoint[,endpoint] ...
    /usr/cluster/bin/clinterconnect show [-n node[,...]] [+ | endpoint[,endpoint] ...]
    /usr/cluster/bin/clinterconnect status [-n node[,...]] [+ | endpoint[,endpoint] ...]

DESCRIPTION
The clinterconnect command manages configuration of the cluster interconnect and displays configuration and status information. The clintr command is the short form of the clinterconnect command. The clinterconnect command and the clintr command are identical. You can use either form of the command.

The cluster interconnect consists of two endpoints which are connected with cables. An endpoint can be an adapter on a node or a switch, also called a junction. A cable can connect an adapter and a switch or connect two adapters in certain topologies. The cluster topology manager uses available cables to build end-to-end interconnect paths between nodes.

The names of cluster interconnect components that are supplied to this command should accurately reflect the actual physical configuration. Failure to do so will prevent the system from building end-to-end cluster interconnect paths. This lack of functional cluster interconnects would result in cluster nodes that are unable to communicate with each other, nodes that panic, and similar conditions.

You must run the clinterconnect command from a cluster node that is online and is in cluster mode.

The general form of this command is as follows:

    clinterconnect [subcommand] [options] [operands]

You can omit subcommand only if options specifies the -? option or the -V option.

Each option of this command has a long form and a short form. Both forms of each option are given with the description of the option in the OPTIONS section of this man page.

You can use some forms of this command in a non-global zone, referred to simply as a zone. For more information about valid uses of this command in zones, see the descriptions of the individual subcommands. For ease of administration, use this command in the global zone.
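
As a quick illustration of the general form (a sketch only; the node and adapter names below are placeholders borrowed from the EXAMPLES section):

    # /usr/cluster/bin/clinterconnect status              # report the state of all interconnect paths
    # /usr/cluster/bin/clintr show phys-schost-1:hme0     # the short command name behaves identically
    # /usr/cluster/bin/clinterconnect -V                  # print only the version of the command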
SUBCOMMANDS
The following subcommands are supported:

add

    Adds the new cluster interconnect components that are specified as operands to the command. You can use this subcommand only in the global zone.

    The syntax of the operand determines whether you are adding a cable, a switch, or an adapter. Refer to the OPERANDS section of this man page for more information.

    Use the add subcommand to configure an interconnect cable between an adapter and either an adapter on another node or an interconnect switch. The adapter or switch endpoints that constitute the cable do not need to already exist. You can also use this subcommand to add adapters or switches to the configuration.

    When you add an adapter or a switch to the configuration, the command also enables the adapter or switch. When you add a cable, the command also enables each of the cable's endpoints, if the endpoints are not already enabled. In a two-node cluster, if you add a cable with an adapter at each endpoint, a virtual switch is also created. Use the -d option to add an endpoint in the disabled state.

    If you specify a configuration file with the -i option, you can specify the plus sign (+) as the operand. When you use this operand, the command creates all interconnect components that are specified in the configuration file which do not already exist in the cluster.

    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

    For information about removing interconnect components, see the description of the remove subcommand.

disable

    Disables the interconnect components that are specified as operands to the command. You can use this subcommand only in the global zone.

    The syntax of the operand determines whether you are disabling a cable, a switch, or an adapter. Refer to the OPERANDS section of this man page for more information.

    If you attempt to disable an adapter or a switch that is connected to an enabled cable, the operation results in an error. You must first disable the cable before you attempt to disable the connected adapter or switch.

    When you disable a cable, the command also disables each endpoint that is associated with the cable, which can be an adapter or a switch port. The command also disables the switch if all of the switch ports are in a disabled state.

    If you attempt to disable the cable or an endpoint of the last cluster interconnect path of an active cluster node, the operation results in an error.

    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

    For information about enabling interconnect components, see the description of the enable subcommand.

enable

    Enables the interconnect components that are specified as operands to the command. You can use this subcommand only in the global zone.

    The syntax of the operand determines whether you are enabling a cable, a switch, or an adapter. Refer to the OPERANDS section of this man page for more information.

    When you enable a cable, the command also enables each endpoint that is associated with the cable, which can be an adapter or a switch port.

    For information about disabling interconnect components, see the description of the disable subcommand.

    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

export

    Exports the cluster interconnect configuration information. You can use this subcommand only in the global zone.

    If you supply a file name with the -o option, the configuration information is written to that new file. If you do not use the -o option, the output is written to standard output.

    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

remove

    Removes the cluster interconnect components that are specified as operands to the command. You can use this subcommand only in the global zone.

    The syntax of the operand determines whether you are removing a cable, a switch, or an adapter. Refer to the OPERANDS section of this man page for more information.

    The following behaviors apply when you remove a cable:

    o You must first disable a cable before you can remove the cable.
    o If you attempt to remove a cable that is enabled, the remove operation results in an error.
    o If you remove a disabled cable, the cable's endpoints are also removed except in the following circumstances:
        o The switch is in use by another cable.
        o You also specify the -l option.

    The following behaviors apply when you remove an adapter or switch endpoint:

    o If you remove an endpoint that is not associated with a cable, the specified endpoint is removed.
    o If you attempt to remove an endpoint that is associated with a cable, the remove operation results in an error. This occurs regardless of whether the cable is enabled or disabled.

    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

    For information about adding interconnect components, see the description of the add subcommand.

show

    Displays the configuration of the interconnect components that are specified as operands to the command. You can use this subcommand only in the global zone.

    The configuration information includes whether the component is enabled or disabled. By default, the configuration of all interconnect components is printed. The show subcommand accepts the plus sign (+) as an operand to specify all components.

    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

status

    Displays the status of the interconnect paths. By default, the report displays the status of all interconnect paths in the system.

    You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.

    The following are the possible states of an interconnect path:

    faulted       The interconnect path has encountered an error that prevents it from functioning.
    Path online   The interconnect path is online and is providing service.
    waiting       The interconnect path is in transition to the Path online state.

    To determine whether an interconnect component is enabled or disabled, use the show subcommand.

    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.
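
Because an enabled cable cannot be removed directly, a typical removal runs disable first and then remove; a short sketch (endpoint names are placeholders reused from the EXAMPLES section):

    # clinterconnect disable phys-schost-1:hme0,ether_switch   # disable the cable and its endpoints
    # clinterconnect remove phys-schost-1:hme0,ether_switch    # the remove operation now succeeds
    # clinterconnect show +                                    # confirm what remains configured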
OPTIONS
The following options are supported:

-?
--help

    Displays help information. When this option is used, no other processing is performed. You can use this option either alone or with a subcommand.

    o If you specify this option alone, the list of available subcommands is printed.
    o If you specify this option with a subcommand, the usage options for that subcommand are printed.

-d

    Specifies that the endpoint is added in the disabled state.

-i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

    Specifies configuration information that is to be used for adding or modifying cables. This information must conform to the format that is defined in the clconfiguration(5CL) man page. This information can be contained in a file or supplied through standard input. To specify standard input, supply the minus sign (-) instead of a file name.

    Options that you specify in the command override any options that are set in the cluster configuration file. If required elements are missing from a cluster configuration file, you must specify these elements on the command line.

    You can use the minus sign (-) argument with this option to specify that the configuration is supplied as standard input.

-l
--limited

    Specifies that the cable removal operation removes only the cable but not any of its endpoints. The -l option is only valid with the remove subcommand. If you do not specify this option with the remove subcommand, the command removes the specified cables as well as any associated adapters. In addition, if the cable removal operation removes the last connection to a switch, the command also removes the switch from the configuration.

-n node[,...]
--node=node[,...]
--node node[,...]

    Specifies a node or list of nodes. Use this option to limit the operation to adapters and cables that are attached only to the specified node. You can specify a node either by its node name or by its node ID.

-o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}

    Displays the interconnect configuration in the format that is described by the clconfiguration(5CL) man page. Only the export subcommand accepts the -o option.

    If you supply a file name as the argument to this option, the command creates a new file and the configuration is printed to that file. If a file of the same name already exists, the command exits with an error. No change is made to the existing file.

    If you supply the minus sign (-) as the argument to this option, the command displays the configuration information to standard output. All other standard output for the command is suppressed.

-V
--version

    Displays the version of the command.

    Do not specify this option with subcommands, operands, or other options. The subcommands, operands, or other options are ignored. The -V option only displays the version of the command. No other operations are performed.

-v
--verbose

    Displays verbose messages to standard output. You can use this option with any form of the command.
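
The -o and -i options pair naturally for saving and replaying the interconnect configuration; a sketch (the file name /var/tmp/intr.xml is only a placeholder):

    # clinterconnect export -o /var/tmp/intr.xml   # write the configuration to a new file (fails if the file already exists)
    # clinterconnect add -i /var/tmp/intr.xml +    # recreate any components in that file that do not yet exist in the cluster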
OPERANDS
This command accepts interconnect endpoints or pairs of comma-separated endpoints as operands. An endpoint can be an adapter or a switch. A comma-separated pair of endpoints indicates a cable.

For those forms of the command that accept more than one interconnect component, you can use the plus sign (+) argument to specify all possible components.

The following operands are supported:

node:adapter

    Specifies an adapter endpoint. An adapter endpoint has a node name and an adapter name. The adapter name is constructed from an interconnect name that is immediately followed by a physical-unit number, such as hme0. The node that hosts the adapter does not need to be active in the cluster for these operations to succeed.

    The following types of adapters can be configured as cluster transport adapters:

    Ethernet     You can connect an Ethernet adapter to another Ethernet adapter or to an Ethernet switch.
    InfiniBand   You can connect an InfiniBand adapter only to an InfiniBand switch.
    SCI-PCI      You can connect an SCI adapter only to another SCI adapter or to an SCI switch. The form of the SCI adapter name is sciN, where N is the physical-unit number.

    By default, adapters are configured as using the dlpi transport type.

    To specify a tagged-VLAN adapter, use the tagged-VLAN adapter name that is derived from the physical device name and the VLAN instance number. The VLAN instance number is the VLAN ID multiplied by 1000 plus the original physical-unit number. For example, a VLAN ID of 11 on the physical device ce2 translates to the tagged-VLAN adapter name ce11002.

switch[@port]

    Specifies a switch endpoint. Each interconnect switch name must be unique across the namespace of the cluster. You can use letters, digits, or a combination of both. The first character of the switch name must be a letter.

    If you do not supply a port component for a switch endpoint, the command assumes the default port name. The default port name is equal to the node ID of the node that is attached to the other end of the cable.

    You can configure the following types of switches as cluster transport switches:

    Dolphin SCI   Use the Dolphin SCI switch with SCI-PCI adapters. When you connect an SCI-PCI adapter to a port on an SCI switch, the port name must match the port number that is printed on the SCI-PCI switch. Failure to supply the correct port name results in the same condition as when the cable between an adapter and a switch is removed.
    Ethernet      Use the Ethernet switch with Ethernet adapters.
    InfiniBand    Use the InfiniBand switch with InfiniBand adapters.

    By default, switches are configured as using the switch type.

node:adapter,node:adapter
node:adapter,switch[@port]

    Specifies a cable. A cable is a comma-separated pair of adapter or switch endpoints. The order of endpoints is not important.

    Use cable operands to add a complete cluster interconnect. Because the clinterconnect command automatically creates both endpoints when you add a cable, you do not need to separately create adapter or switch endpoints.
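
The tagged-VLAN naming rule is simple arithmetic; a small shell sketch using the same values as Example 5 below (driver ce, unit 2, VLAN ID 73):

    VLAN_ID=73; DEVICE=ce2
    UNIT=${DEVICE##*[a-z]}                       # physical-unit number, here 2
    DRIVER=${DEVICE%$UNIT}                       # driver name, here ce
    echo "${DRIVER}$((VLAN_ID * 1000 + UNIT))"   # VLAN ID * 1000 + unit number, prints ce73002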
EXIT STATUS
The complete set of exit status codes for all commands in this command set are listed on the Intro(1CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

     0   CL_NOERR    No error
     1   CL_ENOMEM   Not enough swap space
     3   CL_EINVAL   Invalid argument
     6   CL_EACCESS  Permission denied
    35   CL_EIO      I/O error
    36   CL_ENOENT   No such object
    37   CL_EOP      Operation not allowed
    38   CL_EBUSY    Object busy
    39   CL_EEXIST   Object exists
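
Because the exit code reflects the first error, a wrapper script can branch on it directly; a minimal sketch (the cable endpoints are placeholders):

    #!/bin/sh
    # React to the documented clinterconnect exit codes.
    /usr/cluster/bin/clinterconnect add phys-schost-1:hme0,ether_switch
    rc=$?
    case $rc in
        0)  echo "cable added (CL_NOERR)" ;;
        39) echo "object already exists (CL_EEXIST); nothing to do" ;;
        6)  echo "permission denied (CL_EACCESS); check RBAC authorizations" >&2; exit 1 ;;
        *)  echo "clinterconnect failed with exit code $rc" >&2; exit 1 ;;
    esac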
EXAMPLES
Example 1  Creating a Direct-Connect Cluster Interconnect Cable

    The following example shows how to add a cable that connects ports between the adapter hme0 on the node phys-schost-1 and the adapter hme0 on the node phys-schost-2.

    # clinterconnect add phys-schost-1:hme0,phys-schost-2:hme0

Example 2  Creating a Cable Between a Switch and an Adapter

    The following example shows how to add a cable between the adapter hme0 on the node phys-schost-1 and the switch ether_switch.

    # clinterconnect add phys-schost-1:hme0,ether_switch

Example 3  Disabling a Cable

    The following example shows how to disable the cable that is connected between the adapter hme0 on the node phys-schost-1 and the switch ether_switch.

    # clinterconnect disable phys-schost-1:hme0,ether_switch

Example 4  Removing a Cluster Interconnect Cable

    The following example shows how to remove the cable that is connected between the adapter hme0 on the node phys-schost-1 and the switch ether_switch.

    # clinterconnect remove phys-schost-1:hme0,ether_switch

Example 5  Creating a Cable Between a Tagged-VLAN Adapter and a Switch

    The following example shows how to add a cable between the tagged-VLAN adapter ce73002 on the node phys-schost-1 and the VLAN-capable switch switch1. The physical name of the adapter is ce2 and the VLAN ID is 73.

    # clinterconnect add phys-schost-1:ce73002,switch1

Example 6  Enabling a Switch

    The following example shows how to enable the switch endpoint switch1.

    # clinterconnect enable switch1
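
Putting the cable operands to work at a slightly larger scale, here is a sketch of wiring a four-node cluster through two switches, similar to the configuration described in the question at the top of this page (all node, adapter, and switch names are placeholders):

    # Each node contributes one adapter to each of the two transport switches.
    for node in phys-schost-1 phys-schost-2 phys-schost-3 phys-schost-4; do
        clinterconnect add ${node}:hme1,switch1
        clinterconnect add ${node}:hme2,switch2
    done
    clinterconnect status   # each pair of nodes should now show two interconnect paths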
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    | Availability                | SUNWsczu                    |
    +-----------------------------+-----------------------------+
    | Interface Stability         | Evolving                    |
    +-----------------------------+-----------------------------+
SEE ALSO
Intro(1CL), cluster(1CL), clconfiguration(5CL), rbac(5)

Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Sun Cluster Software Installation Guide for Solaris OS
NOTES
The superuser can run all forms of this command.

Any user can run this command with the following options:

    o -? (help) option
    o -V (version) option

To run this command with other subcommands, users other than superuser require RBAC authorizations. See the following table.

    +------------+--------------------------+
    | Subcommand | RBAC Authorization       |
    +------------+--------------------------+
    | add        | solaris.cluster.modify   |
    | disable    | solaris.cluster.modify   |
    | enable     | solaris.cluster.modify   |
    | export     | solaris.cluster.read     |
    | remove     | solaris.cluster.modify   |
    | show       | solaris.cluster.read     |
    | status     | solaris.cluster.read     |
    +------------+--------------------------+

Sun Cluster 3.2                     13 Aug 2007                     clinterconnect(1CL)
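
As a closing practical note, the authorizations in the table above can be granted with standard Solaris RBAC tooling; a hedged sketch (the account name admin1 is a placeholder, and sites may prefer rights profiles over direct authorizations):

    # Read-only access covers the export, show, and status subcommands.
    usermod -A solaris.cluster.read admin1
    # -A replaces the authorization list, so list both to grant modify rights as well.
    usermod -A solaris.cluster.read,solaris.cluster.modify admin1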