clnode(1CL)						 Sun Cluster Maintenance Commands					       clnode(1CL)

NAME
    clnode - manage Sun Cluster nodes
SYNOPSIS
    /usr/cluster/bin/clnode -V

    /usr/cluster/bin/clnode [subcommand] -?

    /usr/cluster/bin/clnode subcommand [options] -v [node ...]

    /usr/cluster/bin/clnode add {-n sponsornode} [-i {- | clconfigfile}] [-c clustername]
         [-G globaldevfs] [-e endpoint,endpoint] node

    /usr/cluster/bin/clnode clear [-F] node ...

    /usr/cluster/bin/clnode evacuate [-T seconds] node

    /usr/cluster/bin/clnode export [-o {- | clconfigfile}] [+ | node ...]

    /usr/cluster/bin/clnode list [+ | node ...]

    /usr/cluster/bin/clnode remove [-n sponsornode] [-G globaldevfs] [-F] [node]

    /usr/cluster/bin/clnode set [-p name=value] [...] {+ | node ...}

    /usr/cluster/bin/clnode show [-p name[,...]] [+ | node ...]

    /usr/cluster/bin/clnode show-rev [node]

    /usr/cluster/bin/clnode status [-m] [+ | node ...]
DESCRIPTION
    This command does the following:

    o  Adds a node to the cluster

    o  Removes a node from the cluster

    o  Attempts to switch over all resource groups and device groups

    o  Modifies the properties of a node

    o  Reports or exports the status and configuration of one or more nodes

    Most of the subcommands for the clnode command operate in cluster mode. You can run most of
    these subcommands from any node in the cluster. However, the add and remove subcommands are
    exceptions. You must run these subcommands in noncluster mode. When you run the add and remove
    subcommands, you must run them on the node that you are adding or removing. The clnode add
    command also initializes the node itself for joining the cluster. The clnode remove command
    also performs cleanup operations on the removed node.

    You can omit subcommand only if options is the -? option or the -V option.

    Each option has a long and a short form. Both forms of each option are given with the
    description of the option in OPTIONS. The clnode command does not have a short form.

    You can use some forms of this command in a non-global zone, referred to simply as a zone. For
    more information about valid uses of this command in zones, see the descriptions of the
    individual subcommands. For ease of administration, use this command in the global zone.
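    For example, a quick way to see the available subcommands and then display the nodes that are
    currently configured is shown below; this is an illustrative sketch that assumes you run it
    from an active cluster node:

    # clnode -?
    # clnode list -v

    Because list operates in cluster mode, you can issue it from any node in the cluster.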
SUBCOMMANDS
    The following subcommands are supported:

    add
        Configures and adds a node to the cluster.

        You can use this subcommand only in the global zone.

        You must run this subcommand in noncluster mode.

        To configure and add the node, you must use the -n sponsornode option. This option
        specifies an existing active node as the sponsor node. The sponsor node is always required
        when you configure nodes in the cluster.

        If you do not specify -c clustername, this subcommand uses the name of the first node that
        you add as the new cluster name.

        The operand node is optional. However, if you specify an operand, it must be the host name
        of the node on which you run the subcommand.

        Note - You must run the claccess command to allow the node to be added to the cluster. See
        the claccess(1CL) man page.

        This subcommand does not install cluster software packages. To add a node and install
        cluster software packages, use the scinstall -a command. See the scinstall(1M) man page.

        Users other than superuser require solaris.cluster.modify role-based access control (RBAC)
        authorization to use this subcommand. See the rbac(5) man page.

    clear
        Cleans up or clears any remaining information about cluster nodes after you run the remove
        subcommand.

        You can use this subcommand only in the global zone.

        Users other than superuser require solaris.cluster.modify RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    evacuate
        Attempts to switch over all resource groups and device groups from the specified node to a
        new set of primary nodes.

        You can use this subcommand only in the global zone.

        The system attempts to select new primary nodes based on configured preferences for each
        group. All evacuated resource groups are not necessarily remastered by the same primary
        node.

        If one or more resource groups or device groups cannot be evacuated from the specified
        node, this subcommand fails. If this subcommand fails, it issues an error message and
        exits with a nonzero exit code. If this subcommand cannot change primary ownership of a
        group to another node, the original node retains primary ownership of that group.

        Users other than superuser require solaris.cluster.admin RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    export
        Exports the node configuration information to a file or to the standard output (stdout).

        You can use this subcommand only in the global zone.

        If you specify the -o option and the name of a file, the configuration information is
        written to that file. If you do not provide the -o option and a file name, the output is
        written to the standard output.

        This subcommand does not modify cluster configuration data.

        Users other than superuser require solaris.cluster.read RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    list
        Displays the names of nodes that are configured in the cluster.

        You can use this subcommand in the global zone or in a non-global zone. For ease of
        administration, use this form of the command in the global zone.

        If you do not specify the node operand, or if you specify the plus sign operand (+), this
        subcommand displays all node members.

        You must run this subcommand in cluster mode.

        Users other than superuser require solaris.cluster.read RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    remove
        Removes a node from the cluster.

        You can use this subcommand only in the global zone.

        You must run this subcommand in noncluster mode.

        To remove a node from a cluster, observe the following guidelines.
        If you do not observe these guidelines, removing the node might compromise quorum in the
        cluster.

        o  Unconfigure the node to be removed from any quorum devices, unless you also specify the
           -F option.

        o  Ensure that the node to be removed is not an active cluster member.

        o  Do not remove a node from a three-node cluster unless at least one shared quorum device
           is configured.

        The subcommand attempts to remove a subset of references to the node from the cluster
        configuration database. If you specify the -F option, this subcommand attempts to remove
        all references to the node from the cluster configuration database.

        Note - You must run the claccess command to allow the node to be removed from the cluster.
        See the claccess(1CL) man page.

        This subcommand does not remove cluster software packages. To remove both a node and
        cluster software packages, use the scinstall -r command. See the scinstall(1M) man page.

        Users other than superuser require solaris.cluster.modify RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    set
        Modifies the properties that are associated with the node that you specify.

        You can use this subcommand only in the global zone.

        See the -p option in OPTIONS.

        Users other than superuser require solaris.cluster.modify RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    show
        Displays the configuration of, or information about the properties on, the specified node
        or nodes.

        You can use this subcommand only in the global zone.

        If you do not specify operands or if you specify the plus sign (+), this subcommand
        displays information for all cluster nodes.

        Users other than superuser require solaris.cluster.read RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    show-rev
        Displays the names of and release information about the Sun Cluster packages that are
        installed on a node.

        You can use this subcommand in the global zone or in a non-global zone. For ease of
        administration, use this form of the command in the global zone.

        You can run this subcommand in noncluster mode and cluster mode. If you run it in
        noncluster mode, you can only specify the name of and get information about the node on
        which you run it. If you run it in cluster mode, you can specify and get information about
        any node in the cluster.

        When you use this subcommand with -v, this subcommand displays the names of packages,
        their versions, and patches that have been applied to those packages.

        Users other than superuser require solaris.cluster.read RBAC authorization to use this
        subcommand. See the rbac(5) man page.

    status
        Displays the status of the node or nodes that you specify, or of Internet Protocol (IP)
        Network Multipathing groups.

        You can use this subcommand in the global zone or in a non-global zone. For ease of
        administration, use this form of the command in the global zone.

        If you do not specify operands or if you specify the plus sign (+), this subcommand
        displays the status of all cluster nodes. The status of a node can be Online or Offline.

        If you specify the -m option with this subcommand, it displays only Solaris IP
        multipathing (IPMP) groups.

        If you specify the verbose option -v with this subcommand, it displays both the status of
        cluster nodes and Solaris IP Network Multipathing groups.

        Users other than superuser require solaris.cluster.read RBAC authorization to use this
        subcommand. See the rbac(5) man page.
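    For example, a common use of the evacuate subcommand described above is to move all resource
    groups and device groups off a node before maintenance. The node name in this sketch is
    hypothetical:

    # clnode evacuate -T 120 phys-schost-3

    The -T 120 setting keeps resource groups from automatically switching back onto phys-schost-3
    for 120 seconds after the evacuation completes.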
OPTIONS
    Note - Both the short and long form of each option is shown in this section.

    The following options are supported:

    -?
    --help
        Displays help information. You can specify this option with or without a subcommand.

        If you do not specify a subcommand, the list of all available subcommands is displayed.

        If you specify a subcommand, the usage for that subcommand is displayed.

        If you specify this option and other options, the other options are ignored.

    -c clustername
    --clustername=clustername
    --clustername clustername
        Specifies the name of the cluster to which you want to add a node.

        Use this option only with the add subcommand.

        If you specify this option, the clustername that you specify must match the name of an
        existing cluster. Otherwise, an error occurs.

    -e endpoint,endpoint
    --endpoint=endpoint,endpoint
    --endpoint endpoint,endpoint
        Specifies transport connections.

        Use this option only with the add subcommand. You specify this option to establish the
        cluster transport topology. You establish the topology by configuring the cables that
        connect the adapters and the switches. You can specify an adapter or a switch as the
        endpoint. To indicate a cable, you specify a comma-separated pair of endpoints. The cable
        establishes a connection from a cluster transport adapter on the current node to one of
        the following:

        o  A port on a cluster transport switch, also called a transport junction.

        o  An adapter on another node that is already included in the cluster.

        If you do not specify the -e option, the add subcommand attempts to configure a default
        cable. However, if you configure more than one transport adapter or switch within one
        instance of the clnode command, clnode cannot construct a default. The default is to
        configure a cable from the singly configured transport adapter to the singly configured,
        or default, transport switch.

        You must always specify two endpoints that are separated by a comma every time you specify
        the -e option. Each pair of endpoints defines a cable. Each individual endpoint is
        specified in one of the following ways:

        o  Adapter endpoint: node:adapter

        o  Switch endpoint: switch[@port]

        To specify a tagged-VLAN adapter, use the tagged-VLAN adapter name that is derived from
        the physical device name and the VLAN instance number. The VLAN instance number is the
        VLAN ID multiplied by 1000 plus the original physical-unit number. For example, a VLAN ID
        of 11 on the physical device ce2 translates to the tagged-VLAN adapter name ce11002.

        If you do not specify a port component for a switch endpoint, a default port is assigned
        for all but SCI switches.

    -F
    --force
        Forcefully removes or clears the specified node without verifying that global mounts
        remain on that node.

        Use this option only with the clear or the remove subcommand.

    -G globaldevfs
    --globaldevfs=globaldevfs
    --globaldevfs globaldevfs
        Specifies either an existing file system or a raw special disk device to use for the
        global devices file system.

        Use this option only with the add or remove subcommand.

        Each cluster node must have a local file system that is mounted globally on
        /global/.devices/node@nodeID before the node can successfully participate as a cluster
        member. However, the node ID is unknown until the clnode command runs. The file system
        that you specify is remounted at /globaldevices. The clnode command attempts to add the
        entry to the vfstab file when the command cannot find a node ID mount. See the vfstab(4)
        man page.

        As a guideline, the file system that you specify must be at least 512 Mbytes in size.
        If this partition or file system is not available or is not large enough, you might need
        to reinstall the Solaris Operating System.

        When used with the remove subcommand, this option specifies the new mount point name to
        use to restore the former /global/.devices mount point. If you do not specify the -G
        option, the mount point is renamed /globaldevices by default.

    -i {- | clconfigfile}
    --input={- | clconfigfile}
    --input {- | clconfigfile}
        Reads node configuration information from a file or from the standard input (stdin). The
        format of the configuration information is described in the clconfiguration(5CL) man page.

        If you specify a file name with this option, this option reads the node configuration
        information in the file. If you specify - with this option, the configuration information
        is read from the standard input (stdin).

    -m
        Specifies IP multipathing (IPMP) groups. Use with the status subcommand to display only
        the status of IPMP groups.

    -n sponsornode
    --sponsornode=sponsornode
    --sponsornode sponsornode
        Specifies the name of the sponsor node.

        You can specify a name or a node identifier for sponsornode. When you add a node to the
        cluster by using the add subcommand, the sponsor node is the first active node that you
        add to the cluster. From that point, that node remains the sponsor node for that cluster.
        When you remove a node by using the remove subcommand, you can specify any active node
        other than the node to be removed as the sponsor node.

        By default, whenever you specify sponsornode with a subcommand, the cluster to which
        sponsornode belongs is the cluster that is affected by that subcommand.

    -o {- | clconfigfile}
    --output={- | clconfigfile}
    --output {- | clconfigfile}
        Writes node configuration information to a file or to the standard output (stdout). The
        format of the configuration information is described in the clconfiguration(5CL) man page.

        If you specify a file name with this option, this option creates a new file. Configuration
        information is then placed in that file.

        If you specify - with this option, the configuration information is sent to the standard
        output (stdout). All other standard output for the command is suppressed.

        You can use this option only with the export subcommand.

    -p name
    --property=name
    --property name
        Specifies the node properties about which you want to display information with the show
        subcommand.

        For information about the properties that you can add or modify with the set subcommand,
        see the description of the -p name=value option.

        You can specify the following properties with this option:

        adapterlist
            This property specifies one or more transport adapters, each separated by a comma (,),
            for a node.

        defaultpsetmin
            This property specifies the minimum number of CPUs that are available in the default
            processor set resource. You can set this property to any value between 1 and the
            number of CPUs on the machine (or machines) on which this property is set.

        globalzoneshares
            This property specifies the number of shares that are assigned to the global zone. You
            can set this property to any value between 1 and 65535, inclusive.

        monitoring
            Values to which you can set this property are enabled and disabled.

        privatehostname
            The private host name is used for IP access of a given node over the private cluster
            interconnect.
            By default, when you add a node to a cluster, this option uses the private host name
            clusternodenodeid-priv.

        reboot_on_path_failure
            Values to which you can set this property are enabled and disabled.

        zprivatehostname
            The zone's private host name is used for IP access of a given zone on a node over the
            private cluster interconnect.

    -p name=value
    --property=name=value
    --property name=value
        Specifies the node properties that you want to add or modify with the set subcommand.
        Multiple instances of -p name=value are allowed.

        For information about the properties about which you can display information with the show
        subcommand, see the description of the -p name option.

        You can modify the following properties with this option:

        adapterlist
            Specifies one or more transport adapters, each separated by a comma (,), for a node.

        defaultpsetmin
            Sets the minimum number of CPUs that are available in the default processor set
            resource.

            The default value is 1 and the minimum value is 1. The maximum value is the number of
            CPUs on the machine (or machines) on which you are setting this property.

        globalzoneshares
            Sets the number of shares that are assigned to the global zone. You can specify a
            value between 1 and 65535, inclusive. To understand this upper limit, see the prctl(1)
            man page for information about the zone.cpu-shares attribute. The default value for
            globalzoneshares is 1.

        monitoring
            Enables or disables disk path monitoring on the specified node. Values to which you
            can set this property are enabled and disabled.

        privatehostname
            Is used for IP access of a given node over the private cluster transport. By default,
            when you add a node to a cluster, this option uses the private host name
            clusternodenodeid-priv.

            Before you modify a private host name, you must disable, on all nodes, all resources
            or applications that use that private host name. See the example titled "Changing the
            Private Hostname" in How to Change the Node Private Host Name in Sun Cluster System
            Administration Guide for Solaris OS.

            Do not store private host names in the hosts database or in any naming services
            database. See the hosts(4) man page. A special nsswitch command performs all host name
            lookups for private host names. See the nsswitch.conf(4) man page.

            If you do not specify a value, this option uses the default private host name
            clusternodenodeid-priv.

        reboot_on_path_failure
            Enables the automatic rebooting of a node when all monitored disk paths fail, provided
            that the following conditions are met:

            o  All monitored disk paths on the node fail.

            o  At least one of the disks is accessible from a different node in the cluster.

            You can use only the set subcommand to modify this property. You can set this property
            to enabled or to disabled.

            Rebooting the node restarts all resource groups and device groups that are mastered on
            that node on another node.

            If all monitored disk paths on a node remain inaccessible after the node automatically
            reboots, the node does not automatically reboot again. However, if any monitored disk
            paths become available after the node reboots but then all monitored disk paths again
            fail, the node automatically reboots again.

            If you set this property to disabled and all monitored disk paths on the node fail,
            the node does not reboot.

        zprivatehostname
            Does the following:

            o  Assigns an IP address to a local zone and plumbs this IP address over the private
               cluster interconnect.

            o  Changes an IP address for a local zone and plumbs this IP address over the private
               cluster interconnect.
            o  Frees an IP address that is assigned to a local zone, unplumbs it, and makes it
               available for use elsewhere.

            You specify a value as follows:

                zprivatehostname=[hostalias] node:zone

            hostalias
                Provides the host alias to be used for accessing a zone on a node over the private
                cluster interconnect. If a host alias does not exist for the zone on the node,
                specifying this value creates a new host alias. If a host alias already exists,
                specifying this value changes the existing host alias to the new host alias that
                you specify.

                If you do not specify a hostalias, the host alias that is assigned to node:zone is
                freed for use elsewhere.

            node:zone
                Provides the name or ID of a zone on a node to be assigned the private host name
                or host alias.

            Before you modify a private host name or alias, you must disable, on all nodes, all
            resources or applications that use that private host name or alias. See the example
            titled "Changing the Private Hostname" in How to Change the Node Private Host Name in
            Sun Cluster System Administration Guide for Solaris OS.

            If you do not specify a value, this option uses the default private host name
            clusternodenodeid-priv.

    -T seconds
    --time=seconds
    --time seconds
        Specifies the number of seconds to keep resource groups from switching back onto a node
        after you have evacuated resource groups from the node.

        You can use this option only with the evacuate subcommand. You must specify an integer
        value between 0 and 65535 for seconds. If you do not specify a value, 60 seconds is used
        by default.

        Resource groups cannot fail over or automatically switch over onto the node while that
        node is being evacuated. This option also specifies that after a node is evacuated,
        resource groups cannot fail over or automatically switch over for seconds seconds. You
        can, however, initiate a switchover onto the evacuated node by using the clresourcegroup
        command before continue_evac seconds have passed. Only automatic switchovers are
        prevented. See the clresourcegroup(1CL) man page.

    -v
    --verbose
        Displays verbose information on the standard output (stdout).

    -V
    --version
        Displays the version of the command.

        If you specify this option with other options, with subcommands, or with operands, they
        are all ignored. Only the version of the command is displayed. No other processing occurs.
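    For example, the -p name=value option described above is combined with the set subcommand and
    a node operand as in the following sketch; the node name is hypothetical:

    # clnode set -p reboot_on_path_failure=enabled -p defaultpsetmin=2 phys-schost-1

    This enables automatic rebooting on complete disk-path failure and reserves at least two CPUs
    for the default processor set on phys-schost-1.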
OPERANDS
    The following operands are supported:

    node
        The name of the node that you want to manage.

        When you use the add subcommand, you specify the host name for node. When you use another
        subcommand, you specify the node name or node identifier for node.

    +
        All nodes in the cluster.
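    For example, the following two commands (with a hypothetical node name) report the status of a
    single named node and of every node in the cluster, respectively:

    # clnode status phys-schost-1
    # clnode status +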
EXIT STATUS
    The complete set of exit status codes for all commands in this command set is listed on the
    Intro(1CL) man page.

    If the command is successful for all specified operands, it returns zero (CL_NOERR). If an
    error occurs for an operand, the command processes the next operand in the operand list. The
    returned exit code always reflects the error that occurred first.

    This command returns the following exit status codes:

    0 CL_NOERR
        No error

        The command that you issued completed successfully.

    1 CL_ENOMEM
        Not enough swap space

        A cluster node ran out of swap memory or ran out of other operating system resources.

    3 CL_EINVAL
        Invalid argument

        You typed the command incorrectly, or the syntax of the cluster configuration information
        that you supplied with the -i option was incorrect.

    6 CL_EACCESS
        Permission denied

        The object that you specified is inaccessible. You might need superuser or RBAC access to
        issue the command. See the su(1M) and rbac(5) man pages for more information.

    15 CL_EPROP
        Invalid property

        The property or value that you specified with the -p, -y, or -x option does not exist or
        is not allowed.

    35 CL_EIO
        I/O error

        A physical input/output error has occurred.

    36 CL_ENOENT
        No such object

        The object that you specified cannot be found for one of the following reasons:

        o  The object does not exist.

        o  A directory in the path to the configuration file that you attempted to create with the
           -o option does not exist.

        o  The configuration file that you attempted to access with the -i option contains errors.

    37 CL_EOP
        Operation not allowed

        You tried to perform an operation on an unsupported configuration, or you performed an
        unsupported operation.
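    For example, a shell script can test the exit code after running the command; the node name in
    this sketch is hypothetical, and a value of 0 (CL_NOERR) indicates success:

    # clnode status phys-schost-1
    # echo $?

    Any nonzero value reflects the first error that the command encountered, as described above.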
EXAMPLES
    Example 1  Adding a Node to a Cluster

    The following command configures and adds the node on which you run the command into an
    existing cluster. By default, this example uses /globaldevices as the global devices mount
    point. By default, this example also uses clusternode1-priv as the private host name.

    This command names the cluster cluster-1 and specifies that the sponsor node is phys-schost-1.
    This command also specifies that adapter qfe1 is attached to transport switch switch1.
    Finally, this command specifies that adapter qfe2 is attached to transport switch switch2.

    # clnode add -c cluster-1 -n phys-schost-1 \
          -e phys-schost-2:qfe1,switch1 -e phys-schost-2:qfe2,switch2

    Example 2  Removing a Node From a Cluster

    The following command removes a node from a cluster. This command removes the node on which
    you run this command. The node is in noncluster mode.

    # clnode remove

    Example 3  Changing the Private Host Name That Is Associated With a Node

    The following command changes the private host name for node phys-schost-1 to the default
    setting.

    # clnode set -p privatehostname= phys-schost-1

    Example 4  Changing Private Host Name Settings for All Nodes

    The following command changes the private host name settings for all nodes to default values.
    In this case, you must insert a space between the equal sign (=) and the plus sign (+) to
    indicate that the + is the plus sign operand.

    # clnode set -p privatehostname= +

    Example 5  Removing the Private Host Name for a Zone That Is Associated With a Node

    The following command disables or removes the private host name for zone dev-zone, which is
    associated with node phys-schost-1.

    # clnode set -p zprivatehostname= phys-schost-1:dev-zone

    Example 6  Displaying the Status of All Nodes in a Cluster

    The following command displays the status of all nodes in a cluster.

    # clnode status

    === Cluster Nodes ===

    --- Node Status ---

    Node Name                 Status
    ---------                 ------
    phys-schost-1             Online
    phys-schost-2             Online

    Example 7  Displaying the Verbose Status of All Nodes in a Cluster

    The following command displays the verbose status of all nodes in a cluster.

    # clnode status -v

    === Cluster Nodes ===

    --- Node Status ---

    Node Name                 Status
    ---------                 ------
    phys-schost-1             Online
    phys-schost-2             Online

    --- Node IPMP Group Status ---

    Group Name       Node Name           Adapter      Status
    ----------       ---------           -------      ------
    sc_ipmp0         phys-schost-1       hme0         Online
    sc_ipmp0         phys-schost-2       hme0         Online

    Example 8  Displaying Configuration Information About All Nodes in a Cluster

    The following command displays configuration information about all nodes in a cluster.

    # clnode show

    === Cluster Nodes ===

    Node Name:                       phys-schost-1
      Node ID:                         1
      Enabled:                         yes
      privatehostname:                 clusternode1-priv
      reboot_on_path_failure:          disabled
      globalzoneshares:                1
      defaultpsetmin:                  1
      quorum_vote:                     1
      quorum_defaultvote:              1
      quorum_resv_key:                 0x4487349A00000001
      Transport Adapter List:          ce2, bge2

    Node Name:                       phys-schost-2
      Node ID:                         2
      Enabled:                         yes
      privatehostname:                 clusternode2-priv
      reboot_on_path_failure:          disabled
      globalzoneshares:                1
      defaultpsetmin:                  1
      quorum_vote:                     1
      quorum_defaultvote:              1
      quorum_resv_key:                 0x4487349A00000002
      Transport Adapter List:          ce2, bge2

    Example 9  Displaying Configuration Information About a Particular Node in a Cluster

    The following command displays configuration information about phys-schost-1 in a cluster.
    # clnode show phys-schost-1

    === Cluster Nodes ===

    Node Name:                       phys-schost-1
      Node ID:                         1
      Enabled:                         yes
      privatehostname:                 clusternode1-priv
      reboot_on_path_failure:          disabled
      globalzoneshares:                1
      defaultpsetmin:                  1
      quorum_vote:                     1
      quorum_defaultvote:              1
      quorum_resv_key:                 0x4487349A00000001
      Transport Adapter List:          ce2, bge2

    Example 10  Displaying Configuration Information About All Nodes in a Cluster With Zones

    The following command displays configuration information about all nodes in a cluster with
    zones.

    # clnode show

    === Cluster Nodes ===

    Node Name:                       phys-schost-1
      Node ID:                         1
      Enabled:                         yes
      ...
      Zone List:                       phys-schost-1:lzphys-schost-1a
                                       phys-schost-1:zone1
      Transport Adapter List:          hme1, hme3

    --- Zones for phys-schost-1 ---

    Zone Name:                       phys-schost-1:lzphys-schost-1a
      zprivatehostname:                priv_zone1a

    Zone Name:                       phys-schost-1:zone1
      zprivatehostname:                priv_zone_1
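    As a further illustrative sketch, the following invocation of the show-rev subcommand lists
    the installed Sun Cluster packages, their versions, and applied patches on the node where you
    run it; the exact output depends on the installation:

    # clnode show-rev -v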
ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    |Availability                 |SUNWsczu                     |
    +-----------------------------+-----------------------------+
    |Interface Stability          |Evolving                     |
    +-----------------------------+-----------------------------+
SEE ALSO
    prctl(1), claccess(1CL), clresourcegroup(1CL), cluster(1CL), Intro(1CL), newfs(1M),
    scinstall(1M), su(1M), hosts(4), nsswitch.conf(4), vfstab(4), attributes(5), rbac(5),
    clconfiguration(5CL)

    The example that describes how to change the private hostname in Overview of Administering the
    Cluster in Sun Cluster System Administration Guide for Solaris OS
NOTES
    The superuser can run all forms of this command.

    All users can run this command with the -? (help) or -V (version) option.

    To run the clnode command with subcommands, users other than superuser require RBAC
    authorizations. See the following table.

    +-----------+------------------------+
    |Subcommand | RBAC Authorization     |
    +-----------+------------------------+
    |add        | solaris.cluster.modify |
    +-----------+------------------------+
    |clear      | solaris.cluster.modify |
    +-----------+------------------------+
    |evacuate   | solaris.cluster.admin  |
    +-----------+------------------------+
    |export     | solaris.cluster.read   |
    +-----------+------------------------+
    |list       | solaris.cluster.read   |
    +-----------+------------------------+
    |remove     | solaris.cluster.modify |
    +-----------+------------------------+
    |set        | solaris.cluster.modify |
    +-----------+------------------------+
    |show       | solaris.cluster.read   |
    +-----------+------------------------+
    |show-rev   | solaris.cluster.read   |
    +-----------+------------------------+
    |status     | solaris.cluster.read   |
    +-----------+------------------------+

Sun Cluster 3.2                            24 Sep 2007                                 clnode(1CL)