cluster(1CL)						 Sun Cluster Maintenance Commands					      cluster(1CL)

NAME
cluster - manage the global configuration and status of a cluster

SYNOPSIS
       /usr/cluster/bin/cluster -V
       /usr/cluster/bin/cluster [subcommand] -?
       /usr/cluster/bin/cluster subcommand [options] -v [clustername ...]
       /usr/cluster/bin/cluster create -i {- | clconfigfile} [clustername]
       /usr/cluster/bin/cluster export [-o {- | configfile}] [-t objecttype[,...]] [clustername]
       /usr/cluster/bin/cluster list [clustername]
       /usr/cluster/bin/cluster list-cmds [clustername]
       /usr/cluster/bin/cluster rename -c newclustername [clustername]
       /usr/cluster/bin/cluster restore-netprops [clustername]
       /usr/cluster/bin/cluster set {-p name=value} [-p name=value] [...] [clustername]
       /usr/cluster/bin/cluster set-netprops {-p name=value} [-p name=value] [...] [clustername]
       /usr/cluster/bin/cluster show [-t objecttype[,...]] [clustername]
       /usr/cluster/bin/cluster show-netprops [clustername]
       /usr/cluster/bin/cluster shutdown [-y] [-g graceperiod] [-m message] [clustername]
       /usr/cluster/bin/cluster status [-t objecttype[,...]] [clustername]

DESCRIPTION
The cluster command displays and manages cluster-wide configuration and status information. This command can also shut down a cluster.

Almost all subcommands that you use with the cluster command operate in cluster mode, and you can run them from any node in the cluster. The create, set-netprops, and restore-netprops subcommands are exceptions: you must run these subcommands in noncluster mode.

You can omit subcommand only if options specifies the -? option or the -V option.

The cluster command does not have a short form. Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.

You can use some forms of this command in a non-global zone, referred to simply as a zone. For more information about valid uses of this command in zones, see the descriptions of the individual subcommands. For ease of administration, use this command in the global zone.

SUBCOMMANDS
The following subcommands are supported:

create
    Creates a new cluster by using configuration information that is stored in a clconfigfile file. The format of this configuration information is described in the clconfiguration(5CL) man page.
    You can use this subcommand only in the global zone. You must run this subcommand in noncluster mode, and from a host that is not already configured as part of a cluster. Sun Cluster software must already be installed on every node that is going to be a part of the cluster.
    If you do not specify a cluster name, the name of the cluster is taken from the clconfigfile file.
    Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See the rbac(5) man page.

export
    Exports the configuration information. You can use this subcommand only in the global zone.
    If you specify a file with the -o option, the configuration information is written to that file. If you do not specify the -o option, the output is written to the standard output (stdout).
    The following option limits the information that is exported:

    -t objecttype[,...]
        Exports configuration information only for components that are of the specified types.

    You can export configuration information only for the cluster on which you issue the cluster command. If you specify the name of a cluster other than the one on which you issue the cluster command, this subcommand fails.
    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

list
    Displays the name of the cluster. You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

list-cmds
    Prints a list of all available Sun Cluster commands.
    You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

rename
    Renames the cluster. You can use this subcommand only in the global zone.
    Use the -c option with this subcommand to specify a new name for the cluster.
    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

restore-netprops
    Resets the cluster private network settings of the cluster. You can use this subcommand only in the global zone. You must run this subcommand in noncluster mode.
    Use this subcommand only when the set-netprops subcommand fails and the following conditions exist:

    o You are attempting to modify the private network properties.
    o The failure indicates an inconsistent cluster configuration on the nodes.

    In this situation, you need to run the restore-netprops subcommand on every node in the cluster. This subcommand repairs the cluster configuration. It also removes inconsistencies that are caused by the failure of the modification of the IP address range. In case of a failure, any attempts that you make to change the configuration settings are not guaranteed to work.
    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

set
    Modifies the properties of the cluster. You can use this subcommand only in the global zone.
    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

set-netprops
    Modifies the private network properties. You can use this subcommand only in the global zone. You must run this subcommand in noncluster mode.
    Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.
    See the rbac(5) man page.

show
    Displays detailed configuration information about cluster components. You can use this subcommand only in the global zone.
    The following option limits the information that is displayed:

    -t objecttype[,...]
        Displays configuration information only for components that are of the specified types.

    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

show-netprops
    Displays information about the private network properties of the cluster. You can use this subcommand only in the global zone.
    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

shutdown
    Shuts down the cluster in an orderly fashion. You can use this subcommand only in the global zone.
    If you provide the name of a cluster other than the cluster on which you issue the cluster command, this subcommand fails. Run this subcommand from only one node in the cluster.
    This subcommand performs the following actions:

    o Takes offline all functioning resource groups in the cluster. If any transitions fail, this subcommand does not complete and displays an error message.
    o Unmounts all cluster file systems. If an unmount fails, this subcommand does not complete and displays an error message.
    o Shuts down all active device services. If any transition of a device fails, this subcommand does not complete and displays an error message.
    o Halts all nodes.

    Before this subcommand starts to shut down the cluster, it issues a warning message on all nodes. After issuing the warning, this subcommand issues a final message that prompts you to confirm that you want to shut down the cluster. To prevent this final message from being issued, use the -y option.
    By default, the shutdown subcommand waits 60 seconds before it shuts down the cluster. You can use the -g option to specify a different delay time. To specify a message string to appear with the warning, use the -m option.
    Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

status
    Displays the status of cluster components. You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
    The option -t objecttype[,...] displays status information only for components that are of the specified types.
    Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

OPTIONS
The following options are supported:

Note - Both the short and the long form of each option are shown in this section.

-?
--help
    Displays help information. You can specify this option with or without a subcommand.
    If you do not specify a subcommand, the list of all available subcommands is displayed. If you specify a subcommand, the usage for that subcommand is displayed.
    If you specify this option and other options, the other options are ignored.

-c newclustername
--newclustername=newclustername
--newclustername newclustername
    Specifies a new name for the cluster. Use this option with the rename subcommand to change the name of the cluster.

-g graceperiod
--graceperiod=graceperiod
--graceperiod graceperiod
    Changes the length of time before the cluster is shut down from the default setting of 60 seconds. You specify graceperiod in seconds.

-i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}
    Uses the configuration information in the clconfigfile file. See the clconfiguration(5CL) man page. To provide configuration information through the standard input (stdin), specify a dash (-) with this option.
    If you specify other options, they take precedence over the options and information in the cluster configuration file.

-m message
--message=message
--message message
    Specifies a message string that you want to display with the warning that is displayed when you issue the shutdown subcommand. The standard warning message is "system will be shut down in ...".
    If message contains more than one word, delimit it with single (') quotation marks or double (") quotation marks. The shutdown command issues messages at 7200, 3600, 1800, 1200, 600, 300, 120, 60, and 30 seconds before a shutdown begins.

-o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}
    Writes cluster configuration information to a file or to the standard output (stdout). The format of the configuration information is described in the clconfiguration(5CL) man page.
    If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for the command is suppressed.
    You can use this option only with the export subcommand.

-p name=value
--property=name=value
--property name=value
    Modifies cluster-wide properties. Multiple instances of -p name=value are allowed.
    Use this option with the set and the set-netprops subcommands to modify the following properties:

    installmode
        Specify the installation-mode setting for the cluster. You can specify either enabled or disabled for the installmode property.
        While the installmode property is enabled, nodes do not attempt to reset their quorum configurations at boot time. Also, while in this mode, many administrative functions are blocked. When you first install a cluster, the installmode property is enabled.
        After all nodes have joined the cluster for the first time, and shared quorum devices have been added to the configuration, you must explicitly disable the installmode property. When you disable the installmode property, the quorum vote counts are set to default values. If quorum is automatically configured during cluster creation, the installmode property is disabled as well after quorum has been configured.

    heartbeat_quantum
        Define how often to send heartbeats, in milliseconds. Sun Cluster software uses a 1-second (1,000-millisecond) heartbeat quantum by default. Specify a value between 100 milliseconds and 10,000 milliseconds.

    heartbeat_timeout
        Define the time interval, in milliseconds, after which, if no heartbeats are received from the peer nodes, the corresponding path is declared down. Sun Cluster software uses a 10-second (10,000-millisecond) heartbeat timeout by default. Specify a value between 2,500 milliseconds and 60,000 milliseconds.
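These two properties are linked: as the discussion of the set subcommand notes, heartbeat_timeout must be at least five times heartbeat_quantum. A minimal shell sketch of that sanity check, using hypothetical candidate values, before handing them to cluster set:

```shell
#!/bin/sh
# Hypothetical candidate values, in milliseconds; adjust for your cluster.
quantum=2000
timeout=12000

# Enforce the documented ranges and the 5x relationship before calling
# "cluster set"; the command itself also rejects invalid combinations.
if [ "$quantum" -ge 100 ] && [ "$quantum" -le 10000 ] &&
   [ "$timeout" -ge 2500 ] && [ "$timeout" -le 60000 ] &&
   [ "$timeout" -ge $((5 * quantum)) ]; then
    echo "ok: heartbeat_quantum=$quantum heartbeat_timeout=$timeout"
    # cluster set -p heartbeat_quantum=$quantum -p heartbeat_timeout=$timeout
else
    echo "invalid heartbeat settings" >&2
    exit 1
fi
```

This is only a pre-check sketch; the authoritative validation is performed by the cluster command itself.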
    The set subcommand allows you to modify the global heartbeat parameters of a cluster, across all the adapters. Sun Cluster software relies on heartbeats over the private interconnect to detect communication failures among cluster nodes. If you reduce the heartbeat timeout, Sun Cluster software can detect failures more quickly. The time that is required to detect failures decreases when you decrease the values of heartbeat timeout. Thus, Sun Cluster software recovers more quickly from failures. Faster recovery increases the availability of your cluster.
    Even under ideal conditions, when you reduce the values of heartbeat parameters by using the set subcommand, there is always a risk that spurious path timeouts and node panics might occur. Always test and thoroughly qualify the lower values of heartbeat parameters under relevant workload conditions before actually implementing them in your cluster.
    The value that you specify for heartbeat_timeout must always be greater than or equal to five times the value that you specify for heartbeat_quantum (heartbeat_timeout >= (5 * heartbeat_quantum)).

    global_fencing
        Specify the global default fencing algorithm for all shared devices. Acceptable values for this property are prefer3 and pathcount.
        The pathcount setting determines the fencing protocol by the number of DID paths that are attached to the shared device. For devices that use three or more DID paths, this property is set to the SCSI-3 protocol. By default, this property is set to pathcount.

    Private network properties
        You modify private network properties with the set-netprops subcommand only. You must modify these private network settings only if the default private network address collides with an address that is already in use. You must also modify these private network settings if the existing address range is not sufficient to accommodate the growing cluster configuration.
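Because set-netprops rejects a private_netmask that is numerically smaller than the default 255.255.248.0, a quick comparison before rebooting into noncluster mode can save a trip. A sketch, with a hypothetical candidate netmask (the commented-out cluster invocation uses only options documented in this section):

```shell
#!/bin/sh
# Convert a dotted-quad netmask to an integer so masks can be compared.
mask_to_int() {
    oldifs=$IFS
    IFS=.
    set -- $1
    IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

default_mask=255.255.248.0
candidate_mask=255.255.255.128   # hypothetical value for illustration

if [ "$(mask_to_int "$candidate_mask")" -ge "$(mask_to_int "$default_mask")" ]; then
    echo "ok: $candidate_mask is not smaller than the default netmask"
    # cluster set-netprops -p private_netaddr=172.16.0.0 \
    #     -p private_netmask=$candidate_mask
else
    echo "refused: $candidate_mask is smaller than $default_mask" >&2
    exit 1
fi
```

This sketch covers only the default-netmask rule described below; when max_nodes and max_privatenets are also given, the command itself computes and checks the minimum required netmask.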
        All nodes of the cluster are expected to be available and in noncluster mode when you modify network properties. You modify the private network settings on only one node of the cluster, as the settings are propagated to all nodes.
        When you set the private_netaddr property, you can also set the private_netmask property, or the max_nodes and max_privatenets properties, or all of these properties. If you attempt to set the private_netmask property and either the max_nodes or the max_privatenets property, an error occurs. You must always set the max_nodes and max_privatenets properties together.
        The default private network address is 172.16.0.0, with a default netmask of 255.255.248.0.
        If you fail to set a property due to an inconsistent cluster configuration, run the cluster restore-netprops command on each node in noncluster mode.
        Private network properties are as follows:

        private_netaddr
            Specify the private network address.

        private_netmask
            Specify the cluster private network mask. The value that you specify in this case must be equal to or greater than the default netmask, 255.255.248.0. You can set this property only in conjunction with the private_netaddr property.
            If you want to assign a smaller IP address range than the default, you can use the max_nodes and max_privatenets properties instead of, or in addition to, the private_netmask property.

        max_nodes
            Specify the maximum number of nodes that you expect to be a part of the cluster. Include in this number the expected number of non-global zones that will use the private network.
            You can set this property only in conjunction with the private_netaddr and max_privatenets properties, and optionally with the private_netmask property. The maximum value for max_nodes is 64. The minimum value is 2.

        max_privatenets
            Specify the maximum number of private networks that you expect to be used in the cluster.
            You can set this property only in conjunction with the private_netaddr and max_nodes properties, and optionally with the private_netmask property. The maximum value for max_privatenets is 128. The minimum value is 2.

        The command performs the following tasks for each combination of private network properties:

        -p private_netaddr=netaddr
            The command assigns the default netmask, 255.255.248.0, to the private interconnect. The default IP address range accommodates a maximum of 64 nodes and 10 private networks.

        -p private_netaddr=netaddr,private_netmask=netmask
            If the specified netmask is less than the default netmask, the command fails and exits with an error. If the specified netmask is equal to or greater than the default netmask, the command assigns the specified netmask to the private interconnect. The resulting IP address range accommodates a maximum of 64 nodes and 10 private networks.
            To assign a smaller IP address range than the default, specify the max_nodes and max_privatenets properties instead of, or in addition to, the private_netmask property.

        -p private_netaddr=netaddr,max_nodes=nodes,max_privatenets=privatenets
            The command calculates the minimum netmask to support the specified number of nodes and private networks. The command then assigns the calculated netmask to the private interconnect.

        -p private_netaddr=netaddr,private_netmask=netmask,max_nodes=nodes,max_privatenets=privatenets
            The command calculates the minimum netmask that supports the specified number of nodes and private networks. The command compares that calculation to the specified netmask. If the specified netmask is less than the calculated netmask, the command fails and exits with an error. If the specified netmask is equal to or greater than the calculated netmask, the command assigns the specified netmask to the private interconnect.

-t objecttype[,...]
--type=objecttype[,...]
--type objecttype[,...]
    Specifies object types for the export, show, and status subcommands.
    Use this option to limit the output of the export, show, and status subcommands to objects of the specified type only. The following object or component types are supported. Note that the status is not available for some of the object types.

    +--------------------+--------------------+--------------------+
    |    Object Type     | Short Object Type  |  Available Status  |
    +--------------------+--------------------+--------------------+
    |access              |access              | No                 |
    +--------------------+--------------------+--------------------+
    |device              |dev                 | Yes                |
    +--------------------+--------------------+--------------------+
    |devicegroup         |dg                  | Yes                |
    +--------------------+--------------------+--------------------+
    |global              |global              | No                 |
    +--------------------+--------------------+--------------------+
    |interconnect        |intr                | Yes                |
    +--------------------+--------------------+--------------------+
    |nasdevice           |nas                 | No                 |
    +--------------------+--------------------+--------------------+
    |node                |node                | Yes                |
    +--------------------+--------------------+--------------------+
    |quorum              |quorum              | Yes                |
    +--------------------+--------------------+--------------------+
    |reslogicalhostname  |rslh                | Yes                |
    +--------------------+--------------------+--------------------+
    |resource            |rs                  | Yes                |
    +--------------------+--------------------+--------------------+
    |resourcegroup       |rg                  | Yes                |
    +--------------------+--------------------+--------------------+
    |resourcetype        |rt                  | No                 |
    +--------------------+--------------------+--------------------+
    |ressharedaddress    |rssa                | Yes                |
    +--------------------+--------------------+--------------------+
    |snmphost            |snmphost            | No                 |
    +--------------------+--------------------+--------------------+
    |snmpmib             |snmpmib             | No                 |
    +--------------------+--------------------+--------------------+
    |snmpuser            |snmpuser            | No                 |
    +--------------------+--------------------+--------------------+
    |telemetryattribute  |ta                  | No                 |
    +--------------------+--------------------+--------------------+

-v
--verbose
    Displays verbose information on the standard output (stdout).

-V
--version
    Displays the version of the command.
    If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

-y
--yes
    Prevents the prompt that asks you to confirm a shutdown from being issued. The cluster is shut down immediately, without user intervention.

OPERANDS
The following operands are supported:

clustername
    The name of the cluster that you want to manage.
    For all subcommands except create, the clustername that you specify must match the name of the cluster on which you issue the cluster command. You specify a new and unique cluster name by using the create subcommand.

EXIT STATUS
The complete set of exit status codes for all commands in this command set is listed in the Intro(1CL) man page. Returned exit codes are also compatible with the return codes that are described in the scha_calls(3HA) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR
    No error. The command that you issued completed successfully.

1 CL_ENOMEM
    Not enough swap space. A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL
    Invalid argument. You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the -i option was incorrect.

6 CL_EACCESS
    Permission denied. The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.

35 CL_EIO
    I/O error. A physical input/output error has occurred.

36 CL_ENOENT
    No such object. The object that you specified cannot be found for one of the following reasons:

    o The object does not exist.
    o A directory in the path to the configuration file that you attempted to create with the -o option does not exist.
    o The configuration file that you attempted to access with the -i option contains errors.

EXAMPLES
Example 1 Displaying Cluster Configuration Information

The following command displays all available configuration information for the cluster.

# cluster show

  Enabled:                  yes
  privatehostname:          clusternode1-priv
  reboot_on_path_failure:   disabled
  globalzoneshares:         1
  defaultpsetmin:           1
  quorum_vote:              1
  quorum_defaultvote:       1
  quorum_resv_key:          0x441699B200000001
  Transport Adapter List:   hme1, qfe3
  Node Zones:               phys-schost-1:za

  --- Transport Adapters for phys-schost-1 ---

  Transport Adapter:                          hme1
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property(device_name):              hme
  Adapter Property(device_instance):          1
  Adapter Property(lazy_free):                0
  Adapter Property(dlpi_heartbeat_timeout):   10000
  Adapter Property(dlpi_heartbeat_quantum):   1000
  Adapter Property(nw_bandwidth):             80
  Adapter Property(bandwidth):                10
  Adapter Property(ip_address):               172.16.0.129
  Adapter Property(netmask):                  255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State(0):                      Enabled

  Transport Adapter:                          qfe3
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property(device_name):              qfe
  Adapter Property(device_instance):          3
  Adapter Property(lazy_free):                1
  Adapter Property(dlpi_heartbeat_timeout):   10000
  Adapter Property(dlpi_heartbeat_quantum):   1000
  Adapter Property(nw_bandwidth):             80
  Adapter Property(bandwidth):                10
  Adapter Property(ip_address):               172.16.1.1
  Adapter Property(netmask):                  255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State(0):                      Enabled

  --- SNMP MIB Configuration on phys-schost-1 ---

  SNMP MIB Name: Event State: SNMPv

  --- SNMP Host Configuration on phys-schost-1 ---

  --- SNMP User Configuration on phys-schost-1 ---

  Node Name:                phys-schost-2
  Node ID:                  2
  Type:                     cluster
  Enabled:                  yes
  privatehostname:          clusternode2-priv
  reboot_on_path_failure:   disabled
  globalzoneshares:         1
  defaultpsetmin:           1
  quorum_vote:              1
  quorum_defaultvote:       1
  quorum_resv_key:          0x441699B200000002
  Transport Adapter List:   hme1, qfe3
  Node Zones:               phys-schost-2:za

  --- Transport Adapters for phys-schost-2 ---

  Transport Adapter:                          hme1
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property(device_name):              hme
  Adapter Property(device_instance):          1
  Adapter Property(lazy_free):                0
  Adapter Property(dlpi_heartbeat_timeout):   10000
  Adapter Property(dlpi_heartbeat_quantum):   1000
  Adapter Property(nw_bandwidth):             80
  Adapter Property(bandwidth):                10
  Adapter Property(ip_address):               172.16.0.130
  Adapter Property(netmask):                  255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State(0):                      Enabled

  Transport Adapter:                          qfe3
  Adapter State:                              Enabled
  Adapter Transport Type:                     dlpi
  Adapter Property(device_name):              qfe
  Adapter Property(device_instance):          3
  Adapter Property(lazy_free):                1
  Adapter Property(dlpi_heartbeat_timeout):   10000
  Adapter Property(dlpi_heartbeat_quantum):   1000
  Adapter Property(nw_bandwidth):             80
  Adapter Property(bandwidth):                10
  Adapter Property(ip_address):               172.16.1.2
  Adapter Property(netmask):                  255.255.255.128
  Adapter Port Names:                         0
  Adapter Port State(0):                      Enabled

  --- SNMP MIB Configuration on phys-schost-2 ---

  SNMP MIB Name: Event State: SNMPv

  --- SNMP Host Configuration on phys-schost-2 ---

  --- SNMP User Configuration on phys-schost-2 ---

  === Transport Cables ===

  Transport Cable:    phys-schost-1:hme1,switch1@1
  Cable Endpoint1:    phys-schost-1:hme1
  Cable Endpoint2:    switch1@1
  Cable State:        Enabled

  Transport Cable:    phys-schost-1:qfe3,switch2@1
  Cable Endpoint1:    phys-schost-1:qfe3
  Cable Endpoint2:    switch2@1
  Cable State:        Enabled

  Transport Cable:    phys-schost-2:hme1,switch1@2
  Cable Endpoint1:    phys-schost-2:hme1
  Cable Endpoint2:    switch1@2
  Cable State:        Enabled

  Transport Cable:    phys-schost-2:qfe3,switch2@2
  Cable Endpoint1:    phys-schost-2:qfe3
  Cable Endpoint2:    switch2@2
  Cable State:        Enabled

  === Transport Switches ===

  Transport Switch:       switch1
  Switch State:           Enabled
  Switch Type:            switch
  Switch Port Names:      1 2
  Switch Port State(1):   Enabled
  Switch Port State(2):   Enabled

  Transport Switch:       switch2
  Switch State:           Enabled
  Switch Type:            switch
  Switch Port Names:      1 2
  Switch Port State(1):   Enabled
  Switch Port State(2):   Enabled

  === Quorum Devices ===

  Quorum Device Name:   d3
  Enabled:              yes
  Votes:                1
  Global Name:          /dev/did/rdsk/d3s2
  Type:                 scsi
  Access Mode:          scsi2
  Hosts (enabled):      phys-schost-1, phys-schost-2

  === Device Groups ===

  Device Group Name:   db1
  Type:                SVM
  failback:            no
  Node List:           phys-schost-1, phys-schost-2
  preferenced:         yes
  numsecondaries:      1
  diskset name:        db1

  === Registered Resource Types ===

  Resource Type:     SUNW.LogicalHostname:2
  RT_description:    Logical Hostname Resource Type
  RT_version:        2
  API_version:       2
  RT_basedir:        /usr/cluster/lib/rgm/rt/hafoip
  Single_instance:   False
  Proxy:             False
  Init_nodes:        All potential masters
  Installed_nodes:   <All>
  Failover:          True
  Pkglist:           SUNWscu
  RT_system:         True

  Resource Type:     SUNW.SharedAddress:2
  RT_description:    HA Shared Address Resource Type
  RT_version:        2
  API_version:       2
  RT_basedir:        /usr/cluster/lib/rgm/rt/hascip
  Single_instance:   False
  Proxy:             False
  Init_nodes:        <Unknown>
  Installed_nodes:   <All>
  Failover:          True
  Pkglist:           SUNWscu
  RT_system:         True

  Resource Type:     SUNW.qfs
  RT_description:    SAM-QFS Agent on SunCluster
  RT_version:        3.1
  API_version:       3
  RT_basedir:        /opt/SUNWsamfs/sc/bin
  Single_instance:   False
  Proxy:             False
  Init_nodes:        All potential masters
  Installed_nodes:   <All>
  Failover:          True
  Pkglist:           <NULL>
  RT_system:         False

  === Resource Groups and Resources ===

  Resource Group:   qfs-rg
  RG_description:   <NULL>
  RG_mode:          Failover
  RG_state:         Managed
  Failback:         False
  Nodelist:         phys-schost-2 phys-schost-1

  --- Resources for Group qfs-rg ---

  Resource:                    qfs-res
  Type:                        SUNW.qfs
  Type_version:                3.1
  Group:                       qfs-rg
  R_description:
  Resource_project_name:       default
  Enabled{phys-schost-2}:      True
  Enabled{phys-schost-1}:      True
  Monitored{phys-schost-2}:    True
  Monitored{phys-schost-1}:    True

  === DID Device Instances ===

  DID Device Name:    /dev/did/rdsk/d1
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t0d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d2
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t6d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d3
  Full Device Path:   phys-schost-2:/dev/rdsk/c1t1d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c1t1d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d4
  Full Device Path:   phys-schost-2:/dev/rdsk/c1t2d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c1t2d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d5
  Full Device Path:   phys-schost-2:/dev/rdsk/c1t3d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c1t3d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d6
  Full Device Path:   phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC38D21000A3116d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC38D21000A3116d0
  Replication:        none
  default_fencing:    scsi3

  DID Device Name:    /dev/did/rdsk/d7
  Full Device Path:   phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC3746B000BB4A0d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC3746B000BB4A0d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d8
  Full Device Path:   phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC37F8600083E05d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC37F8600083E05d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d9
  Full Device Path:   phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC373F10005A987d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC373F10005A987d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d10
  Full Device Path:   phys-schost-2:/dev/rdsk/c3t50020F2300004677d1
  Full Device Path:   phys-schost-1:/dev/rdsk/c3t50020F2300004677d1
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d11
  Full Device Path:   phys-schost-2:/dev/rdsk/c3t50020F2300004677d0
  Full Device Path:   phys-schost-1:/dev/rdsk/c3t50020F2300004677d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d12
  Full Device Path:   phys-schost-2:/dev/rdsk/c0t0d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d13
  Full Device Path:   phys-schost-2:/dev/rdsk/c0t1d0
  Replication:        none
  default_fencing:    global

  DID Device Name:    /dev/did/rdsk/d14
  Full Device Path:   phys-schost-2:/dev/rdsk/c0t6d0
  Replication:        none
  default_fencing:    global

  === NAS Devices ===

  === Telemetry Attributes ===

Example 2 Displaying Configuration Information About Selected Cluster Components

The following command displays information about resources, resource types, and resource groups. Information is displayed for only the cluster.

# cluster show -t resource,resourcetype,resourcegroup

  Single_instance:   False
  Proxy:             False
  Init_nodes:        <Unknown>
  Installed_nodes:   <All>
  Failover:          True
  Pkglist:           SUNWscu
  RT_system:         True

  Resource Type:     SUNW.qfs
  RT_description:    SAM-QFS Agent on SunCluster
  RT_version:        3.1
  API_version:       3
  RT_basedir:        /opt/SUNWsamfs/sc/bin
  Single_instance:   False
  Proxy:             False
  Init_nodes:        All potential masters
  Installed_nodes:   <All>
  Failover:          True
  Pkglist:           <NULL>
  RT_system:         False

  === Resource Groups and Resources ===

  Resource Group:   qfs-rg
  RG_description:   <NULL>
  RG_mode:          Failover
  RG_state:         Managed
  Failback:         False
  Nodelist:         phys-schost-2 phys-schost-1

  --- Resources for Group qfs-rg ---

  Resource:                    qfs-res
  Type:                        SUNW.qfs
  Type_version:                3.1
  Group:                       qfs-rg
  R_description:
  Resource_project_name:       default
  Enabled{phys-schost-2}:      True
  Enabled{phys-schost-1}:      True
  Monitored{phys-schost-2}:    True
  Monitored{phys-schost-1}:    True

Example 3 Displaying Cluster Status

The following command displays the status of all cluster nodes.

# cluster status -t node

  === Cluster Nodes ===

  --- Node Status ---

  Node Name         Status
  ---------         ------
  phys-schost-1     Online
  phys-schost-2     Online

Alternately, you can also display the same information by using the clnode command.
  # clnode status

  === Cluster Nodes ===

  --- Node Status ---

  Node Name                                        Status
  ---------                                        ------
  phys-schost-1                                    Online
  phys-schost-2                                    Online

Example 4 Creating a Cluster

  The following command creates a cluster named cluster-1 from the cluster configuration file suncluster.xml.

  # cluster create -i /suncluster.xml cluster-1

Example 5 Changing a Cluster Name

  The following command changes the name of the cluster to cluster-2.

  # cluster rename -c cluster-2

Example 6 Disabling a Cluster's installmode Property

  The following command disables a cluster's installmode property.

  # cluster set -p installmode=disabled

Example 7 Modifying the Private Network

  The following command modifies the private network settings of a cluster. It sets the private network address to 172.10.0.0, and it calculates and sets the minimum private netmask that supports the specified eight nodes and four private networks.

  # cluster set-netprops -p private_netaddr=172.10.0.0 -p max_nodes=8 -p max_privatenets=4

  You can also specify this command as follows:

  # cluster set-netprops -p private_netaddr=172.10.0.0,max_nodes=8,max_privatenets=4

ATTRIBUTES
  See attributes(5) for descriptions of the following attributes:

  +-----------------------------+-----------------------------+
  |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
  +-----------------------------+-----------------------------+
  | Availability                | SUNWsczu                    |
  +-----------------------------+-----------------------------+
  | Interface Stability         | Evolving                    |
  +-----------------------------+-----------------------------+

SEE ALSO
  Intro(1CL), init(1M), su(1M), scha_calls(3HA), attributes(5), rbac(5), clconfiguration(5CL)

NOTES
  The superuser can run all forms of this command. Any user can run this command with the -? (help) or -V (version) option.

  To run the cluster command with subcommands, users other than the superuser require RBAC authorizations. See the following table.

  +-----------------+-------------------------+
  | Subcommand      | RBAC Authorization      |
  +-----------------+-------------------------+
  | create          | solaris.cluster.modify  |
  +-----------------+-------------------------+
  | export          | solaris.cluster.read    |
  +-----------------+-------------------------+
  | list            | solaris.cluster.read    |
  +-----------------+-------------------------+
  | list-cmds       | solaris.cluster.read    |
  +-----------------+-------------------------+
  | rename          | solaris.cluster.modify  |
  +-----------------+-------------------------+
  | restore-netprops| solaris.cluster.modify  |
  +-----------------+-------------------------+
  | set             | solaris.cluster.modify  |
  +-----------------+-------------------------+
  | set-netprops    | solaris.cluster.modify  |
  +-----------------+-------------------------+
  | show            | solaris.cluster.read    |
  +-----------------+-------------------------+
  | show-netprops   | solaris.cluster.read    |
  +-----------------+-------------------------+
  | shutdown        | solaris.cluster.admin   |
  +-----------------+-------------------------+
  | status          | solaris.cluster.read    |
  +-----------------+-------------------------+

Sun Cluster 3.2						   15 Aug 2007					      cluster(1CL)
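A note beyond the man page text: the tabular node-status output shown in Example 3 is simple to consume from a script. The sketch below is a minimal, hedged example of extracting the names of Online nodes with awk; it operates on a sample string that mirrors the Example 3 output, since on a live cluster you would pipe `cluster status -t node` itself.

```shell
# Sample output mirroring Example 3 (stand-in for a live cluster command).
sample_output='=== Cluster Nodes ===

--- Node Status ---

Node Name        Status
---------        ------
phys-schost-1    Online
phys-schost-2    Online'

# Keep column 1 of every row whose second column is exactly "Online";
# the header and dashed-rule lines do not match and are skipped.
online_nodes=$(printf '%s\n' "$sample_output" | awk '$2 == "Online" { print $1 }')
printf '%s\n' "$online_nodes"
```

On a real cluster, replace the `printf` feeding awk with `/usr/cluster/bin/cluster status -t node`.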
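As a hedged illustration of the RBAC table above (the username opuser is hypothetical, not from the man page), a Solaris administrator could grant a non-root operator the read-only authorization with usermod(1M):

```shell
# Hypothetical example: grant a non-root user the read-only cluster
# authorization listed in the table above. Run as superuser on Solaris.
usermod -A solaris.cluster.read opuser

# The resulting entry in /etc/user_attr would resemble:
#   opuser::::auths=solaris.cluster.read

# opuser can then run the read-only subcommands, for example:
#   /usr/cluster/bin/cluster status -t node
#   /usr/cluster/bin/cluster show -t node
```

Subcommands tagged solaris.cluster.modify or solaris.cluster.admin would still be refused for this user.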
