
scinstall(1M)						  System Administration Commands					     scinstall(1M)

NAME
       scinstall - initialize Sun Cluster software and establish new cluster nodes

SYNOPSIS
       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -i [-k] [-s srvc[,...]]
            [-F [-C clustername] [-T authentication-options] [-G {special | mount-point}] [-o]
            [-A adapter-options] [-B switch-options] [-m cable-options] [-w netaddr-options]]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -i [-k] [-s srvc[,...]]
            [-N cluster-member [-C clustername] [-G {special | mount-point}] [-A adapter-options]
            [-B switch-options] [-m cable-options]]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -a install-dir [-d dvdimage-dir]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -c jumpstart-dir -h nodename
            [-d dvdimage-dir] [-s srvc[,...]] [-F [-C clustername] [-G {special | mount-point}]
            [-T authentication-options] [-A adapter-options] [-B switch-options] [-m cable-options]
            [-w netaddr-options]]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -c jumpstart-dir -h nodename
            [-d dvdimage-dir] [-s srvc[,...]] [-N cluster-member [-C clustername] [-G {special | mount-point}]
            [-A adapter-options] [-B switch-options] [-m cable-options]]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u upgrade-mode

       /usr/cluster/bin/scinstall -u upgrade-options

       /usr/cluster/bin/scinstall -r [-N cluster-member] [-G mount-point]

       scinstall -p [-v]

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun
       Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented
       command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

       The scinstall command performs a number of Sun Cluster node creation and upgrade tasks, as follows.

       o  The "initialize" form (-i) of scinstall establishes a node as a new Sun Cluster configuration member. It either establishes
          the first node in a new cluster (-F) or adds a node to an already-existing cluster (-N). Always run this form of the
          scinstall command from the node that is creating the cluster or is being added to the cluster.

       o  The "set up install server" form (-a) of scinstall creates an install-dir on any Solaris machine from which the command is
          run and then copies the Sun Cluster installation media to that directory. Typically, you would create the target directory
          on an NFS server which has also been set up as a Solaris install server (see the setup_install_server(1M) man page).

       o  The "add install client" form (-c) of scinstall establishes the specified nodename as a custom JumpStart client in the
          jumpstart-dir on the machine from which the command is run. Typically, the jumpstart-dir is located on an
          already-established Solaris install server configured to JumpStart the Solaris nodename install client (see the
          add_install_client(1M) man page).

       o  The "remove" form (-r) of scinstall removes cluster configuration information and uninstalls Sun Cluster software from a
          cluster node.

       o  The "upgrade" form (-u) of scinstall, which has multiple modes and options, upgrades a Sun Cluster node. Always run this
          form of the scinstall command from the node being upgraded.

       o  The "print release" form (-p) of scinstall prints release and package versioning information for the Sun Cluster software
          that is installed on the node from which the command is run.

       Without options, the scinstall command attempts to run in interactive mode.

       Run all forms of the scinstall command other than the "print release" form (-p) as superuser.

       The scinstall command is located in the Tools directory on the Sun Cluster installation media. If the Sun Cluster installation
       media has been copied to a local disk, media-mnt-pt is the path to the copied Sun Cluster media image. The SUNWsczu software
       package also includes a copy of the scinstall command.

       Except for the -p option, you can run this command only from the global zone.

OPTIONS
   Basic Options
       The following options direct the basic form and function of the command.

       None of the following options can be combined on the same command line.

       -a
           Specifies the "set up install server" form of the scinstall command.

           This option is used to create an install-dir on any Solaris machine from which the command is run and then make a copy of
           the Sun Cluster media in that directory.

           You can use this option only in the global zone.

           If the install-dir already exists, the scinstall command returns an error message.

           Typically, the target directory is created on an NFS server which has also been set up as a Solaris install server (see
           the setup_install_server(1M) man page).

       -c
           Specifies the "add install client" form of the scinstall command.

           This option establishes the specified nodename as a custom JumpStart client in the jumpstart-dir on the machine from which
           you issued the command.

           You can use this option only in the global zone.

           Typically, the jumpstart-dir is located on an already-established Solaris install server that is configured to JumpStart
           the nodename install client (see the add_install_client(1M) man page).

           This form of the command enables fully-automated cluster installation from a JumpStart server by helping to establish each
           cluster node, or nodename, as a custom JumpStart client on an already-established Solaris JumpStart server. The command
           makes all necessary updates to the rules file in the specified jumpstart-dir. In addition, special JumpStart class files
           and finish scripts that support cluster initialization are added to the jumpstart-dir, if they are not already installed.
           Configuration data that is used by the Sun Cluster-supplied finish script is established for each node that you set up by
           using this method.

           Users can customize the Solaris class file that the -c option to the scinstall command installs by editing the file
           directly in the normal way. However, it is always important to ensure that the Solaris class file defines an acceptable
           Solaris installation for a Sun Cluster node. Otherwise, the installation might need to be restarted.

           Both the class file and finish script that are installed by this form of the command are located in the following
           directory:

               jumpstart-dir/autoscinstall.d/3.1

           The class file is installed as autoscinstall.class, and the finish script is installed as autoscinstall.finish.

           For each cluster nodename that you set up with the -c option as an automated Sun Cluster JumpStart install client, this
           form of the command sets up a configuration directory as the following:

               jumpstart-dir/autoscinstall.d/nodes/nodename

           Options for specifying Sun Cluster node installation and initialization are saved in files that are located in these
           directories. Never edit these files directly.

           You can customize the JumpStart configuration in the following ways:

           o  You can add a user-written finish script as the following file name:

                  jumpstart-dir/autoscinstall.d/nodes/nodename/finish

              The scinstall command runs the user-written finish script after it runs the finish script supplied with the product.

           o  If the directory jumpstart-dir/autoscinstall.d/nodes/nodename/archive exists, the scinstall command copies all files in
              that directory to the new installation. In addition, if an etc/inet/hosts file exists in that directory, scinstall uses
              the hosts information found in that file to supply name-to-address mappings when a name service (NIS/NIS+/DNS) is not
              used.

           o  If the directory jumpstart-dir/autoscinstall.d/nodes/nodename/patches exists, the scinstall command installs all files
              in that directory by using the patchadd(1M) command. This directory is intended for Solaris software patches and any
              other patches that must be installed before Sun Cluster software is installed.

           You can create these files and directories individually or as links to other files or directories that exist under
           jumpstart-dir.

           See the add_install_client(1M) man page and related JumpStart documentation for more information about how to set up
           custom JumpStart install clients.

           Run this form of the command from the install-dir (see the -a form of scinstall) on the JumpStart server that you use to
           initialize the cluster nodes.

           Before you use the scinstall command to set up a node as a custom Sun Cluster JumpStart client, you must first establish
           each node as a Solaris install client. The JumpStart directory that you specify with the -c option to the
           add_install_client command should be the same directory that you specify with the -c option to the scinstall command.
           However, the scinstall jumpstart-dir does not have a server component to it, since you must run the scinstall command from
           a Solaris JumpStart server.

           To remove a node as a custom Sun Cluster JumpStart client, simply remove it from the rules file.

       -i
           Specifies the "initialize" form of the scinstall command. This form of the command establishes a node as a new cluster
           member. The new node is the node from which you issue the scinstall command.

           You can use this option only in the global zone.

           If the -F option is used with -i, scinstall establishes the node as the first node in a new cluster.

           If the -o option is used with the -F option, scinstall establishes a single-node cluster.

           If the -N option is used with -i, scinstall adds the node to an already-existing cluster.

           If the -s option is used and the node is an already-established cluster member, only the specified srvc (data service) is
           installed.

       -p
           Prints release and package versioning information for the Sun Cluster software that is installed on the node from which
           the command is run. This is the only form of scinstall that you can run as a non-superuser.

           You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the
           command in the global zone.

       -r
           Removes cluster configuration information and uninstalls Sun Cluster framework and data-service software from a cluster
           node. You can then reinstall the node or remove the node from the cluster. You must run the command on the node that you
           uninstall, from a directory that is not used by the cluster software. The node must be in noncluster mode.

           You can use this option only in the global zone.

       -u upgrade-mode
           Upgrades Sun Cluster software on the node from which you invoke the scinstall command. The upgrade form of scinstall has
           multiple modes of operation, as specified by upgrade-mode. See Upgrade Options below for information specific to the type
           of upgrade that you intend to perform.

           You can use this option only in the global zone.
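
       As a quick illustration of the only form that does not require superuser privileges, the installed release and package
       versions can be checked from an ordinary user account. This is an illustrative invocation only; the output format depends on
       the packages installed on the node:

           % /usr/cluster/bin/scinstall -p -v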

   Additional Options
       You can combine additional options with the basic options to modify the default behavior of each form of the command. Refer
       to the SYNOPSIS section for additional details about which of these options are legal with which forms of the scinstall
       command.

       The following additional options are supported:

       -d dvdimage-dir
           Specifies an alternate directory location for finding the media images of the Sun Cluster product and unbundled Sun
           Cluster data services.

           If the -d option is not specified, the default directory is the media image from which the current instance of the
           scinstall command is started.

       -h nodename
           Specifies the node name. The -h option is only legal with the "add install client" (-c) form of the command.

           The nodename is the name of the cluster node (that is, JumpStart install client) to set up for custom JumpStart
           installation.

       -k
           Specifies that scinstall will not install Sun Cluster software packages. The -k option is only legal with the "initialize"
           (-i) form of the command.

           In Sun Cluster 3.0 and 3.1 software, if this option is not specified, the default behavior is to install any Sun Cluster
           packages that are not already installed. As of the Sun Cluster 3.2 release, this option is unnecessary. It is provided
           only for backwards compatibility with user scripts that use this option.

       -s srvc[,...]
           Specifies a data service. The -s option is only legal with the "initialize" (-i), "upgrade" (-u), or "add install client"
           (-c) forms of the command to install or upgrade the specified srvc (data service package).

           If a data service package cannot be located, a warning message is printed, but installation otherwise continues to
           completion.

       -v
           Prints release information in verbose mode. The -v option is only legal with the "print release" (-p) form of the command
           to specify verbose mode.

           In the verbose mode of "print release," the version string for each installed Sun Cluster software package is also
           printed.

       -F [config-options]
           Establishes the first node in the cluster. The -F option is only legal with the "initialize" (-i), "upgrade" (-u), or
           "add install client" (-c) forms of the command.

           The establishment of secondary nodes will be blocked until the first node is fully instantiated as a cluster member and
           is prepared to perform all necessary tasks that are associated with adding new cluster nodes. If the -F option is used
           with the -o option, a single-node cluster is created and no additional nodes can be added during the cluster-creation
           process.

       -N cluster-member [config-options]
           Specifies the cluster member. The -N option is only legal with the "initialize" (-i), "add install client" (-c), "remove"
           (-r), or "upgrade" (-u) forms of the command.

           o  When used with the -i, -c, or -u option, the -N option is used to add additional nodes to an existing cluster. The
              specified cluster-member is typically the name of the first cluster node that is established for the cluster. However,
              the cluster-member can be the name of any cluster node that already participates as a cluster member. The node that is
              being initialized is added to the cluster of which cluster-member is already an active member. The process of adding a
              new node to an existing cluster involves updating the configuration data on the specified cluster-member, as well as
              creating a copy of the configuration database onto the local file system of the new node.

           o  When used with the -r option, the -N option specifies the cluster-member, which can be any other node in the cluster
              that is an active cluster member. The scinstall command contacts the specified cluster-member to make updates to the
              cluster configuration. If the -N option is not specified, scinstall makes a best attempt to find an existing node to
              contact.
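
       As a sketch of how the basic and additional options combine, the following hypothetical invocation adds the local node to an
       existing cluster whose sponsoring member is node1 and also installs a data service, assuming the data service is named nfs on
       the media as in the upgrade-name table later in this page. The node name and media path are placeholders:

           # cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -i -s nfs -N node1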

   Configuration Options
       The config-options that can be used with the -F option or the -N cluster-member option are as follows.

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall {-i | -c jumpstart-dir -h nodename}
            [-F [-C clustername] [-G {special | mount-point}] [-T authentication-options] [-A adapter-options]
            [-B switch-options] [-m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]] [-o]
            [-w netaddr-options]]

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall {-i | -c jumpstart-dir -h nodename}
            [-N cluster-member [-C clustername] [-G {special | mount-point}] [-A adapter-options]
            [-B switch-options] [-m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]]]

       -m cable-options
           Specifies the cluster interconnect connections. This option is only legal when the -F or -N option is also specified.

           The -m option helps to establish the cluster interconnect topology by configuring the cables connecting the various ports
           found on the cluster transport adapters and switches. Each new cable configured with this form of the command establishes
           a connection from a cluster transport adapter on the current node to either a port on a cluster transport switch or an
           adapter on another node already in the cluster.

           If you specify no -m options, the scinstall command attempts to configure a default cable. However, if you configure more
           than one transport adapter or switch with a given instance of scinstall, it is not possible for scinstall to construct a
           default. The default is to configure a cable from the singly-configured transport adapter to the singly-configured (or
           default) transport switch.

           The -m cable-options are as follows.

               -m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]

           The syntax for the -m option demonstrates that at least one of the two endpoints must be an adapter on the node that is
           being configured. For that endpoint, it is not required to specify this-node explicitly. The following is an example of
           adding a cable:

               -m endpoint=:hme1,endpoint=switch1

           In this example, port 0 of the hme1 transport adapter on this node, the node that scinstall is configuring, is cabled to
           a port on transport switch switch1. The port number that is used on switch1 defaults to the node ID number of this node.

           You must always specify two endpoint options with each occurrence of the -m option. The name component of the option
           argument specifies the name of either a cluster transport adapter or a cluster transport switch at one of the endpoints
           of a cable.

           o  If you specify the node component, the name is the name of a transport adapter.

           o  If you do not specify the node component, the name is the name of a transport switch.

           If you specify no port component, the scinstall command attempts to assume a default port name. The default port for an
           adapter is always 0. The default port name for a switch endpoint is equal to the node ID of the node being added to the
           cluster.

           Refer to the individual cluster transport adapter and cluster transport switch man pages for more information regarding
           port assignments and other requirements. The man pages for cluster transport adapters use the naming convention
           scconf_transp_adap_adapter(1M). The man pages for cluster transport switches use the naming convention
           scconf_transp_jct_switch(1M).

           Before you can configure a cable, you must first configure the adapters and/or switches at each of the two endpoints of
           the cable (see -A and -B).
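
           When two transport adapters and two switches are configured on a node, no default cable can be constructed, so each cable
           must be given explicitly. The following fragment sketches options that might be appended to a scinstall -i -F command
           line; the adapter and switch names (qfe1, qfe2, switch1, switch2) are placeholders, not defaults defined by this page:

               -A qfe1 -A qfe2 -B switch1 -B switch2 \
               -m endpoint=:qfe1,endpoint=switch1 \
               -m endpoint=:qfe2,endpoint=switch2

           Each -m option cables one local adapter to a switch port, and the switch port defaults to this node's node ID.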

       -o
           Specifies the configuration of a single-node cluster. This option is only legal when the -i and -F options are also
           specified.

           Other -F options are supported but are not required. If the cluster name is not specified, the name of the node is used
           as the cluster name. You can specify transport configuration options, which will be stored in the CCR. The -G option is
           only required if the global-devices file system is not the default, /globaldevices.

           Once a single-node cluster is established, it is not necessary to configure a quorum device or to disable installmode.

       -w netaddr-options
           Specifies the network address for the private interconnect, or cluster transport. This option is only legal when the -F
           option is also specified.

           Use this option to specify a private-network address for use on the private interconnect. You can use this option when
           the default private-network address collides with an address that is already in use within the enterprise. You can also
           use this option to customize the size of the IP address range that is reserved for use by the private interconnect. For
           more information, see the networks(4) and netmasks(4) man pages.

           If not specified, the default network address for the private interconnect is 172.16.0.0. The default netmask is
           255.255.248.0. This IP address range supports up to 64 nodes and 10 private networks.

           The -w netaddr-options are as follows:

               -w netaddr=netaddr[,netmask=netmask]
               -w netaddr=netaddr[,maxnodes=nodes,maxprivatenets=maxprivnets]
               -w netaddr=netaddr[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets]

           netaddr=netaddr
               Specifies the private network address. The last two octets of this address must always be zero.

           [netmask=netmask]
               Specifies the netmask. The specified value must provide an IP address range that is greater than or equal to the
               default.

               To assign a smaller IP address range than the default, specify the maxnodes and maxprivatenets operands.

           [,maxnodes=nodes,maxprivatenets=maxprivnets]
               Specifies the maximum number of nodes and private networks that the cluster is ever expected to have. The command
               uses these values to calculate the minimum netmask that the private interconnect requires to support the specified
               number of nodes and private networks. The maximum value for nodes is 64 and the minimum value is 2. The maximum value
               for maxprivnets is 128 and the minimum value is 2.

           [,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets]
               Specifies the netmask and the maximum number of nodes and private networks that the cluster is ever expected to have.
               You must specify a netmask that can sufficiently accommodate the specified number of nodes and privnets. The maximum
               value for nodes is 64 and the minimum value is 2. The maximum value for privnets is 128 and the minimum value is 2.

           If you specify only the netaddr suboption, the command assigns the default netmask of 255.255.248.0. The resulting IP
           address range accommodates up to 64 nodes and 10 private networks.

           To change the private-network address or netmask after the cluster is established, use the cluster command or the clsetup
           utility.
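
           As a hedged illustration, a site that cannot use the 172.16.0.0 default might establish the first node with a different
           private-network address and a deliberately small address range; the address below is only a placeholder, chosen so that
           its last two octets are zero:

               # ./scinstall -i -F -w netaddr=192.168.0.0,maxnodes=4,maxprivatenets=2

           Because maxnodes and maxprivatenets are given instead of an explicit netmask, scinstall calculates the minimum netmask
           that supports four nodes and two private networks.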

       -A adapter-options
           Specifies the transport adapter and, optionally, its transport type. This option is only legal when the -F or -N option
           is also specified.

           Each occurrence of the -A option configures a cluster transport adapter that is attached to the node from which you run
           the scinstall command.

           If no -A options are specified, an attempt is made to use a default adapter and transport type. The default transport
           type is dlpi. In Sun Cluster 3.2 for SPARC, the default adapter is hme1.

           When the adapter transport type is dlpi, you do not need to specify the trtype suboption. In this case, you can use
           either of the following two forms to specify the -A adapter-options:

               -A [trtype=type,]name=adaptername[,vlanid=vlanid][,other-options]
               -A adaptername

           [trtype=type]
               Specifies the transport type of the adapter. Use the trtype option with each occurrence of the -A option for which
               you want to specify the transport type of the adapter. An example of a transport type is dlpi (see the
               sctransp_dlpi(7P) man page).

               The default transport type is dlpi.

           name=adaptername
               Specifies the adapter name. You must use the name suboption with each occurrence of the -A option to specify the
               adaptername. An adaptername is constructed from a device name that is immediately followed by a physical-unit number,
               for example, hme0.

               If you specify no other suboptions with the -A option, you can specify the adaptername as a standalone argument to
               the -A option, as -A adaptername.

           vlanid=vlanid
               Specifies the VLAN ID of the tagged-VLAN adapter.

           [other-options]
               Specifies additional adapter options. When a particular adapter provides any other options, you can specify them by
               using the -A option. Refer to the individual Sun Cluster man page for the cluster transport adapter for information
               about any special options that you might use with the adapter.

       -B switch-options
           Specifies the transport switch, also called transport junction. This option is only legal when the -F or -N option is
           also specified.

           Each occurrence of the -B option configures a cluster transport switch. Examples of such devices can include, but are not
           limited to, Ethernet switches, other switches of various types, and rings.

           If you specify no -B options, scinstall attempts to add a default switch at the time that the first node is instantiated
           as a cluster node. When you add additional nodes to the cluster, no additional switches are added by default. However,
           you can add them explicitly. The default switch is named switch1, and it is of type switch.

           When the switch type is type switch, you do not need to specify the type suboption. In this case, you can use either of
           the following two forms to specify the -B switch-options.

               -B [type=type,]name=name[,other-options]
               -B name

           If a cluster transport switch is already configured for the specified switch name, scinstall prints a message and ignores
           the -B option.

           If you use directly-cabled transport adapters, you are not required to configure any transport switches. To avoid
           configuring default transport switches, use the following special -B option:

               -B type=direct

           [type=type]
               Specifies the transport switch type. You can use the type option with each occurrence of the -B option. Ethernet
               switches are an example of a cluster transport switch which is of the switch type switch. See the individual Sun
               Cluster man page for the cluster transport switch for more information.

               You can specify the type suboption as direct to suppress the configuration of any default switches. Switches do not
               exist in a transport configuration that consists of only directly connected transport adapters. When the type
               suboption is set to direct, you do not need to use the name suboption.

           name=name
               Specifies the transport switch name. Unless the type is direct, you must use the name suboption with each occurrence
               of the -B option to specify the transport switch name. The name can be up to 256 characters in length and is made up
               of either letters or digits, with the first character being a letter.

               Each transport switch name must be unique across the namespace of the cluster. If no other suboptions are needed with
               -B, you can give the switch name as a standalone argument to -B (that is, -B name).

           [other-options]
               Specifies additional transport switch options. When a particular switch type provides other options, you can specify
               them with the -B option. Refer to the individual Sun Cluster man page for the cluster transport switch for
               information about any special options that you might use with the switches.
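
           The following two fragments sketch how -A and -B might combine in practice; the adapter names ce0 and ce1 and the VLAN ID
           are placeholders, not values defined by this page. The first configures a tagged-VLAN adapter behind a named switch; the
           second suppresses switches entirely for a directly cabled (back-to-back) two-node interconnect:

               -A trtype=dlpi,name=ce0,vlanid=2 -B switch1

               -A ce0 -A ce1 -B type=direct

           With -B type=direct, no default switches are configured, so any -m cable given would name adapters at both endpoints
           rather than a switch.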

       -C clustername
           Specifies the name of the cluster. This option is only legal when the -F or -N option is also specified.

           o  If the node that you configure is the first node in a new cluster, the default clustername is the same as the name of
              the node that you are configuring.

           o  If the node that you configure is being added to an already-existing cluster, the default clustername is the name of
              the cluster to which cluster-member already belongs.

           It is an error to specify a clustername that is not the name of the cluster to which cluster-member belongs.

       -G {special | mount-point}
           Specifies a raw special disk device or a file system for the global-devices mount point. This option is only legal when
           the -F, -N, or -r option is also specified.

           o  When used with the -F or -N option, the -G option specifies the raw special disk device or the file system mount-point
              to use in place of the /globaldevices mount point. Each cluster node must have a local file system that is mounted
              globally on /global/.devices/node@nodeID before the node can successfully participate as a cluster member. However,
              since the node ID is not known until the scinstall command is run, scinstall attempts to add the necessary entry to
              the vfstab(4) file when it does not find a /global/.devices/node@nodeID mount.

              By default, the scinstall command looks for an empty file system that is mounted on /globaldevices. If such a file
              system is provided, the scinstall command makes the necessary changes to the vfstab file. These changes create a new
              /global/.devices/node@nodeID mount point and remove the default /globaldevices mount point. However, if
              /global/.devices/node@nodeID is not mounted and an empty /globaldevices file system is not provided, the -G option
              must be used to specify the raw special disk device or the file system mount-point to use in place of /globaldevices.

              If a raw special disk device name is specified and /global/.devices/node@nodeID is not mounted, a file system is
              created on the device by using the newfs command. It is an error to supply the name of a device with an
              already-mounted file system.

              As a guideline, this file system should be at least 512 Mbytes in size. If this partition or file system is not
              available, or is not large enough, it might be necessary to reinstall the Solaris operating environment.

           o  When used with the -r option, the -G mount-point option specifies the new mount-point name to use to restore the
              former /global/.devices mount point. If the -G option is not specified, the mount point is renamed /globaldevices by
              default.
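
           For instance, on a node where no /globaldevices file system was ever created, the first node of a cluster might be
           established by pointing -G at an unused disk slice so that scinstall can run newfs on it and add the
           /global/.devices/node@nodeID entry to vfstab. The device name below is purely a placeholder:

               # ./scinstall -i -F -G /dev/rdsk/c0t0d0s7

           The slice must not already contain a mounted file system, and as noted above it should be at least 512 Mbytes in size.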

       -T authentication-options
           Specifies node-authentication options for the cluster. This option is only legal when the -F option is also specified.

           Use this option to establish authentication policies for nodes that attempt to add themselves to the cluster
           configuration. Specifically, when a machine requests that it be added to the cluster as a cluster node, a check is made
           to determine whether or not the node has permission to join. If the joining node has permission, it is authenticated and
           allowed to join the cluster.

           You can only use the -T option with the scinstall command when you set up the very first node in the cluster. If the
           authentication list or policy needs to be changed on an already-established cluster, use the scconf command.

           The default is to allow any machine to add itself to the cluster.

           The -T authentication-options are as follows.

               -T node=nodename[,...][,authtype=authtype]

           node=nodename[,...]
               Specifies node names to add to the node authentication list. You must specify at least one node suboption to the -T
               option. This option is used to add node names to the list of nodes that are able to configure themselves as nodes in
               the cluster.

               If the authentication list is empty, any node can request that it be added to the cluster configuration. However, if
               the list has at least one name in it, all such requests are authenticated by using the authentication list. You can
               modify or clear this list of nodes at any time by using the scconf command or the clsetup utility from one of the
               active cluster nodes.

           [authtype=authtype]
               Specifies the type of node authentication. The only currently supported authtypes are des and sys (or unix). If no
               authtype is specified, sys is the default.

               If you specify des (Diffie-Hellman) authentication, first add entries to the publickey(4) database for each cluster
               node to be added before you run the -T option to the scinstall command.

               You can change the authentication type at any time by using the scconf command or the clsetup utility from one of the
               active cluster nodes.
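
           As a sketch, the first node of a planned two-node cluster could restrict which machines may add themselves while keeping
           the default sys authentication; the cluster name and host names below are placeholders:

               # ./scinstall -i -F -C webcluster -T node=phys-host-1,node=phys-host-2

           Only phys-host-1 and phys-host-2 would then be able to configure themselves into the cluster; the list can be changed
           later with the scconf command or the clsetup utility from an active cluster node.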

   Upgrade Options
       The -u upgrade-modes and the upgrade-options for standard (nonrolling) upgrade, rolling upgrade, live upgrade, and
       dual-partition upgrade are as follows.

   Standard (Nonrolling), Rolling, and Live Upgrade
       Use the -u update mode to upgrade a cluster node to a later Sun Cluster software release in standard (nonrolling), rolling,
       or live upgrade mode.

       o  A standard, or nonrolling, upgrade process takes a cluster node out of production and upgrades the node to a later Sun
          Cluster software release.

       o  Live upgrade mode works in conjunction with Solaris Live Upgrade to upgrade an inactive boot environment while your
          cluster node continues to serve cluster requests. Once the upgrade is complete, you will activate the new boot environment
          and reboot the node.

       o  A rolling upgrade process takes only one cluster node out of production at a time. This process can only be used to
          upgrade Solaris and Sun Cluster software to an update release of the versions that are already installed. While you
          upgrade one node, cluster services continue on the rest of the cluster nodes. After a node is upgraded, you bring it back
          into the cluster and repeat the process on the next node to upgrade.

          After all nodes are upgraded, you must run the scversions command on one cluster node to commit the cluster to the
          upgraded version. Until this command is run, some new functionality that is introduced in the update release might not be
          available.

       The upgrade-options to -u update for standard and rolling mode are as follows.

           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u update [-s {srvc[,...] | all}]
                [-d dvdimage-dir] [-O] [-S {interact | testaddr=testipaddr@adapter[,testaddr=...]}]

       For live upgrade mode, also use the -R BE-mount-point option to specify the inactive boot environment. The upgrade-options to
       -u update for live upgrade mode are as follows.

           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u update -R BE-mount-point
                [-s {srvc[,...] | all}] [-d dvdimage-dir] [-O] [-S {interact | testaddr=testipaddr@adapter[,testaddr=...]}]

       -R BE-mount-point
           Specifies the root for an inactive boot environment. This is the mount point that is specified to the lumount command.
           This option is required if you are performing a live upgrade.

       -s {srvc[,...] | all}
           Upgrades data services.

           If the -s option is not specified, only cluster framework software is upgraded. If the -s option is specified, only the
           specified data services are upgraded.

           The -s option is not compatible with the -S test IP address option.

           The following suboptions to the -s option are specific to the update mode of upgrade:

           all
               Upgrades all data services. This suboption to -s is only legal with the update mode.

               This suboption upgrades all data services currently installed on the node, except those data services for which an
               update version does not exist in the update release.

           srvc
               Specifies the upgrade name of an individual data service.

               The value of srvc for a data service can be derived from the CLUSTER entry of the .clustertoc file for that data
               service. The .clustertoc file is located in the media-mnt-pt/components/srvc/Solaris_ver/Packages/ directory of the
               data service software. The CLUSTER entry takes the form SUNWC_DS_srvc. For example, the value of the CLUSTER entry
               for the Sun Cluster HA for NFS data service is SUNWC_DS_nfs. To upgrade only the Sun Cluster HA for NFS data service,
               you issue the command scinstall -u update -s nfs, where nfs is the upgrade name of the data service.

               The following is a list of Sun Cluster data services and the upgrade name to specify to the -s option. For a data
               service that can be upgraded with the scinstall command but that is not included in this list, see the .clustertoc
               file of the data service software to derive the upgrade name to use.

       +------------------------------------------------------------------+--------------+
       | Data Service                                                     | Upgrade Name |
       +------------------------------------------------------------------+--------------+
       | Sun Cluster HA for Agfa IMPAX                                    | pax          |
       | Sun Cluster HA for Apache                                        | apache       |
       | Sun Cluster HA for Apache Tomcat                                 | tomcat       |
       | Sun Cluster HA for BEA WebLogic Server                           | wls          |
       | Sun Cluster HA for DHCP                                          | dhcp         |
       | Sun Cluster HA for DNS                                           | dns          |
       | Sun Cluster HA for Kerberos                                      | krb5         |
       | Sun Cluster HA for MySQL                                         | mys          |
       | Sun Cluster HA for N1 Grid Engine                                | n1ge         |
       | Sun Cluster HA for N1 Grid Service Provisioning System           | n1sps        |
       | Sun Cluster HA for NFS                                           | nfs          |
       | Sun Cluster HA for Oracle                                        | oracle       |
       | Sun Cluster HA for Oracle E-Business Suite                       | ebs          |
       | Sun Cluster HA for PostgreSQL                                    | PostgreSQL   |
       | Sun Cluster HA for Samba                                         | smb          |
       | Sun Cluster HA for SAP                                           | sap          |
       | Sun Cluster HA for SAP DB                                        | sapdb        |
       | Sun Cluster HA for SAP liveCache                                 | livecache    |
       | Sun Cluster HA for SAP Web Application Server                    | sapwebas     |
       | Sun Cluster HA for Siebel                                        | siebel       |
       | Sun Cluster HA for Solaris Containers                            | container    |
       | Sun Cluster HA for Sun Java System Application Server            | s1as         |
       | Sun Cluster HA for Sun Java System Application Server EE (HADB)  | hadb         |
       | Sun Cluster HA for Sun Java System Message Queue                 | s1mq         |
       | Sun Cluster HA for Sun Java System Web Server                    | iws          |
       | Sun Cluster HA for SWIFTAlliance Access                          | saa          |
       | Sun Cluster HA for SWIFTAlliance Gateway                         | sag          |
       | Sun Cluster HA for Sybase ASE                                    | sybase       |
       | Sun Cluster HA for WebSphere MQ                                  | mqs          |
       | Sun Cluster HA for WebSphere MQ Integrator                       | mqi          |
       | Sun Cluster Oracle Application Server(9i)                        | 9ias         |
       | Sun Cluster Support for Oracle Real Application Clusters         | oracle_rac   |
       +------------------------------------------------------------------+--------------+

       -O
           Overrides the hardware validation and bypasses the version-compatibility checks.

       -S {interact | testaddr=testipaddr@adapter[,testaddr=...]}
           Specifies test IP addresses.

           This option allows the user either to direct the command to prompt the user for the required IP network multipathing
           (IPMP) addresses or to supply a set of IPMP test addresses on the command line for the conversion of NAFO to IPMP groups.
           See Chapter 30, Introducing IPMP (Overview), in System Administration Guide: IP Services for additional information about
           IPMP.

           It is illegal to combine both the interact and the testaddr suboptions on the same command line.

           Note - The -S option is only required when one or more of the NAFO adapters in pnmconfig is not already converted to use
           IPMP.

           The suboptions of the -S option are the following:

           interact
               Prompts the user to supply one or more IPMP test addresses individually.

           testaddr=testipaddr@adapter
               Directly specifies one or more IPMP test addresses.

               testipaddr
                   The IP address or hostname, in the /etc/inet/hosts file, that will be assigned as the routable, no-failover,
                   deprecated test IP address to the adapter. IPMP uses test addresses to detect failures and repairs. See IPMP
                   Addressing in System Administration Guide: IP Services for additional information about configuring test IP
                   addresses.

               adapter
                   The name of the NAFO network adapter to add to an IPMP group.
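
           A hedged example of supplying the test addresses on the command line instead of interactively; the addresses and adapter
           names below are placeholders and must match the local /etc/inet/hosts entries and IPMP plan:

               # ./scinstall -u update -S testaddr=192.168.50.11@hme0,testaddr=192.168.50.12@hme1

           Each testaddr entry assigns one no-failover test address to the named NAFO adapter as it is converted to an IPMP group;
           the -S option cannot be combined with -s.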

   Dual-Partition Upgrade
       Use the -u upgrade-modes and upgrade-options for dual-partition upgrade to perform the multiple stages of a dual-partition
       upgrade.

       The dual-partition upgrade process first involves assigning cluster nodes into two groups, or partitions. Next, you upgrade
       one partition while the other partition provides cluster services. You then switch services to the upgraded partition,
       upgrade the remaining partition, and rejoin the upgraded nodes of the second partition to the cluster formed by the upgraded
       first partition. The upgrade-modes for dual-partition upgrade also include a mode for recovery after a failure during a
       dual-partition upgrade.

       Dual-partition upgrade modes are used in conjunction with the -u update upgrade mode.

       See the upgrade chapter of the Sun Cluster Software Installation Guide for Solaris OS for more information.

       The upgrade-modes and upgrade-options to -u for dual-partition upgrade are as follows:

           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u begin -h nodelist
           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u plan
           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u recover
           media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u status
           /usr/cluster/bin/scinstall -u apply
           /usr/cluster/bin/scinstall -u status

       apply
           Specifies that upgrade of a partition is completed. Run this form of the command from any node in the upgraded partition,
           after all nodes in that partition are upgraded.

           The apply upgrade mode performs the following tasks:

           First partition
               When run from a node in the first partition, the apply upgrade mode prepares all nodes in the first partition to run
               the new software.

               When the nodes in the first partition are ready to support cluster services, the command remotely executes the
               scripts /etc/cluster/ql/cluster_pre_halt_apps and /etc/cluster/ql/cluster_post_halt_apps that are on the nodes in the
               second partition. These scripts are used to call user-written scripts that stop applications that are not under
               Resource Group Manager (RGM) control, such as Oracle Real Application Clusters (RAC).

               o  The cluster_pre_halt_apps script is run before applications that are under RGM control are stopped.

               o  The cluster_post_halt_apps script is run after applications that are under RGM control are stopped, but before the
                  node is halted.

               Note - Before you run the apply upgrade mode, modify the script templates as needed to call other scripts that you
               write to stop certain applications on the node. Place the modified scripts and the user-written scripts that they
               call on each node in the first partition. These scripts are run from one arbitrary node in the first partition. To
               stop applications that are running on more than one node in the first partition, modify the user-written scripts
               accordingly. The unmodified scripts perform no default actions.

               After all applications on the second partition are stopped, the command halts the nodes in the second partition. The
               shutdown initiates the switchover of applications and data services to the nodes in the first partition. Then the
               command boots the nodes in the second partition into cluster mode.

               If a resource group was offline because its node list contains only members of the first partition, the resource
               group comes back online. If the node list of a resource group has no nodes that belong to the first partition, the
               resource group remains offline.

           Second partition
               When run from a node in the second partition, the apply upgrade mode prepares all nodes in the second partition to
               run the new software. The command then boots the nodes into cluster mode. The nodes in the second partition rejoin
               the active cluster that was formed by the nodes in the first partition.

               If a resource group was offline because its node list contains only members of the second partition, the resource
               group comes back online.

               After all nodes have rejoined the cluster, the command performs final processing, reconfigures quorum devices, and
               restores quorum vote counts.

       begin
           Specifies the nodes to assign to the first partition that you upgrade and initiates the dual-partition upgrade process.
           Run this form of the command from any node of the cluster. Use this upgrade mode after you use the plan upgrade mode to
           determine the possible partition schemes.

           First the begin upgrade mode records the nodes to assign to each partition. Next, all applications are stopped on one
           node, then the upgrade mode shuts down the node. The shutdown initiates switchover of each resource group on the node to
           a node that belongs to the second partition, provided that the node is in the resource-group node list. If the node list
           of a resource group contains no nodes that belong to the second partition, the resource group remains offline.

           The command then repeats this sequence of actions on each remaining node in the first partition, one node at a time.

           The nodes in the second partition remain in operation during the upgrade of the first partition. Quorum devices are
           temporarily unconfigured and quorum vote counts are temporarily changed on the nodes.

       plan
           Queries the cluster storage configuration and displays all possible partition schemes that satisfy the shared-storage
           requirement. Run this form of the command from any node of the cluster. This is the first command that you run in a
           dual-partition upgrade.

           Dual-partition upgrade requires that each shared storage array must be physically accessed by at least one node in each
           partition.

           The plan upgrade mode can return zero, one, or multiple partition solutions. If no solutions are returned, the cluster
           configuration is not suitable for dual-partition upgrade. Use the standard upgrade method instead.

           For any partition solution, you can choose either partition group to be the first partition that you upgrade.

       recover
           Recovers the cluster configuration on a node if a fatal error occurs during dual-partition upgrade processing. Run this
           form of the command on each node of the cluster. You must shut down the cluster and boot all nodes into noncluster mode
           before you run this command.

           Once a fatal error occurs, you cannot resume or restart a dual-partition upgrade, even after you run the recover upgrade
           mode.

           The recover upgrade mode restores the /etc/vfstab file and the Cluster Configuration Repository (CCR) database to their
           original state, before the start of the dual-partition upgrade.

           The following list describes in which circumstances to use the recover upgrade mode and in which circumstances to take
           other steps.

           o  If the failure occurred during -u begin processing, run the -u recover upgrade mode.

           o  If the failure occurred after -u begin processing completed but before the shutdown warning for the second partition
              was issued, determine where the error occurred:

              o  If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

              o  If the failure occurred on a node in the second partition, no recovery action is necessary.

           o  If the failure occurred after the shutdown warning for the second partition was issued but before -u apply processing
              started on the second partition, determine where the error occurred:

              o  If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

              o  If the failure occurred on a node in the second partition, reboot the failed node into noncluster mode.

           o  If the failure occurred after -u apply processing was completed on the second partition but before the upgrade
              completed, determine where the error occurred:

              o  If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

              o  If the failure occurred on a node in the first partition but the first partition stayed in service, reboot the
                 failed node.

              o  If the failure occurred on a node in the second partition, run the -u recover upgrade mode.

           In all cases, you can continue the upgrade manually by using the standard upgrade method, which requires the shutdown of
           all cluster nodes.

       status
           Displays the status of the dual-partition upgrade. The following are the possible states:

           Upgrade is in progress
               The scinstall -u begin command has been run but the dual-partition upgrade has not completed.

               The cluster also reports this status if a fatal error occurred during the dual-partition upgrade. In this case, the
               state is not cleared even after recovery procedures are performed and the cluster upgrade is completed by using the
               standard upgrade method.

           Upgrade not in progress
               Either the scinstall -u begin command has not yet been issued, or the dual-partition upgrade has completed
               successfully.

           Run the status upgrade mode from one node of the cluster. The node can be in either cluster mode or noncluster mode. The
           reported state is valid for all nodes of the cluster, regardless of which stage of the dual-partition upgrade the issuing
           node is in.
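
           For example, progress could be checked at any point from one node with the following illustrative invocation, which
           prints one of the two states described above:

               # /usr/cluster/bin/scinstall -u status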

       The following option is supported with the dual-partition upgrade mode:

       -h nodelist
           Specifies a space-delimited list of all nodes that you assign to the first partition. You choose these from the output
           displayed by the plan upgrade mode as valid members of a partition in the partition scheme that you use. The remaining
           nodes in the cluster, which you do not specify to the begin upgrade mode, are assigned to the second partition.

           This option is only valid with the begin upgrade mode.

EXAMPLES
   Establishing a Two-Node Cluster
       The following sequence of commands establishes a typical two-node cluster with Sun Cluster software for Solaris 9 on SPARC
       based platforms. The example assumes that Sun Cluster software packages are already installed on the nodes.

       Insert the installation media on node1 and issue the following commands:

           node1# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           node1# ./scinstall -i -F

       Insert the installation media on node2 and issue the following commands:

           node2# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           node2# ./scinstall -i -N node1

   Establishing a Single-Node Cluster
       The following sequence of commands establishes a single-node cluster with Sun Cluster software for Solaris 9 on SPARC based
       platforms, with all defaults accepted. The example assumes that Sun Cluster software packages are already installed on the
       node.

       Insert the installation media and issue the following commands:

           # cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -i -F -o

   Setting Up a Solaris Install Server
       The following sequence of commands sets up a JumpStart install server to install and initialize Sun Cluster software for
       Solaris 9 on SPARC based platforms on a three-node SCI-PCI cluster.

       Insert the installation media on the install server and issue the following commands:

           # cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -a /export/sc3.1
           # cd /export/sc3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -c /export/jumpstart -h node1 -F -A hme2
           # ./scinstall -c /export/jumpstart -h node2 -N node1 -A hme2
           # ./scinstall -c /export/jumpstart -h node3 -N node1 -A hme2

   Upgrading the Framework and Data Service Software (Standard or Rolling Upgrade)
       The following sequence of commands upgrades the framework and data service software of a cluster to the next Sun Cluster
       release. This example uses the Sun Cluster version for Solaris 9 on SPARC based platforms.

       Perform these operations on each cluster node.

       Note - For a rolling upgrade, perform these operations on one node at a time, after you use the clnode evacuate command to
       move all resource groups and device groups to the other nodes that will remain in the cluster.

       Insert the installation media and issue the following commands:

           ok> boot -x
           # cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -u update -S interact
           # cd /
           # eject cdrom
           # /usr/cluster/bin/scinstall -u update -s all -d /cdrom/cdrom0
           # reboot

   Performing a Dual-Partition Upgrade
       The following sequence of commands uses the dual-partition method to upgrade the framework and data service software of a
       cluster to the next Sun Cluster release. This example uses the Sun Cluster version for Solaris 9 on SPARC based platforms.
       The example queries the cluster for valid partition schemes, assigns nodes to partitions, reboots the node in the first
       partition, returns the first partition to operation after upgrade and reboots the node in the second partition, and returns
       the second partition to the cluster after upgrade.

           # media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/scinstall -u plan
             Option 1
               First partition
                 phys-schost-1
               Second partition
                 phys-schost-2
             ...

           # media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/scinstall -u begin -h phys-schost-1 phys-schost-3

           ok boot -x

       (Upgrade the node in the first partition)

           phys-schost-1# /usr/cluster/bin/scinstall -u apply

           ok boot -x

       (Upgrade the node in the second partition)

           phys-schost-2# /usr/cluster/bin/scinstall -u apply

   Upgrading the Framework and Data Service Software (Live Upgrade)
       The following sequence of commands illustrates the process of performing a live upgrade on an inactive boot environment on a
       SPARC system that runs Solaris 9. In these commands, sc31u2 is the current boot environment and sc32 is the inactive boot
       environment being upgraded. In this example, the data services that are being upgraded are from the Agents installation
       media.

       Note - The commands shown below typically produce copious output. This output is not shown except where necessary for
       clarity.

           # lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
           lucreate: Creation of Boot Environment sc32 successful

           # luupgrade -u -n sc32 -s /net/installmachine/export/solarisX/OS_image
           The Solaris upgrade of the boot environment sc32 is complete.

           # lumount sc32 /sc32

           # cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools/
           # ./scinstall -u update -R /sc32
           # cd /usr/cluster/bin
           # ./scinstall -R /sc32 -u update -s all -d /cdrom/cdrom0
           # cd /
           # eject /cdrom/cdrom0

           # luumount -f sc32
           # luactivate sc32
           Activation of boot environment sc32 successful.

           # init 0
           ok> boot

   Uninstalling a Node
       The following sequence of commands places the node in noncluster mode, then removes Sun Cluster framework and data-service
       software and configuration information from the cluster node, renames the global-devices mount point to the default name
       /globaldevices, and performs cleanup. This example removes a Sun Cluster version for SPARC based platforms.

           ok> boot -x
           # cd /
           # /usr/cluster/bin/scinstall -r

EXIT STATUS
       The following exit values are returned:

       0         Successful completion.

       non-zero  An error occurred.

FILES
       media-mnt-pt/.cdtoc

       media-mnt-pt/Solaris_arch/Product/sun_cluster/.producttoc

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/.clustertoc

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/.order

       media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/defaults

       media-mnt-pt/components/srvc/Solaris_ver/Packages/.clustertoc

       media-mnt-pt/components/srvc/Solaris_ver/Packages/.order

       /etc/cluster/ql/cluster_post_halt_apps

       /etc/cluster/ql/cluster_pre_halt_apps

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +---------------------+-----------------------------------------------------+
       |   ATTRIBUTE TYPE    |                  ATTRIBUTE VALUE                    |
       +---------------------+-----------------------------------------------------+
       | Availability        | Java Enterprise System installation media, SUNWsczu |
       +---------------------+-----------------------------------------------------+
       | Interface Stability | Evolving                                            |
       +---------------------+-----------------------------------------------------+

SEE ALSO
       Intro(1CL), claccess(1CL), clinterconnect(1CL), clnode(1CL), clsetup(1CL), cluster(1CL), add_install_client(1M),
       luactivate(1M), lucreate(1M), lumount(1M), luupgrade(1M), luumount(1M), newfs(1M), patchadd(1M), scconf(1M), scprivipadm(1M),
       scsetup(1M), scversions(1M), setup_install_server(1M), clustertoc(4), netmasks(4), networks(4), order(4), packagetoc(4),
       sctransp_dlpi(7P)

       Sun Cluster Software Installation Guide for Solaris OS, Sun Cluster System Administration Guide for Solaris OS, Sun Cluster
       Upgrade Guide for Solaris OS, System Administration Guide: IP Services

Sun Cluster 3.2                                                 13 Sep 2007                                           scinstall(1M)