scswitch(1M)						  System Administration Commands					      scswitch(1M)

NAME
    scswitch - perform ownership and state change of resource groups and device groups in Sun Cluster configurations

SYNOPSIS
    scswitch -c -h node[:zone][,...] -j resource[,...] -f flag-name
    scswitch {-e | -n} [-M] -j resource[,...] [-h node[:zone][,...]]
    scswitch -F {-g resource-grp[,...] | -D device-group[,...]}
    scswitch -m -D device-group[,...]
    scswitch -Q [-g resource-grp[,...]] [-k]
    scswitch -R -h node[:zone][,...] -g resource-grp[,...]
    scswitch -r [-g resource-grp[,...]]
    scswitch -S -h node[:zone][,...] [-K continue_evac]
    scswitch -s [-g resource-grp[,...]] [-k]
    scswitch {-u | -o} -g resource-grp[,...]
    scswitch -Z [-g resource-grp[,...]]
    scswitch -z -D device-group[,...] -h node[:zone][,...]
    scswitch -z [-g resource-grp[,...]] [-h node[:zone][,...]]

DESCRIPTION
Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

The scswitch command moves resource groups or device groups, also called disk device groups, to new primary nodes. It also provides options for evacuating all resource groups and device groups from a node by moving ownership elsewhere, bringing resource groups or device groups offline and online, enabling or disabling resources, switching resource groups to or from an Unmanaged state, or clearing error flags on resources.

You can run the scswitch command from any node in a Sun Cluster configuration. If a device group is offline, you can use scswitch to bring the device group online onto any host in the node list. However, once the device group is online, a switchover to a spare node is not permitted.

Only one invocation of scswitch at a time is permitted. Do not attempt to kill an scswitch operation that is already underway.

You can use some forms of this command in a non-global zone, referred to simply as a zone. For more information about valid uses of this command in zones, see the descriptions of the individual options. For ease of administration, use this command in the global zone.

OPTIONS
Basic Options

The following basic options are supported. Options that you can use with some of these basic options are described in "Additional Options."

-c
    Clears the -f flag-name error flag on the specified set of resources on the specified nodes or zones.

    For the current release of Sun Cluster software, the -c option is only implemented for the Stop_failed resource state. Clearing the Stop_failed resource state places the resource into the offline state on the specified nodes or zones.

    If you use this option in a non-global zone, this option successfully operates only on resources that can be mastered by that zone. If you use this option in the global zone, this option can operate on any resource. For ease of administration, use this form of the command in the global zone.

    If the Stop method fails on a resource and the Failover_mode property of the resource is set to Hard, the Resource Group Manager (RGM) halts or reboots the node or zone to force the resource (and all other resources mastered by that node or zone) offline. If the Stop method fails on a resource and the Failover_mode property is set to a value other than Hard, the individual resource goes into the Stop_failed resource state, and the resource group is placed into the Error_stop_failed state.

    A resource group in the Error_stop_failed state on any node cannot be brought online on any node, nor can it be edited (you cannot add or delete resources or change resource group properties or resource properties). You must clear the Stop_failed resource state by performing the procedure that is documented in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

    Caution - Make sure that both the resource and its monitor are stopped on the specified node or zone before you clear the Stop_failed resource state. Clearing the Stop_failed resource state without fully killing the resource and its monitor can lead to more than one instance of the resource executing on the cluster simultaneously. If you are using shared storage, this situation can cause data corruption. If necessary, as a last resort, execute a kill(1) command on the associated processes.

-e
    Enables the specified resources.

    If you use this option in a non-global zone, this option successfully operates only on resources that can be mastered by that zone. If you use this option in the global zone, this option can operate on any resource. For ease of administration, use this form of the command in the global zone.

    Once you have enabled a resource, it goes online or offline depending on whether its resource group is online or offline.

    You can specify the -h option with the -e option to enable a resource on only a specified subset of nodes or zones. If you omit the -h option, the specified resources are enabled on all nodes or zones.

-F
    Takes offline the specified resource groups (-g) or device groups (-D) on all nodes.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    When you specify the -F option with the -D option, you can run the -F option only from the global zone.

    When the -F option takes a device group offline, the associated VxVM disk group or Solaris Volume Manager disk set is deported or released by the primary node. Before a device group can be taken offline, all access to its devices must be stopped, and all dependent file systems must be unmounted. You can start an offline device group by issuing an explicit scswitch call, by accessing a device within the group, or by mounting a file system that depends on the group.
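
    For example, a minimal sketch of the -F form (the group names are illustrative, not from an actual configuration) that takes a resource group, and then a device group, offline on all nodes:

        schost-1# scswitch -F -g resource-grp-1
        schost-1# scswitch -F -D device-group-1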

-m
    Takes the specified device groups offline from the cluster for maintenance. The resulting state survives reboots. You can use this option only in the global zone.

    Before a device group can be placed in maintenance mode, all access to its devices must be stopped, and all dependent file systems must be unmounted. If a device group is currently being accessed, the action fails and the specified device groups are not taken offline from the cluster.

    Device groups are brought back online by using the -z option. Only explicit calls to the scswitch command can bring a device group out of maintenance mode.

-n
    Disables the specified resources.

    If you use this option in a non-global zone, this option successfully operates only on resources that can be mastered by that zone. If you use this option in the global zone, this option can operate on any resource. For ease of administration, use this form of the command in the global zone.

    A disabled resource that is online on its current masters is immediately brought offline from its current masters. The disabled resource remains offline regardless of the state of its resource group.

    You can specify the -h option with the -n option to disable a resource on only a specified subset of nodes or zones. If you omit the -h option, the specified resources are disabled on all nodes or zones.

-o
    Takes the specified unmanaged resource groups out of the unmanaged state.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    Once a resource group is in the managed state, the RGM attempts to bring the resource group online.

-Q
    Brings the specified resource groups to a quiescent state.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    If you omit the -g option, the -Q option applies to all resource groups.

    This option stops the specified resource groups from continuously switching from one node to another in the event of the failure of a Start or Stop method. This form of the scswitch command does not exit until the resource groups have reached a quiescent state in which they are no longer stopping or starting on any node.

    If a Monitor_stop, Stop, Postnet_stop, Start, or Prenet_start method fails on any resource in a group while the scswitch -Q command is executing, the resource behaves as if its Failover_mode property was set to None, regardless of its actual setting. Upon failure of one of these methods, the resource moves to an error state (either the Start_failed or Stop_failed resource state) rather than initiating a failover or a reboot of the node.

    When the scswitch -Q command exits, the specified resource groups might be online or offline or in the ONLINE_FAULTED or ERROR_STOPPED_FAILED state. You can determine their current state by executing the clresourcegroup status command.

    If a node dies during execution of the scswitch -Q command, execution might be interrupted, leaving the resource groups in a non-quiescent state. If execution is interrupted, scswitch -Q returns a nonzero exit code and writes an error message to the standard error. In this case, you can reissue the scswitch -Q command.

    You can specify the -k option with the -Q option to hasten the quiescing of the resource groups. If you specify the -k option, it immediately kills all methods that are running on behalf of resources in the affected resource groups. If you do not specify the -k option, methods are allowed to continue running until they exit or exceed their configured timeout.
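
    For example, a minimal sketch (the group names are illustrative) that quiesces two resource groups and immediately kills any resource methods still running on their behalf:

        schost-1# scswitch -Q -g RG1,RG2 -k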

-R
    Takes the specified resource groups offline and then back online on the specified primary nodes or zones.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    The specified node or zone must be a current primary node of the resource group.

-r
    Resumes the automatic recovery actions on the specified resource group, which were previously suspended by the -s option.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    If you omit the -g option, the -r option applies to all resource groups.

    A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic recovery. Whether online or offline, suspended data services remain in their current state. You can still manually switch the resource group to a different state on specified nodes or zones. You can also still enable or disable individual resources in the resource group.

    For information about how to suspend automatic recovery actions on resource groups, see the description of the -s option.

-S
    Switches all resource groups and device groups off the specified node, or switches all resource groups off the specified zone.

    When used on a non-global zone, this option evacuates only the resource groups that are located in that zone. There is no effect on device groups.

    When executed in a global zone, this option can evacuate any specified node or zone in the cluster. When executed in a non-global zone, this option can only evacuate that non-global zone.

    The system attempts to select new primaries based on configured preferences for each group. Not all evacuated groups are necessarily remastered by the same primary. If the groups that are mastered by the specified node or zone cannot all be successfully evacuated from that node or zone, the command exits with an error.

    Resource groups are first taken offline before they are relocated to new primary nodes or zones. An evacuated resource group might remain offline if the system cannot start it on a new primary node or zone.

    If the primary ownership of a device group cannot be changed to one of the other nodes or zones, primary ownership for that device group is retained by the original node or zone.
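
    For example, a minimal sketch (the node and zone names are illustrative) that evacuates only the resource groups located in a non-global zone, leaving device groups unaffected:

        schost-1# scswitch -S -h phys-schost-1:zone-1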

-s
    Suspends the automatic recovery actions on and quiesces the specified resource group.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    If you omit the -g option, the -s option applies to all resource groups.

    A suspended resource group is not automatically started, restarted, or failed over until you explicitly resume automatic recovery of the resource group with the -r option. While automatic recovery of the resource group remains suspended, data services remain online. You can still manually switch the resource group online or offline on specified nodes or zones. You can also still enable or disable individual resources in the resource group.

    You might need to suspend the automatic recovery of a resource group to investigate and fix a problem in the cluster. Or, you might need to perform maintenance on resource group services.

    You can also specify the -k option to immediately kill all methods that are running on behalf of resources in the affected resource groups. By using the -k option, you can speed the quiescing of the resource groups. If you do not specify the -k option, methods are allowed to continue running until they exit or exceed their configured timeout.

    For information about how to resume automatic recovery actions on resource groups, see the description of the -r option.

-u
    Puts the specified managed resource groups into the unmanaged state.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    As a precondition of the -u option, all resources that belong to the indicated resource groups must first be disabled.

-Z
    This option does the following:

    o  Enables all resources of the specified resource groups

    o  Moves those resource groups into the managed state

    o  Brings those resource groups online on all the default primaries

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    If you omit the -g option, the -Z option applies to all resource groups. When the -g option is not specified, the scswitch command attempts to bring all resource groups online, except resource groups that are suspended.
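
    For example, a minimal sketch (the group name is illustrative) that enables all resources of a resource group, moves the group into the managed state, and brings it online on its default primaries:

        schost-1# scswitch -Z -g resource-grp-1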

-z
    Requests a change in mastery of the specified resource group or device group.

    If you use this option in a non-global zone, this option successfully operates only on resource groups whose node list contains that zone. If you use this option in the global zone, this option can operate on any resource group. For ease of administration, use this form of the command in the global zone.

    If you omit the -g option, the -z option applies to all resource groups.

    When used with the -D option, the -z option switches one or more specified device groups to the specified node. Only one primary node name can be specified for a device group's switchover. When multiple device groups are specified, the -D option switches the device groups in the order specified. If the -z -D operation encounters an error, the operation stops and no further switches are performed.

    When used with only the -g option, the -z option brings the specified resource groups, which must already be managed, online on their most preferred nodes or zones. This form of scswitch does not bring a resource group online in violation of its strong RG_affinities, and it writes a warning message if the affinities of a resource group cannot be satisfied on any node or zone. This option does not enable any resources, enable monitoring on any resources, or take any resource groups out of the unmanaged state, as the -Z option does.

    When used with the -g and -h options, the -z option brings the specified resource groups online on the nodes or zones that are specified by the -h option, and it takes them offline on all other cluster nodes or zones. If the node list that is specified with the -h option is empty (-h ""), the -z option takes the resource groups that are specified by the -g option offline from all of their current masters. All nodes or zones that are specified by the -h option must be current members of the cluster and must be potential primaries of all of the resource groups that are specified by the -g option. The number of nodes or zones that are specified by the -h option must not exceed the setting of the Maximum_primaries property of any of the resource groups that are specified by the -g option.

    When used alone (scswitch -z), the -z option switches online all managed resource groups that are not suspended on their most preferred nodes or zones.

    If you configure the RG_affinities property of one or more resource groups and you issue the scswitch -z -g command (with or without the -h option), additional resource groups other than those that are specified after the -g option might be switched as well. RG_affinities is described in rg_properties(5).

Additional Options

You can combine the following additional options with the previous basic options as follows:

-D
    Specifies the name of one or more device groups. This option is only legal with the -F, -m, and -z options.

    You need solaris.cluster.device.admin role-based access control (RBAC) authorization to use this command option with the -F, -m, or -z option (in conjunction with the -h option). See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.

-f
    Specifies the error flag-name. This option is only legal with the -c option. The only error flag that is currently supported is Stop_failed.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option with the -c option. See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.
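
    For example, a minimal sketch (the resource and node names are illustrative) that clears the Stop_failed error flag on a resource, after first making sure that the resource and its monitor are stopped on that node:

        schost-1# scswitch -c -h schost-1 -j resource-1 -f Stop_failed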

-g
    Specifies the name of one or more resource groups. This option is legal only with the -F, -o, -Q, -r, -R, -s, -u, -z, and -Z options.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option with the following options:

    o  -F option

    o  -o option

    o  -Q option

    o  -R option in conjunction with the -h option

    o  -r option

    o  -s option

    o  -u option

    o  -Z option

    o  -z option in conjunction with the -h option

    See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.

-h
    Specifies the name of one or more cluster nodes or zones. This option is only legal with the -c, -e, -n, -R, -S, and -z options.

    When used with the -c, -e, -n, -R, or -z option, the -h option accepts a comma-delimited list of nodes or zones. The specified zones must be in the node list for the specified resource group or in the node list for the resource group that contains the specified resource.

    To specify an empty node list to the -z option, specify two double quotation marks ("") as the argument to the -h option.

    To specify a non-global zone, use the following syntax:

        node:zone

    The node component is the name of the physical node where zone is located. The zone component is the name of the zone that you want to include in Nodelist. For example, to specify the non-global zone zone-1, which is located on the node phys-schost-1, you specify the following text:

        phys-schost-1:zone-1

    For resource groups that are configured with multiple primaries, the node or zone names that the -h option lists must all be valid potential primaries of each resource group that the -g option specifies.

    If a resource group fails to start successfully on the node or zone that the -h option specifies, the resource group might fail over to a different node or zone. This behavior is determined by the setting of the Failover_mode resource property. See r_properties(5) for more information.

    When used with the -S option, the -h option specifies the name of a single node from which to evacuate resource groups and device groups, or the name of a single zone from which to evacuate resource groups only.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option with the -c, -R option (in conjunction with the -g option), -S, or -z option (in conjunction with the -g option). In addition, you need solaris.cluster.device.admin RBAC authorization to use this command option with the -z option (in conjunction with the -D option). See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.
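
    For example, a minimal sketch (the node, zone, and group names are illustrative) that uses the node:zone syntax to switch a resource group onto a non-global zone:

        schost-1# scswitch -z -g resource-grp-1 -h phys-schost-1:zone-1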

-j
    Specifies the names of one or more resources. This option is legal only with the -c, -e, and -n options.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option with the -c, -e, or -n option. See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.

-K
    Specifies the number of seconds to keep resource groups from switching back onto a node or zone after that node or zone has been successfully evacuated.

    Resource groups cannot fail over or automatically switch over onto the node or zone while that node or zone is being evacuated, and, after evacuation is completed, for the number of seconds that you specify with this option. You can, however, initiate a switchover onto the evacuated node or zone with the scswitch -z -g -h command before continue_evac seconds have passed. Only automatic switchovers are prevented.

    This option is legal only with the -S option. You must specify an integer value between 0 and 65535. If you do not specify a value, 60 seconds is used by default.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option. See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.

-k
    Immediately kills Resource Group Manager (RGM) resource methods that are running on behalf of resources in the specified resource groups.

    You can use this option with the -Q and -s options. If you do not specify the -k option, methods are allowed to continue running until they exit or they exceed their configured timeout.

-M
    Enables (-e) or disables (-n) monitoring for the specified resources. When you disable a resource, you need not disable monitoring on it because both the resource and its monitor are kept offline.

    This option is legal only with the -e and -n options.

    You need solaris.cluster.resource.admin RBAC authorization to use this command option with the -e or -n option. See rbac(5).

    You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run su to assume a role. You can also use pfexec to issue privileged Sun Cluster commands.
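
    For example, a minimal sketch (the resource name is illustrative) that enables monitoring for a single resource without changing the enabled state of the resource itself:

        schost-1# scswitch -e -M -j resource-1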

EXAMPLES
Example 1 Switching Over a Resource Group

    The following command switches over resource-grp-2 to be mastered by schost-1.

        schost-1# scswitch -z -h schost-1 -g resource-grp-2

Example 2 Bringing Online a Managed Resource Group Without Enabling Monitoring or Resources

    The following command brings resource-grp-2 online if resource-grp-2 is already managed, but does not enable any resources or enable monitoring on any resources that are currently disabled.

        schost-1# scswitch -z -g resource-grp-2

Example 3 Switching Over a Resource Group Configured to Have Multiple Primaries

    The following command switches over resource-grp-3, a resource group that is configured to have multiple primaries, to be mastered by schost-1,schost-2,schost-3.

        schost-1# scswitch -z -h schost-1,schost-2,schost-3 -g resource-grp-3

Example 4 Moving All Resource Groups and Device Groups Off a Node

    The following command switches over all resource groups and device groups from schost-1 to a new set of primaries.

        schost-1# scswitch -S -h schost-1

Example 5 Moving All Resource Groups and Device Groups Persistently Off a Node

    The following command switches over all resource groups and device groups from schost-1 to a new set of primaries. The command also specifies a 120-second wait before resource groups and device groups are permitted to switch back to schost-1.

    The use of the -K option in the following command prevents resource groups from automatically switching back to schost-1 after schost-1 is successfully evacuated. An example of when a resource group might attempt to switch back to schost-1 is if the resource group fails to start on its new master. Another example is if a resource group has strong negative affinities configured with the RG_affinities property.

        schost-1# scswitch -S -h schost-1 -K 120

Example 6 Restarting Resource Groups

    The following command restarts resource-grp-1 and resource-grp-2 on the non-global zones schost-1:zone1 and schost-2:zone1.

        schost-1# scswitch -R -h schost-1:zone1,schost-2:zone1 -g resource-grp-1,resource-grp-2

Example 7 Disabling Resources

        schost-1# scswitch -n -j resource-1,resource-2

Example 8 Enabling a Resource

        schost-1# scswitch -e -j resource-1

Example 9 Taking Resource Groups to the Unmanaged State

        schost-1# scswitch -u -g resource-grp-1,resource-grp-2

Example 10 Taking Resource Groups Out of the Unmanaged State

        schost-1# scswitch -o -g resource-grp-1,resource-grp-2

Example 11 Switching Over a Device Group

    The following command switches over device-group-1 to be mastered by schost-2.

        schost-1# scswitch -z -h schost-2 -D device-group-1

Example 12 Putting a Device Group Into Maintenance Mode

    The following command puts device-group-1 into maintenance mode.

        schost-1# scswitch -m -D device-group-1

Example 13 Quiescing Resource Groups

    The following command brings resource groups RG1 and RG2 to a quiescent state.

        schost-1# scswitch -Q -g RG1,RG2

Example 14 Clearing a Start_failed Resource State by Switching Over a Resource Group

    The Start_failed resource state indicates that a Start or Prenet_start method failed or timed out on a resource, but its resource group came online anyway. The resource group comes online even though the resource has been placed in a faulted state and might not be providing service. This state can occur if the resource's Failover_mode property is set to None or to another value that prevents the failover of the resource group.

    Unlike the Stop_failed resource state, the Start_failed resource state does not prevent you or the Sun Cluster software from performing actions on the resource group. You do not need to issue the scswitch -c command to clear a Start_failed resource state. You only need to execute a command that restarts the resource.

    The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The command clears this condition by switching the resource group to the schost-2 node.

        schost-1# scswitch -z -h schost-2 -g resource-grp-2

Example 15 Clearing a Start_failed Resource State by Restarting a Resource Group

    The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The command clears this condition by restarting the resource group on the schost-1 node. For more information about the Start_failed resource state, see the r_properties(5) man page.

        schost-1# scswitch -R -h schost-1 -g resource-grp-2

Example 16 Clearing a Start_failed Resource State by Disabling and Enabling a Resource

    The following command clears a Start_failed resource state that has occurred on the resource resource-1 by disabling and then re-enabling the resource. For more information about the Start_failed resource state, see the r_properties(5) man page.

        schost-1# scswitch -n -j resource-1
        schost-1# scswitch -e -j resource-1

EXIT STATUS
This command blocks until requested actions are completely finished or an error occurs. The following exit values are returned:

0
    The command completed successfully.

nonzero
    An error has occurred. scswitch writes an error message to the standard error.

If the scswitch command exits with a nonzero exit status and the error message "cluster is reconfiguring" is displayed, the requested operation might have completed successfully, despite the error. If you doubt the result, you can execute the scswitch command again with the same arguments after the reconfiguration is complete.

If the scswitch command exits with a nonzero exit status and the error message "Resource group failed to start on chosen node and may fail over to other node(s)" is displayed, the resource group continues to reconfigure for some time after the scswitch command exits. Additional scswitch or clresourcegroup operations on that resource group fail until the resource group has reached a terminal state such as the Online, Online_faulted, or Offline state on all nodes.

If you invoke the scswitch command on multiple resources or resource groups and multiple errors occur, the exit value might only reflect one of the errors. To avoid this possibility, invoke the scswitch command on just one resource or resource group at a time.

Some operations are not permitted on a resource group (and its resources) whose RG_system property is True. See rg_properties(5) for more information.
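
For example, a minimal shell sketch (the group and node names are illustrative, and the 60-second wait is an arbitrary placeholder) that reissues the identical invocation once if the first attempt returns a nonzero exit status, as the "cluster is reconfiguring" case described above permits:

    schost-1# scswitch -z -h schost-2 -g resource-grp-1 || { sleep 60; scswitch -z -h schost-2 -g resource-grp-1; }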

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    |Availability                 |SUNWsczu                     |
    +-----------------------------+-----------------------------+
    |Interface Stability          |Evolving                     |
    +-----------------------------+-----------------------------+

SEE ALSO
kill(1), pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevicegroup(1CL), clresourcegroup(1CL), su(1M), attributes(5), rbac(5), r_properties(5), rg_properties(5)

Sun Cluster Data Services Planning and Administration Guide for Solaris OS

WARNINGS
If you take a resource group offline by using the -z or -F option with the -g option, the Offline state of the resource group does not survive node reboots. If a node dies or joins the cluster, or if other resource groups are switching over, the resource group might come online. The resource group comes online on a node or zone even if you previously switched the resource group offline. Even if all of the resources are disabled, the resource group comes online.

To prevent the resource group from coming online automatically, use the -s option to suspend the automatic recovery actions of the resource group. To resume automatic recovery actions, use the -r option.

Sun Cluster 3.2                                                  13 Aug 2007                                                    scswitch(1M)