clresourcegroup(1CL) Sun Cluster Maintenance Commands clresourcegroup(1CL)
NAME
clresourcegroup, clrg - manage resource groups for Sun Cluster data services
SYNOPSIS
/usr/cluster/bin/clresourcegroup -V
/usr/cluster/bin/clresourcegroup [subcommand]
-?
/usr/cluster/bin/clresourcegroup subcommand
[options] -v [resourcegroup ...]
/usr/cluster/bin/clresourcegroup add-node -n node[:zone][,...]
[-S] [-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup create [-S] [ -n
node[:zone][,...]] [-p name=value] [-z zone] [...]
resourcegroup...
/usr/cluster/bin/clresourcegroup create -i {-
| clconfigfile} [-S] [ -n node[:zone][,...]] [-p name=value]
[-z zone] [...] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup delete [-F] {+
| resourcegroup...}
/usr/cluster/bin/clresourcegroup evacuate -n node[:zone][,...]
[-T seconds] [-z zone] [+]
/usr/cluster/bin/clresourcegroup export [-o {-
| clconfigfile}] [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup list [-n node[:zone][,...]]
[-r resource[,...]] [-s state[,...]] [-t resourcetype[,...]]
[-z zone] [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup manage {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup offline [-n node[:zone][,...]]
[-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup online [-e] [-m]
[-M] [-n node[:zone][,...]] [-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup quiesce [-k] {+
| resourcegroup...}
/usr/cluster/bin/clresourcegroup remaster {+
| resourcegroup...}
/usr/cluster/bin/clresourcegroup remove-node -n node[:zone][,...]
[-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup restart [-n node[:zone][,...]]
{+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup resume {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup set [-i {- | clconfigfile}]
[-n node[:zone][,...]] [-p name[+|-]=value] [...]
[-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup show [-n node[:zone][,...]]
[-p name[,...]] [-r resource[,...]] [-t resourcetype[,...]]
[-z zone] [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup status [-n node[:zone][,...]]
[-r resource[,...]] [-s state[,...]] [-t resourcetype[,...]]
[-z zone] [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup suspend [-k] {+
| resourcegroup...}
/usr/cluster/bin/clresourcegroup switch -n node[:zone][,...]
[-e] [-m] [-M] [-z zone] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup unmanage {+
| resourcegroup...}
DESCRIPTION
This command manages Sun Cluster data service resource groups.
You can omit subcommand only if the option that you specify is the -? option or the -V option.
Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.
The clrg command is the short form of the clresourcegroup command.
With the exception of list, show, and status, subcommands require at least one operand. However, many subcommands accept the plus sign operand
(+). This operand applies the subcommand to all applicable objects.
You can use some forms of this command in a non-global zone, referred to simply as a zone. For more information about valid uses of this
command in zones, see the descriptions of the individual subcommands. For ease of administration, use this command in the global zone.
Resources and Resource Groups
The resource state, resource group state, and resource status are all maintained on a per-node basis. For example, a given resource has a
distinct state on each cluster node and a distinct status on each cluster node.
Note -
State names, such as Offline and Start_failed, are not case sensitive. You can use any combination of uppercase and lowercase letters
when you specify state names.
The resource state is set by the Resource Group Manager (RGM) on each node, based only on which methods have been invoked on the resource.
For example, after the STOP method has run successfully on a resource on a given node, the resource's state is Offline on that node. If the
STOP method exits nonzero or times out, the state of the resource is Stop_failed.
Possible resource states include: Online, Offline, Start_failed, Stop_failed, Monitor_failed, Online_not_monitored, Starting, and Stopping.
Possible resource group states are: Unmanaged, Online, Offline, Pending_online, Pending_offline, Error_stop_failed, Online_faulted, and
Pending_online_blocked.
In addition to resource state, the RGM also maintains a resource status that can be set by the resource itself by using the API. The field
Status Message actually consists of two components: status keyword and status message. Status message is optionally set by the resource and
is an arbitrary text string that is printed after the status keyword.
Descriptions of possible values for a resource's status are as follows:
Degraded The resource is online, but its performance or availability might be compromised in some way.
Faulted The resource has encountered an error that prevents it from functioning.
Offline The resource is offline.
Online The resource is online and providing service.
Unknown The current status is unknown or is in transition.
SUBCOMMANDS
The following subcommands are supported:
add-node
Adds a node or zone to the end of the Nodelist property for a resource group.
You can use this subcommand only in the global zone.
The order of the nodes and zones in the list specifies the preferred order in which the resource group is brought online on those nodes
or zones. To add a node or zone to a different position in the Nodelist property, use the set subcommand.
Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See
the rbac(5) man page.
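For example, assuming an existing resource group named rg1 and a cluster node named phys-schost-3 (placeholder names), a command of the
following form appends the node to the end of the group's Nodelist property:
# clresourcegroup add-node -n phys-schost-3 rg1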
create
Creates a new resource group.
You can use this subcommand only in the global zone.
If you specify a configuration file with the -i option, you can specify the plus sign operand (+). This operand specifies that you want
to create all resource groups in that file that do not yet exist.
To set the Nodelist property for the new resource group, specify one of the following options:
o -n node[:zone][,...]
o -p Nodelist=node[:zone][,...]
o -i clconfigfile
The order of the nodes or zones in the list specifies the preferred order in which the resource group is brought online on those nodes
or zones. If you do not specify a node list at creation, the Nodelist property is set to all nodes and zones that are configured in the
cluster. The order is arbitrary.
By default, resource groups are created with the RG_mode property set to Failover. However, by using the -S option or the -p
RG_mode=Scalable option, or by setting Maximum_primaries to a value that is greater than 1, you can create a scalable resource group.
You can set the RG_mode property of a resource group only when that group is created.
Resource groups are always placed in an unmanaged state when they are created. However, when you issue the manage subcommand, or when
you issue the online or switch subcommand with the -M option, the RGM changes their state to a managed state.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
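For example, a command of the following form sketches the creation of a scalable resource group; the group name rg-scal and the node
names are placeholders. The -S option sets RG_mode to Scalable and sets the Maximum_primaries and Desired_primaries properties to the
number of entries in the resulting node list:
# clresourcegroup create -S -n phys-schost-1,phys-schost-2 rg-scal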
delete
Deletes a resource group.
You can use this subcommand only in the global zone.
You can specify the plus sign operand (+) with this subcommand to delete all resource groups.
You cannot delete resource groups if they contain resources, unless you specify the -F option. If you specify the -F option, all
resources within each group, as well as the group, are deleted. All dependencies and affinities are deleted as well.
This subcommand deletes multiple resource groups in an order that reflects resource and resource group dependencies. The order in which
you specify resource groups on the command line does not matter.
The following forms of the clresourcegroup delete command are carried out in several steps:
o When you delete multiple resource groups at the same time
o When you delete a resource group with the -F option
If either of these forms of the command is interrupted, for example, if a node fails, some resource groups might be left in an invalid
configuration.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
evacuate
Brings offline all resource groups on the nodes or zones that you specify with the -n option.
When you specify a physical node (or global zone), this subcommand evacuates all resource groups, including resource groups in all
zones, off the specified node.
When run in a global zone, this subcommand can evacuate any specified node or zone in the cluster. When run in a non-global zone, this
subcommand can only evacuate that non-global zone.
Resource groups are brought offline in an order that reflects resource and resource group dependencies. The order in which you specify
resource groups on the command line does not matter.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
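For example, the following command evacuates all resource groups from a node (the node name phys-schost-2 and the 300-second value are
placeholders) and, through the -T option, keeps resource groups from automatically switching back onto that node for 300 seconds:
# clresourcegroup evacuate -n phys-schost-2 -T 300 +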
export
Writes the configuration information for a resource group to a file or to the standard output (stdout).
You can use this subcommand from the global zone or a non-global zone. If you use this subcommand from a non-global zone, the scope of
the command is not limited to that zone. Information about all resource groups that are supplied as operands to the command is
obtained, regardless of whether the non-global zone can master the resource groups.
The format of this configuration information is described in the clconfiguration(5CL) man page.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
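For example, the following command writes the configuration of all resource groups to a file; the path /var/tmp/rg-config.xml is a
placeholder:
# clresourcegroup export -o /var/tmp/rg-config.xml +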
list
Displays a list, filtered by qualifier options, of resource groups that you specify.
You can use this subcommand from the global zone or a non-global zone. If you use this subcommand from a non-global zone, the scope of
the command is not limited to that zone. Information about all resource groups that are supplied as operands to the command is
obtained, regardless of whether the non-global zone can master the resource groups.
You can use -r resource to include only those resource groups that contain one or more of the specified resources. You can use
-t resourcetype to include only those resource groups that contain a resource of one of the specified types. You can use -n node or
-n node:zone to include only those resource groups that are online on one or more of the specified nodes or zones.
If you specify -s state, only those groups with the states that you specify are listed.
If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options
that you specify, are listed.
If you specify the verbose option -v, the status (whether the resource group is online or offline) is displayed. A resource group is
listed as online even if it is online on only one node or zone in the cluster.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
manage
Brings a resource group that you specify to a managed state.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
offline
Brings a resource group that you specify to an offline state.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
If you specify the -n option, only resource groups on the nodes or zones that you specify are taken offline.
If you do not specify the -n option, resource groups on all nodes and zones are brought offline.
Resource groups are brought offline in an order that reflects resource and resource group dependencies. The order in which you specify
groups on the command line does not matter.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
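For example, assuming a resource group named rg1 and a node named phys-schost-2 (placeholder names), the following command takes the
group offline only on that node:
# clresourcegroup offline -n phys-schost-2 rg1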
online
Brings a resource group that you specify to an online state.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
Use the -n option to specify the list of nodes or zones on which to bring resource groups online. If you do not specify the -n option
and no resource-group affinities exist, this subcommand brings each resource group online on nodes or zones in the order that is
specified by the Nodelist property. For a failover resource group, this is the first online node or zone that is listed in the Nodelist
property. For a scalable resource group, this is the first set of online nodes or zones that are listed in the Nodelist property, up to
Desired_primaries or Maximum_primaries, whichever is less. If you specify the -n option and resource-group affinities do exist, the
affinity settings override the order of nodes in the Nodelist property. See the rg_properties(5) man page for more information about
resource-group affinities.
Unlike the switch subcommand, this subcommand does not attempt to take any nodes or zones that are listed in the Nodelist property to
the Offline state.
If you specify the -e option with this subcommand, all resources in the set of resource groups that are brought online are enabled.
You can specify the -m option to enable monitoring for all resources in the set of resource groups that are brought online. However,
resources are not actually monitored unless they are first enabled and are associated with a MONITOR_START method.
You can also specify the -M option to indicate that all resource groups that are brought online are to be placed in a managed state. If
the -M option is not specified, this subcommand has no effect on unmanaged resource groups.
Resource groups are brought online in an order that reflects resource and resource group dependencies. The order in which you specify
resource groups on the command line does not matter.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
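For example, the following command brings a resource group named rg1 (a placeholder name) online, placing it in a managed state if
necessary (-M), enabling its resources (-e), and enabling monitoring (-m):
# clresourcegroup online -emM rg1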
quiesce
Brings the specified resource group to a quiescent state.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
This command stops a resource group from continuously switching from one node or zone to another node or zone if a START or STOP method
fails.
Use the -k option to kill methods that are running on behalf of resources in the affected resource groups. If you do not specify the -k
option, methods are allowed to continue running until they exit or exceed their configured timeout.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
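For example, the following command quiesces a resource group named rg1 (a placeholder name) and kills any resource methods that are
still running on its behalf:
# clresourcegroup quiesce -k rg1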
remaster
Switches the resource groups that you specify from their current primary nodes or zones to their most preferred nodes or zones. Prefer-
ence order is determined by the Nodelist and RG_affinities properties.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
Unlike the online subcommand, this subcommand can switch resource groups offline from their current masters to bring them online on
more preferred masters.
Resource groups are switched in an order that reflects resource group dependencies and affinities. The order in which you specify
resource groups on the command line does not matter.
This subcommand has no effect on unmanaged resource groups.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
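For example, the following command switches a resource group named rg1 (a placeholder name) back onto its most preferred master, as
determined by the Nodelist and RG_affinities properties:
# clresourcegroup remaster rg1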
remove-node
Removes a node or zone from the Nodelist property of a resource group.
You can use this subcommand only in the global zone.
After removing the node or zone, remove-node might reset the value of the Maximum_primaries or Desired_primaries property to the new
number of nodes or zones in the Nodelist property. remove-node resets the value of the Maximum_primaries or Desired_primaries property
only if either value exceeds the new number of nodes or zones in the Nodelist property.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
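For example, the following command removes a node named phys-schost-3 from the Nodelist property of a resource group named rg1 (both
names are placeholders):
# clresourcegroup remove-node -n phys-schost-3 rg1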
restart
Takes a resource group offline and then back online on the same set of primary nodes or zones that currently host the resource group.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
If you specify the -n option, the resource group is restarted only on current masters that are in the list of nodes or zones that you
specify.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
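For example, the following command restarts a resource group named rg1 on those of its current masters that are in the specified node
list (the group and node names are placeholders):
# clresourcegroup restart -n phys-schost-1 rg1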
resume
Resumes the automatic recovery actions on the specified resource group, which were previously suspended by the suspend subcommand.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic
recovery. Whether online or offline, suspended data services remain in their current state. You can still manually switch the resource
group to a different state on specified nodes or zones. You can also still enable or disable individual resources in the resource
group.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
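For example, the following command resumes automatic recovery actions for a previously suspended resource group named rg1 (a
placeholder name):
# clresourcegroup resume rg1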
set
Modifies the properties that are associated with the resource groups that you specify.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
You can modify the Nodelist property either with -p Nodelist=node[:zone][,...] or, as a convenience, with -n node[:zone][,...].
You can also use the information in the clconfigfile file by specifying the -i option with the set subcommand. See the clconfigura-
tion(5CL) man page.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
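For example, the following command sets the Nodelist property of a resource group named rg1 to the two nodes that are specified with
the -n option (all names are placeholders):
# clresourcegroup set -n phys-schost-1,phys-schost-2 rg1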
show
Generates a configuration report, filtered by qualifier options, for resource groups that you specify.
You can use this subcommand from the global zone or a non-global zone. If you use this subcommand from a non-global zone, the scope of
the command is not limited to that zone. Information about all resource groups that are supplied as operands to the command is
obtained, regardless of whether the non-global zone can master the resource groups.
You can use -r resource to include only those resource groups that contain one or more of the specified resources. You can use
-t resourcetype to include only those resource groups that contain a resource of one of the specified types. You can use -n node or
-n node:zone to include only those resource groups that are online on one or more of the specified nodes or zones.
You can use the -p option to display a selected set of resource group properties rather than all resource group properties.
If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options
that you specify, are listed.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
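For example, the following command reports only the Nodelist property of a resource group named rg1 (a placeholder name):
# clresourcegroup show -p Nodelist rg1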
status
Generates a status report, filtered by qualifier options, for resource groups that you specify.
You can use this subcommand from the global zone or a non-global zone. If you use this subcommand from a non-global zone, the scope of
the command is not limited to that zone. Information about all resource groups that are supplied as operands to the command is
obtained, regardless of whether the non-global zone can master the resource groups.
You can use -r resource to include only those resource groups that contain one or more of the specified resources. You can use
-t resourcetype to include only those resource groups that contain a resource of one of the specified types. You can use -n node or
-n node:zone to include only those resource groups that are online on one or more of the specified nodes or zones.
If you specify -s state, only those groups with the states that you specify are listed.
Note -
You can specify either the -n option or the -s option with the status subcommand. But, you cannot specify both options at the same
time with the status subcommand.
If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options
that you specify, are listed.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
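For example, the following command reports the status of only those resource groups that are in the Offline state on all nodes and
zones:
# clresourcegroup status -s Offline +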
suspend
Suspends the automatic recovery actions on and quiesces the specified resource group.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic
recovery. Whether online or offline, suspended data services remain in their current state. You can still manually switch the resource
group to a different state on specified nodes or zones. You can also still enable or disable individual resources in the resource
group.
You might need to suspend the automatic recovery of a resource group to investigate and fix a problem in the cluster or perform mainte-
nance on resource group services.
You can also specify the -k option to immediately kill methods that are running on behalf of resources in the affected resource groups.
By using the -k option, you can speed the quiescing of the resource groups. If you do not specify the -k option, methods are allowed to
continue running until they exit or they exceed their configured timeout.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
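For example, the following command suspends automatic recovery actions for a resource group named rg1 (a placeholder name) and kills
any resource methods that are still running on its behalf:
# clresourcegroup suspend -k rg1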
switch
Changes the node or zone, or set of nodes or zones, that is mastering a resource group that you specify.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
Use the -n option to specify the list of nodes or zones on which to bring the resource groups online.
If a resource group is not already online, it is brought online on the set of nodes or zones that is specified by the -n option. How-
ever, groups that are online are brought offline on nodes or zones that are not specified by the -n option before the groups are
brought online on new nodes or zones.
If you specify -e with this subcommand, all resources in the set of resource groups that are brought online are enabled.
You can specify -m to enable monitoring for all resources in the set of resource groups that are brought online. However, resources are
not actually monitored unless they are first enabled and are associated with a MONITOR_START method.
You can specify the -M option to indicate that all resource groups that are brought online are to be placed in a managed state. If the
-M option is not specified, this subcommand has no effect on unmanaged resource groups.
Resource groups are brought online in an order that reflects resource and resource group dependencies. The order in which you specify
groups on the command line does not matter.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
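For example, the following command switches a resource group named rg1 to the node phys-schost-2 (both names are placeholders),
enabling its resources (-e) and enabling monitoring (-m) as the group is brought online:
# clresourcegroup switch -em -n phys-schost-2 rg1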
unmanage
Brings a resource group that you specify to an unmanaged state.
If you use this subcommand in a non-global zone, this subcommand successfully operates only on resource groups whose node list contains
that zone. If you use this subcommand in the global zone, this subcommand can operate on any resource group.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
OPTIONS
The following options are supported:
Note -
Both the short and long form of each option is shown in this section.
-?
--help
Displays help information.
You can specify this option with or without a subcommand.
If you specify this option without a subcommand, the list of all available subcommands is displayed.
If you specify this option with a subcommand, the usage for that subcommand is displayed.
If you specify this option with the create or set subcommands, help information is displayed for all resource group properties.
If you specify this option with other options, with subcommands, or with operands, they are all ignored. No other processing occurs.
-e
--enable
Enables all resources within a resource group when the group is brought online.
You can use this option only with the switch and online subcommands.
-F
--force
Deletes a resource group and all of its resources forcefully, even if those resources are enabled or online. This option also removes
both resources and resource groups from any dependency property settings or affinity property settings in other resources and in other
resource groups.
Use the -F option with the delete subcommand with care. A forced deletion might cause changes to other resource groups that reference
the deleted resource group, such as when a dependency or affinity is set. Dependent resources might be left with an invalid or error
state after the forced deletion. If this occurs, you might need to reconfigure or restart the affected dependent resources.
-i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}
Specifies that you want to use the configuration information that is located in the clconfigfile file. See the clconfiguration(5CL) man
page.
Specify a dash (-) with this option to provide configuration information through the standard input (stdin).
If you specify other options, they take precedence over the options and information in clconfigfile.
Only those resource groups that you specify are affected by this option.
-k
--kill
Kills RGM resource methods that are running on behalf of resources in the resource group that you specify.
You can use this option with the quiesce and suspend subcommands. If you do not specify the -k option, methods are allowed to continue
running until they exit or they exceed their configured timeout.
-m
--monitor
Enables monitoring for all resources within a resource group when the resource group is brought online.
Resources, however, are not actually monitored unless they are first enabled and are associated with a MONITOR_START method.
You can use this option only with the switch and online subcommands.
-M
--manage
Specifies that all resource groups that are brought online by the switch or online subcommand are to be in a managed state.
-n node[:zone][,...]
--node=node[:zone][,...]
--node node[:zone][,...]
Specifies a node or zone, or a list of nodes or zones.
You can specify the name or identifier of a node for node. You can also specify a zone with node.
When used with the list, show, and status subcommands, this option limits the output. Only those resource groups that are currently
online in one or more zones or on one or more nodes in the node list are included.
Specifying this option with the create, add-node, remove-node, and set subcommands is equivalent to setting the Nodelist property. The
order of the nodes or zones in the Nodelist property specifies the order in which the group is to be brought online on those nodes or
zones. If you do not specify a node list with the create subcommand, the Nodelist property is set to all nodes and zones in the clus-
ter. The order is arbitrary.
When used with the switch and online subcommands, this option specifies the nodes or zones on which to bring the resource group online.
When used with the evacuate and offline subcommands, this option specifies the nodes or zones on which to bring the resource group off-
line.
When used with the restart subcommand, this option specifies the nodes or zones on which to restart the resource group. The resource group
is restarted on current masters that are in the specified list.
-o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}
Writes resource group configuration information to a file or to the standard output (stdout). The format of the configuration informa-
tion is described in the clconfiguration(5CL) man page.
If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If
you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for
the command is suppressed.
You can use this option only with the export subcommand.
-p name
--property=name
--property name
Specifies a list of resource group properties.
You use this option with the show subcommand.
For information about the properties that you can set or modify with the create or set subcommand, see the description of the
-p name=value option.
If you do not specify this option, the show subcommand lists most resource group properties. If you do not specify this option and you
specify the -v or --verbose option with the show subcommand, the subcommand lists all resource group properties.
Resource group properties that you can specify are described in Resource Group Properties in Sun Cluster Data Services Planning and
Administration Guide for Solaris OS.
-p name=value
-p name+=array-values
-p name-=array-values
--property=name=value
--property=name+=array-values
--property=name-=array-values
--property name=value
--property name+=array-values
--property name-=array-values
Sets or modifies the value of a resource group property.
You can use this option only with the create and set subcommands.
For information about the properties about which you can display information with the show subcommand, see the description of the
-p name option.
Multiple instances of -p are allowed.
The operators to use with this option are as follows:
= Sets the property to the specified value. The create and set subcommands accept this operator.
+= Adds one or more values to a list of property values. Only the set subcommand accepts this operator. You can specify this
operator only for properties that accept lists of string values, for example, Nodelist.
-= Removes one or more values from a list of property values. Only the set subcommand accepts this operator. You can specify
this operator only for properties that accept lists of string values, for example, Nodelist.
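For example, the following command uses the -= operator to remove a node named phys-schost-4 (a placeholder name) from the Nodelist
property of all resource groups:
# clresourcegroup set -p Nodelist-=phys-schost-4 +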
-r resource[,...]
--resource=resource[,...]
--resource resource[,...]
Specifies a resource or a list of resources.
You can use this option only with the list, show, and status subcommands. This option limits the output from these commands. Only those
resource groups that contain one or more of the resources in the resource list are output.
-s state[,...]
--state=state[,...]
--state state[,...]
Specifies a resource group state or a list of resource group states.
You can use this option only with the list and status subcommands. This option limits the output so that only those resource groups
that are in the specified state on any specified nodes or zones are displayed. You can specify one or more of the following arguments
(states) with this option:
Error_stop_failed
Any specified resource group that is in the Error_stop_failed state on any node or zone that you specify is displayed.
Not_online
Any specified resource group that is in any state other than online on any node or zone that you specify is displayed.
Offline
A specified resource group is displayed only if it is in the Offline state on all nodes and zones that you specify.
Online
Any specified resource group that is in the Online state on any node or zone that you specify is displayed.
Online_faulted
Any specified resource group that is in the Online_faulted state on any node or zone that you specify is displayed.
Pending_offline
Any specified resource group that is in the Pending_offline state on any node or zone that you specify is displayed.
Pending_online
Any specified resource group that is in the Pending_online state on any node or zone that you specify is displayed.
Pending_online_blocked
Any specified resource group that is in the Pending_online_blocked state on any node or zone that you specify is displayed.
Unmanaged
Any specified resource group that is in the Unmanaged state on any node or zone that you specify is displayed.
-S
--scalable
Creates a scalable resource group or updates the Maximum_primaries and Desired_primaries properties.
You can use this option only with the create and add-node subcommands.
When used with the create subcommand, this option creates a scalable resource group rather than a failover resource group. This option
also sets both the Maximum_primaries and Desired_primaries properties to the number of nodes and zones in the resulting Nodelist prop-
erty.
You can use this option with the add-node subcommand only if the resource group is already scalable. When used with the add-node sub-
command, this option updates both the Maximum_primaries and Desired_primaries properties to the number of nodes and zones in the
resulting Nodelist property.
You can also set the RG_mode, Maximum_primaries, and Desired_primaries properties with the -p option.
-t resourcetype[,...]
--type=resourcetype[,...]
--type resourcetype[,...]
Specifies a resource type or a list of resource types.
You can use this option only with the list, show, and status subcommands. This option limits the output from these commands. Only those
resource groups that contain one or more of the resources of a type that is included in the resource type list are output.
You specify resource types as [prefix.]type[:RT-version]. For example, an nfs resource type might be represented as SUNW.nfs:3.2,
SUNW.nfs, or nfs. You need to include an RT-version only if there is more than one version of a resource type that is registered in
the cluster. If you do not include a prefix, SUNW is assumed.
-T seconds
--time=seconds
--time seconds
Specifies the number of seconds to keep resource groups from switching back onto a node or zone after you have evacuated resource
groups from the node or zone.
You can use this option only with the evacuate subcommand. You must specify an integer value between 0 and 65535 for seconds. If you do
not specify a value, 60 seconds is used by default.
Resource groups cannot fail over or automatically switch over onto the node or zone while that node or zone is being taken offline.
This option also specifies that after a node or zone is evacuated, resource groups cannot fail over or automatically switch over for
seconds seconds. You can, however, initiate a switchover onto the evacuated node or zone by using the switch and online subcommands
before the timer expires. Only automatic switchovers are prevented.
-v
--verbose
Displays verbose information on the standard output (stdout).
-V
--version
Displays the version of the command.
If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the com-
mand is displayed. No other processing occurs.
-z zone
--zone=zone
--zone zone
Applies the same zone name to all nodes in a node list for which a zone is not explicitly specified. You can specify this option only
when you use the -n option.
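For example, the following command applies the zone name zone1 to both nodes in the node list (all names are placeholders); it is
equivalent to specifying -n phys-schost-1:zone1,phys-schost-2:zone1:
# clresourcegroup online -n phys-schost-1,phys-schost-2 -z zone1 rg1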
OPERANDS
The following operands are supported:
resourcegroup The name of the resource group that you want to manage.
+ All resource groups.
EXIT STATUS
The complete set of exit status codes for all commands in this command set are listed in the Intro(1CL) man page. Returned exit codes are
also compatible with the return codes that are described in the scha_calls(3HA) man page.
If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command pro-
cesses the next operand in the operand list. The returned exit code always reflects the error that occurred first.
This command returns the following exit status codes:
0 CL_NOERR
No error
The command that you issued completed successfully.
1 CL_ENOMEM
Not enough swap space
A cluster node ran out of swap memory or ran out of other operating system resources.
3 CL_EINVAL
Invalid argument
You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the -i option was
incorrect.
6 CL_EACCESS
Permission denied
The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5)
man pages for more information.
35 CL_EIO
I/O error
A physical input/output error has occurred.
36 CL_ENOENT
No such object
The object that you specified cannot be found for one of the following reasons:
o The object does not exist.
o A directory in the path to the configuration file that you attempted to create with the -o option does not exist.
o The configuration file that you attempted to access with the -i option contains errors.
38 CL_EBUSY
Object busy
You attempted to remove a cable from the last cluster interconnect path to an active cluster node. Or, you attempted to remove a node
from a cluster configuration from which you have not removed references.
39 CL_EEXIST
Object exists
The device, device group, cluster interconnect component, node, cluster, resource, resource type, or resource group that you specified
already exists.
EXAMPLES
Example 1 Creating a New Failover Resource Group
The first command in the following example creates the failover resource groups rg1 and rg2. The second command adds the resources that are
included in the configuration file cluster-1.xml to these resource groups.
# clresourcegroup create rg1 rg2
# clresource create -g rg1,rg2 -i /net/server/export/cluster-1.xml +
Example 2 Bringing All Resource Groups Online
The following command brings all resource groups online, with all resources enabled and monitored.
# clresourcegroup online -emM +
Example 3 Adding a Node to the Nodelist Property
The following command adds the node phys-schost-4 to the Nodelist property for all resource groups.
# clresourcegroup set -p Nodelist+=phys-schost-4 +
Example 4 Adding a Zone to the Nodelist Property
The following command adds the zone zone1 on node phys-schost-4 to the Nodelist property for all resource groups.
# clresourcegroup set -p Nodelist+=phys-schost-4:zone1 +
Example 5 Evacuating All Resource Groups From a Node
The following command evacuates all resource groups from the node phys-schost-3.
# clresourcegroup evacuate -n phys-schost-3 +
Example 6 Evacuating All Resource Groups From a Zone
The following command evacuates all resource groups from the zone zone1 on node phys-schost-3.
# clresourcegroup evacuate -n phys-schost-3:zone1 +
Example 7 Bringing a Resource Group Offline on All Nodes and Zones
The following command brings the resource group rg1 offline on all nodes and zones.
# clresourcegroup offline rg1
Example 8 Refreshing an Entire Resource Group Manager Configuration
The first command in the following example deletes all resources and resource groups, even if they are enabled and online. The second
command unregisters all resource types. The third command creates the resources that are included in the configuration file
cluster-1.xml. The third command also registers the resources' resource types and creates all resource groups on which the resources
depend.
# clresourcegroup delete --force +
# clresourcetype unregister +
# clresource create -i /net/server/export/cluster-1.xml -d +
Example 9 Listing All Resource Groups
The following command lists all resource groups.
# clresourcegroup list
rg1
rg2
Example 10 Listing All Resource Groups With Their Resources
The following command lists all resource groups with their resources. Note that rg3 has no resources.
# clresourcegroup list -v
Resource Group Resource
-------------- --------
rg1 rs-2
rg1 rs-3
rg1 rs-4
rg1 rs-5
rg2 rs-1
rg3 -
Example 11 Listing All Resource Groups That Include Particular Resources
The following command lists all groups that include Sun Cluster HA for NFS resources.
# clresourcegroup list -t nfs
rg1
Example 12 Clearing a Start_failed Resource State by Switching Over a Resource Group
The Start_failed resource state indicates that a Start or Prenet_start method failed or timed out on a resource, but its resource group
came online anyway. The resource group comes online even though the resource has been placed in a faulted state and might not be providing
service. This state can occur if the resource's Failover_mode property is set to None or to another value that prevents the failover of the
resource group.
Unlike the Stop_failed resource state, the Start_failed resource state does not prevent you or the Sun Cluster software from performing
actions on the resource group. You do not need to issue the reset subcommand to clear a Start_failed resource state. You only need to exe-
cute a command that restarts the resource.
The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The com-
mand clears this condition by switching the resource group to the schost-2 node.
# clresourcegroup switch -n schost-2 resource-grp-2
Example 13 Clearing a Start_failed Resource State by Restarting a Resource Group
The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The com-
mand clears this condition by restarting the resource group on the schost-1 node, which originally hosted the resource group.
# clresourcegroup restart resource-grp-2
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWsczu |
+-----------------------------+-----------------------------+
|Interface Stability |Evolving |
+-----------------------------+-----------------------------+
SEE ALSO
clresource(1CL), clresourcetype(1CL), cluster(1CL), Intro(1CL), su(1M), scha_calls(3HA), attributes(5), rbac(5), rg_properties(5), clcon-
figuration(5CL)
NOTES
The superuser can run all forms of this command.
All users can run this command with the -? (help) or -V (version) option.
If you take a resource group offline with the offline subcommand, the Offline state of the resource group does not survive node reboots. In
other words, if a node dies or joins the cluster, the resource group might come online on some node or zone, even if you previously
switched the resource group offline. Even if all of the resources are disabled, the resource group will come online.
To prevent the resource group from coming online automatically, use the suspend subcommand to suspend the automatic recovery actions of the
resource group. To resume automatic recovery actions, use the resume subcommand.
To run the clresourcegroup command with other subcommands, users other than superuser require RBAC authorizations. See the following table.
+------------+---------------------------------------------------------+
|Subcommand | RBAC Authorization |
+------------+---------------------------------------------------------+
|add-node | solaris.cluster.modify |
+------------+---------------------------------------------------------+
|create | solaris.cluster.modify |
+------------+---------------------------------------------------------+
|delete | solaris.cluster.modify |
+------------+---------------------------------------------------------+
|evacuate | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|export | solaris.cluster.read |
+------------+---------------------------------------------------------+
|list | solaris.cluster.read |
+------------+---------------------------------------------------------+
|manage | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|offline | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|online | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|quiesce | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|remaster | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|remove-node | solaris.cluster.modify |
+------------+---------------------------------------------------------+
|restart | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|resume | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|set | solaris.cluster.modify |
+------------+---------------------------------------------------------+
|show | solaris.cluster.read |
+------------+---------------------------------------------------------+
|status | solaris.cluster.read |
+------------+---------------------------------------------------------+
|suspend | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|switch | solaris.cluster.admin |
+------------+---------------------------------------------------------+
|unmanage | solaris.cluster.admin |
+------------+---------------------------------------------------------+
Sun Cluster 3.2 31 Jul 2007 clresourcegroup(1CL)