Full Discussion: Sun Cluster 3.2
Post 302420566 by hergp on Wednesday 12th of May 2010 03:31:51 AM
RG_ON_PENDING_R_RESTART (that's the correct name) means that the resources in a resource group are being restarted. This can be triggered by the clrg restart command or by a restart dependency.
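For example (a sketch only; "my-rg", "res-a" and "res-b" are placeholder names, not from your cluster):

% clrg restart my-rg     # restart all resources in the group
% clrg status my-rg      # the group passes through the pending-restart state while the STOP and START methods run

The same state shows up with a restart dependency. If (assuming I remember the property name correctly) res-b is configured with

% clrs set -p Resource_dependencies_restart=res-a res-b

then every restart of res-a also restarts res-b, which again drives the group through that state.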
 

9 More Discussions You Might Find Interesting

1. High Performance Computing

SUN Cluster Vs Veritas Cluster

Dear All, can anyone explain the pros and cons of Sun and Veritas Cluster? Any comparison chart is highly appreciated. Regards, RAA (4 Replies)
Discussion started by: RAA

2. Solaris

sun cluster

Hi, can you please send me how to configure a cluster on x86 machines? I am using VMware. Is there any difference between x86 and SPARC cluster installations? Please help me. Thanks to all. (1 Reply)
Discussion started by: sijocg

3. Solaris

Sun Cluster

Hi All, I have working knowledge of Solaris 10. Could anyone suggest a doc or material for learning Sun Cluster from scratch? Anything from a basic level. Thanks in advance!! (1 Reply)
Discussion started by: ningy

4. Solaris

Sun cluster and Veritas cluster question.

Yesterday my customer told me to expect a VCS upgrade in the future. He also plans to stop using HDS and move to EMC. I am thinking about how to migrate to a Sun Cluster setup instead. My plan is as follows: leave the existing VCS intact as a fallback plan, then install and build Sun Cluster on... (5 Replies)
Discussion started by: sparcguy

5. Solaris

Sun cluster 3.2

Hi, I have a new mount point which is to be added to an existing resource group. I have read many docs but am not able to find the exact method. Could anyone help me please? (1 Reply)
Discussion started by: ningy

6. Solaris

Sun cluster 3.1

Hi, I have upgraded a Sparc T2000 server which is node 2, in a Sun Cluster 3.1 two node cluster from Solaris 10 Update 2, to 10 Update 7. This is a requirement for a NetApps solution as we currently have a Sun 3510 SAN. I am at a stage where I believe the two nodes will not communicate over the... (0 Replies)
Discussion started by: zetex

7. Solaris

Sun cluster

Can anybody tell me where I can download a Sun Cluster installation and configuration video? Is there any option to install the cluster in a 32-bit virtual machine? Please help. (4 Replies)
Discussion started by: sunnybee

8. Solaris

Sun cluster 4.0 - zone cluster failover doubt

Hello experts - I am planning to install a Sun Cluster 4.0 zone-cluster failover. A few basic doubts: (1) Where should I install the cluster software binaries? (the global zone, or the container zone where I am planning to set up the zone failover) (2) Or should I perform the installation on... (0 Replies)
Discussion started by: NVA

9. Solaris

Sun cluster v3.2 - Is this right?

Running Sun Cluster v3.2, it appears. Two clustered physical servers, both running Solaris 10. Both servers run a number of Oracle DBs etc. BUT I'm a bit concerned that it's been set up but will never switch in the event of failure of one of the hosts. Some of the cluster groups we've... (3 Replies)
Discussion started by: psychocandy
scstat(1M)						  System Administration Commands						scstat(1M)

NAME
scstat - monitor the status of a Sun Cluster configuration

SYNOPSIS
scstat [-DWginpqv [v]] [-h node]

DESCRIPTION
Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

The scstat command displays the current state of Sun Cluster components. Only one instance of the scstat command needs to run on any machine in the Sun Cluster configuration.

When run without any options, scstat displays the status for all components of the cluster. This display includes the following information:

o A list of cluster members
o The status of each cluster member
o The status of resource groups and resources
o The status of every path on the cluster interconnect
o The status of every disk device group
o The status of every quorum device
o The status of every IP network multipathing (IPMP) group and public network adapter

From a non-global zone, referred to simply as a zone, you can run all forms of this command except the -i option. When you run the scstat command from a non-global zone, the output is the same as when run from the global zone except that no status information is displayed for IP network multipathing groups or public network adapters.

You need solaris.cluster.device.read, solaris.cluster.transport.read, solaris.cluster.resource.read, solaris.cluster.node.read, solaris.cluster.quorum.read, and solaris.cluster.system.read RBAC authorization to use this command without options. See rbac(5).

Resources and Resource Groups
The resource state, resource group state, and resource status are all maintained on a per-node basis. For example, a given resource has a distinct state on each cluster node and a distinct status on each cluster node.

The resource state is set by the Resource Group Manager (RGM) on each node, based only on which methods have been invoked on the resource. For example, after the STOP method has run successfully on a resource on a given node, the resource's state will be OFFLINE on that node. If the STOP method exits nonzero or times out, then the state of the resource is Stop_failed.

Possible resource states include: Online, Offline, Start_failed, Stop_failed, Monitor_failed, Online_not_monitored, Starting, and Stopping.

Possible resource group states are: Unmanaged, Online, Offline, Pending_online, Pending_offline, Error_stop_failed, Online_faulted, and Pending_online_blocked.

In addition to resource state, the RGM also maintains a resource status that can be set by the resource itself by using the API. The field Status Message actually consists of two components: status keyword and status message. Status message is optionally set by the resource and is an arbitrary text string that is printed after the status keyword.

Descriptions of possible values for a resource's status are as follows:

DEGRADED    The resource is online, but its performance or availability might be compromised in some way.
FAULTED     The resource has encountered an error that prevents it from functioning.
OFFLINE     The resource is offline.
ONLINE      The resource is online and providing service.
UNKNOWN     The current status is unknown or is in transition.

Device Groups
Device group status reflects the availability of the devices in that group. The following are possible values for device group status and their descriptions:

DEGRADED    The device group is online, but not all of its potential primaries (secondaries) are up. For two-node connectivity, this status basically indicates that a stand-by primary does not exist, which means a failure of the primary node will result in a loss of access to the devices in the group.
OFFLINE     The device group is offline. There is no primary node. The device group must be brought online before any of its devices can be used.
ONLINE      The device group is online. There is a primary node, and devices within the group are ready for I/O.
WAIT        The device group is between one status and another. This status might occur, for example, when a device group is going from offline to online.

IP Network Multipathing Groups
IP network multipathing (IPMP) group status reflects the availability of the backup group and the adapters in the group. The following are possible values for IPMP group status and their descriptions:

OFFLINE     The backup group failed. All adapters in the group are offline.
ONLINE      The backup group is functional. At least one adapter in the group is online.
UNKNOWN     Any other state than those listed before. This could result when an adapter is detached or marked as down by Solaris commands such as if_mpadm or ifconfig.

The following are possible values for IPMP adapter status and their descriptions:

OFFLINE     The adapter failed or the backup group is offline.
ONLINE      The adapter is functional.
STANDBY     The adapter is on standby.
UNKNOWN     Any other state than those listed before. This could result when an adapter is detached or marked as down by Solaris commands such as if_mpadm or ifconfig.
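For orientation, here is a rough mapping from scstat options to the object-oriented command set mentioned in the Note above (an approximation, not taken from this man page; see Intro(1CL) for the authoritative equivalents):

% clnode status              # roughly scstat -n
% clrg status                # roughly scstat -g
% cldevicegroup status       # roughly scstat -D
% clquorum status            # roughly scstat -q
% clinterconnect status      # roughly scstat -W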
OPTIONS
You can specify command options to request the status for specific components. If more than one option is specified, the scstat command prints the status in the specified order. The following options are supported:

-D          Shows status for all disk device groups.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone.
            You need solaris.cluster.device.read RBAC authorization to use this command option. See rbac(5).

-g          Shows status for all resource groups.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone.
            You need solaris.cluster.resource.read RBAC authorization to use this command option. See rbac(5).

-h node     Shows status for the specified node (node) and for the disk device groups of which this node is the primary node. Also shows the status of the quorum devices to which this node holds reservations, of the resource groups of which the node is a potential master, and of the transport paths to which the node is attached.
            You need solaris.cluster.device.read, solaris.cluster.transport.read, solaris.cluster.resource.read, solaris.cluster.node.read, solaris.cluster.quorum.read, and solaris.cluster.system.read RBAC authorization to use this command option. See rbac(5).

-i          Shows status for all IPMP groups and public network adapters.
            You can use this option only in the global zone.

-n          Shows status for all nodes.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone.
            You need solaris.cluster.node.read RBAC authorization to use this command option. See rbac(5).

-p          Shows status for all components in the cluster. Use with -v to display more verbose output.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone, except that no status for IPMP groups or public network adapters is displayed.
            You need solaris.cluster.device.read, solaris.cluster.transport.read, solaris.cluster.resource.read, solaris.cluster.node.read, solaris.cluster.quorum.read, and solaris.cluster.system.read RBAC authorization to use -p with -v. See rbac(5).

-q          Shows status for all device quorums and node quorums.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone.
            You need solaris.cluster.quorum.read RBAC authorization to use this command option. See rbac(5).

-v[v]       Shows verbose output.

-W          Shows status for all cluster transport paths.
            You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone. Output is the same when run from a zone as when run from the global zone.
            You need solaris.cluster.transport.read RBAC authorization to use this command option. See rbac(5).
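A brief illustration (not part of the original man page) of how options combine and how their order controls the output order:

% scstat -pv                 # verbose status for every cluster component
% scstat -q -W               # quorum status first, then transport path status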
EXAMPLES
Example 1 Using the scstat Command

The following command displays the status of all resource groups followed by the status of all components related to the specified host:

% scstat -g -h host

The output that is displayed appears in the order in which the options are specified. These results are the same results you would see by typing the two commands:

% scstat -g

and

% scstat -h host
EXIT STATUS
The following exit values are returned:

0           The command completed successfully.

nonzero     An error has occurred.
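As a sketch (not from the man page), a monitoring script might test the exit value as follows; the PATH line assumes the usual /usr/cluster/bin install location:

#!/bin/sh
# Add the Sun Cluster commands to the search path.
PATH=/usr/cluster/bin:$PATH; export PATH

# Report an error if scstat cannot display quorum status.
scstat -q > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "scstat -q returned a nonzero exit status" >&2
    exit 1
fi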
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

+-----------------------------+-----------------------------+
|       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
+-----------------------------+-----------------------------+
|Availability                 |SUNWsczu                     |
+-----------------------------+-----------------------------+
|Interface Stability          |Evolving                     |
+-----------------------------+-----------------------------+
SEE ALSO
Intro(1CL), cluster(1CL), if_mpadm(1M), ifconfig(1M), scha_resource_setstatus(1HA), scha_resource_setstatus(3HA), attributes(5)
NOTES
An online quorum device means that the device was available for contributing to the formation of quorum when quorum was last established. From the context of the quorum algorithm, the device is online because it actively contributed to the formation of quorum. However, an online quorum device might not necessarily continue to be in a healthy enough state to contribute to the formation of quorum when quorum is re-established. The current version of Sun Cluster does not include a disk monitoring facility or regular probes to the quorum devices.

Sun Cluster 3.2                             10 Jul 2006                             scstat(1M)