Switching services from active node to failover node in cluster
Hi All,
I was asked to switch the services from the active node to a failover node, because we need to reboot the active node for regular maintenance. Unfortunately our senior admin left the organization, so it would be helpful if someone could answer this query. Detailed steps would be appreciated.
Here are the details. I need to reboot lab1 and switch all of its services to lab2 or lab3.
I believe /appfs is related to the cluster. I'm a bit rusty with cluster administration.
Please help!
Code:
# hostname
lab2
# clustat
 Member Name        Status
 ------ ----        ------
 lab1-int           Online, rgmanager
 lab2-int           Online, Local, rgmanager
 lab3-int           Online, rgmanager

 Service Name       Owner (Last)    State
 ------- ----       ----- ------    -----
 app_gfs_svc        lab2-int        started
 app_prod           lab1-int        started
 app_test           (lab1-int)      disabled
 app_upgrade        (none)          disabled
# df -TPH /appfs
Filesystem   Type  Size  Used  Avail  Use%  Mounted on
/dev/dm-50   gfs    32G   24G   8.3G   75%  /appfs
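Assuming this is a Red Hat Cluster Suite setup managed by rgmanager (which the clustat output suggests), services are moved between members with clusvcadm. A sketch of the steps, using the member and service names from the output above:

```shell
# lab1-int currently owns only app_prod (app_test and app_upgrade are
# disabled, so there is nothing else to move). Relocate it to lab2:
# -r relocates a running service, -m names the target member.
clusvcadm -r app_prod -m lab2-int

# Confirm lab1-int no longer owns any services before rebooting:
clustat

# Optionally stop the cluster daemons cleanly on lab1 before the reboot
# (run these on lab1; exact init script names may vary by release):
# service rgmanager stop
# service cman stop
```

Note that /appfs is a GFS filesystem, so it is likely mounted on more than one member; the app_gfs_svc service already runs on lab2-int and does not need to be touched. After the maintenance, `clusvcadm -r app_prod -m lab1-int` moves the service back.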
clusvcmgrd(8)                Red Hat Cluster Suite                clusvcmgrd(8)

NAME
       rgmanager - Resource Group (Cluster Service) Manager Daemon
DESCRIPTION
       rgmanager handles management of user-defined cluster services (also
       known as resource groups). This includes handling of user requests,
       including service start, service disable, service relocate, and
       service restart. The service manager daemon also handles restarting
       and relocating services in the event of failures.
HOW IT WORKS
       The service manager is spawned by an init script after the cluster
       infrastructure has been started, and only functions while the cluster
       is quorate and locks are working.

       During initialization, the service manager runs scripts which ensure
       that all services are clear to be started. After that, it determines
       which services need to be started and starts them.

       When an event is received, members which are no longer online have
       their services taken away from them. This should only occur after the
       member has been fenced, when fencing is available.

       When a cluster member determines that it is no longer in the cluster
       quorum, the service manager stops all services and waits for a new
       quorum to form.
COMMAND LINE OPTIONS
       -f     Run in the foreground (do not fork).

       -d     Enable debug-level logging.

       -w     Disable internal process monitoring (for debugging).

       -N     Do not perform stop-before-start. Combined with the -Z flag to
              clusvcadm, this can be used to allow rgmanager to be upgraded
              without stopping a given user service or set of services.
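The -N option's freeze-then-upgrade workflow described above might look like the following sketch (command fragments only; the service name app_prod and the init script path are taken from this thread's environment and may differ elsewhere):

```shell
# Freeze the service so rgmanager leaves it alone during the upgrade
clusvcadm -Z app_prod

# Restart rgmanager with -N so it skips stop-before-start
# (hypothetical restart invocation; the exact mechanism varies by release)
# service rgmanager stop && rgmanager -N

# Unfreeze the service once the upgraded rgmanager is running
clusvcadm -U app_prod
```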
SEE ALSO
       clusvcadm(8)

Red Hat Cluster Suite               Jan 2005                    clusvcmgrd(8)