Sun Cluster resource group can't fail over — Post 302213074 by lesliek, Wednesday 9th of July 2008, 06:03 AM
Hi,

Thanks for getting back in contact. I have attached a copy of the messages file from an attempt to fail over the proxy2-rg resource group.
I will send the additional information from the files you have requested:
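For reference, a switchover like this is normally driven with something like the commands below while tailing the messages file; the exact syntax depends on the cluster release (clresourcegroup on Sun Cluster 3.2, scswitch on 3.1), so treat this as a sketch rather than a transcript of what was run:

  # Sun Cluster 3.2 style: move proxy2-rg onto node C2SRV2
  clresourcegroup switch -n C2SRV2 proxy2-rg

  # Sun Cluster 3.1 equivalent
  scswitch -z -g proxy2-rg -h C2SRV2

  # watch the RGM method messages while the switch runs
  tail -f /var/adm/messages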

Jul 9 11:54:51 C2SRV2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_prenet_start> for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, timeout <1800> seconds
Jul 9 11:54:51 C2SRV2 Cluster.RGM.rgmd: [ID 252072 daemon.notice] 50 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hastorageplus/hastorageplus_prenet_start>:tag=<proxy2-rg.proxy2-HAS-rs.10>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Jul 9 11:54:51 C2SRV2 Cluster.RGM.rgmd: [ID 285716 daemon.notice] 20 fe_rpc_command: cmd_type(enum):<2>:cmd=<null>:tag=<proxy2-rg.proxy2-HAS-rs.10>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
Jul 9 11:54:51 C2SRV2 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <proxy2-rg.proxy2-HAS-rs.10> has been suspended.
Jul 9 11:54:54 C2SRV2 Cluster.Framework: [ID 801593 daemon.notice] stdout: becoming primary for proxy2-dg
Jul 9 11:54:56 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:54:56 C2SRV2 /scsi_vhci/ssd@g600a0b800029d28e000005ff48649a5c (ssd24): path /pci@780/SUNW,qlc@0/fp@0,0 (fp1) target address 200a00a0b829d290,b is now STANDBY because of an externally initiated failover
Jul 9 11:55:01 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:55:01 C2SRV2 Initiating failover for device ssd (GUID 600a0b800029d28e000005ff48649a5c)
Jul 9 11:55:03 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:55:03 C2SRV2 Failover operation completed successfully for device ssd (GUID 600a0b800029d28e000005ff48649a5c): failed over from <none> to primary
Jul 9 11:55:03 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:55:03 C2SRV2 /scsi_vhci/ssd@g600a0b800029d2160000057148649e21 (ssd25): path /pci@780/SUNW,qlc@0/fp@0,0 (fp1) target address 200a00a0b829d290,c is now STANDBY because of an externally initiated failover
Jul 9 11:55:08 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:55:08 C2SRV2 Initiating failover for device ssd (GUID 600a0b800029d2160000057148649e21)
Jul 9 11:55:09 C2SRV2 scsi: [ID 243001 kern.info] /scsi_vhci (scsi_vhci0):
Jul 9 11:55:09 C2SRV2 Failover operation completed successfully for device ssd (GUID 600a0b800029d2160000057148649e21): failed over from <none> to secondary
Jul 9 11:55:10 C2SRV2 Cluster.RGM.rgmd: [ID 285716 daemon.notice] 20 fe_rpc_command: cmd_type(enum):<3>:cmd=<null>:tag=<proxy2-rg.proxy2-HAS-rs.10>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
Jul 9 11:55:10 C2SRV2 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <proxy2-rg.proxy2-HAS-rs.10> has been resumed.
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, time used: 1% of timeout <1800 seconds>
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_monitor_start> for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, timeout <90> seconds
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <gds_svc_start> for resource <proxy2-zone-rs>, resource group <proxy2-rg>, node <C2SRV2>, timeout <300> seconds
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 333393 daemon.notice] 49 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hastorageplus/hastorageplus_monitor_start>:tag=<proxy2-rg.proxy2-HAS-rs.7>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 252072 daemon.notice] 50 fe_rpc_command: cmd_type(enum):<1>:cmd=</opt/SUNWscgds/bin/gds_svc_start>:tag=<proxy2-rg.proxy2-zone-rs.0>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Jul 9 11:55:12 C2SRV2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_monitor_start> completed successfully for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, time used: 0% of timeout <90 seconds>
Jul 9 11:55:13 C2SRV2 genunix: [ID 408114 kern.info] /pseudo/zconsnex@1/zcons@1 (zcons1) online
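Up to this point everything succeeds: HAStoragePlus imports proxy2-dg, the MPxIO paths fail over, and the zcons device coming online shows the zone behind proxy2-zone-rs starting to boot. The zone name itself does not appear in the log, so <zonename> below is a placeholder; a quick way to watch the boot from the global zone is:

  # list configured zones and their current state (<zonename> is a placeholder)
  zoneadm list -cv

  # attach to the zone console and watch the boot; detach with ~.
  zlogin -C <zonename>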


Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 764140 daemon.error] Method <gds_svc_start> on resource <proxy2-zone-rs>, resource group <proxy2-rg>, node <C2SRV2>: Timeout.
Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_monitor_stop> for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, timeout <90> seconds
Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <gds_svc_stop> for resource <proxy2-zone-rs>, resource group <proxy2-rg>, node <C2SRV2>, timeout <300> seconds
Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 333393 daemon.notice] 49 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hastorageplus/hastorageplus_monitor_stop>:tag=<proxy2-rg.proxy2-HAS-rs.8>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 252072 daemon.notice] 50 fe_rpc_command: cmd_type(enum):<1>:cmd=</opt/SUNWscgds/bin/gds_svc_stop>:tag=<proxy2-rg.proxy2-zone-rs.1>: Calling security_clnt_connect(..., host=<C2SRV2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Jul 9 12:00:16 C2SRV2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_monitor_stop> completed successfully for resource <proxy2-HAS-rs>, resource group <proxy2-rg>, node <C2SRV2>, time used: 0% of timeout <90 seconds>
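That last block is the failure itself: gds_svc_start for proxy2-zone-rs hits its 300-second timeout at 12:00:16, about five minutes after the zone console came online, so the zone (or whatever start/probe command the GDS resource wraps) never reported itself up in time. As a hedged starting point, assuming the Sun Cluster 3.2 CLI and that the resource is a GDS-wrapped zone:

  # inspect the resource's registered methods, start command and timeouts
  clresource show -v proxy2-zone-rs

  # if the zone simply needs longer to boot, raise the start timeout
  # (600 seconds is an arbitrary example value)
  clresource set -p Start_timeout=600 proxy2-zone-rs

  # then retry the switch while watching the zone console
  clresourcegroup switch -n C2SRV2 proxy2-rg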
 
