PowerHA(HACMP) full vg loss - cluster hangs on release_vg_fs event


 
# 1  
Old 04-09-2013
PowerHA(HACMP) full vg loss - cluster hangs on "release_vg_fs" event

Hello,

AIX 6.1 TL7 SP6
PowerHA 6.1 SP10

I was experimenting with a new HACMP build. It is a 3-node cluster built on AIX 6.1 LPARs, with Ethernet and diskhb networks. The shared VG disk is a SAN disk: two nodes see it through vSCSI, the third node through NPIV. The application is a DB2 server.

Most real incidents involve some kind of network failure, so I decided to test the cluster against both an Ethernet failure and a SAN failure. The Ethernet failure test was successful: when a node lost Ethernet connectivity (both cables, of course), the resource group moved to the next node with no problem.

Next I did the SAN failure test.
I did it in two different ways: by removing the vSCSI mapping in the VIOS, or by removing the fcs mapping in the VIOS (NPIV case). The results were exactly the same in both cases: the cluster reacted correctly and started the release_vg_fs event. The release_vg_fs script tried to unmount the filesystems, but since all the filesystem disk devices were gone the script just hung, and the cluster started issuing config_too_long events.
So clstat reports the resource group as "RELEASING.." and that's it...
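
For reference, this is roughly how I broke the mapping on the VIOS side (the device and adapter names below are just placeholders for my setup):

Code:
  # vSCSI case: remove the virtual target device backing the shared hdisk
  rmvdev -vtd vtscsi0

  # NPIV case: unmap the client virtual FC adapter from the physical fcs port
  vfcmap -vadapter vfchost0 -fcp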

How do I configure PowerHA to handle a full VG loss (caused, for example, by the SAN going down on one node) correctly?

thanks,
Vilius M.

# 2  
Old 04-10-2013
Well, you need to differentiate between a full SAN loss (no node can see the disks) and a failure where only one node loses sight of the disks.

It has been years since I last debugged HACMP scripts, and a lot has been added to what is checked, but at its core the problem is that a resource has gone down, not a topology element. So it is up to the application stop script to make sure the resources are released before "standard processing" continues.

To have this fully automated you would need to write a recovery script that HACMP could call, because config_too_long means HACMP does not see this as an error.

What I would look at is using the application monitoring facilities to detect that the application is down and to run a verification of the resources on the "active" node.
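
As a very rough sketch, a custom application monitor script for PowerHA just has to exit non-zero when the application is unhealthy; the instance name and path below are pure examples:

Code:
  #!/usr/bin/ksh
  # Hypothetical custom application monitor - PowerHA treats a non-zero exit
  # as "application failed" and can then run the configured recovery actions.

  DB2INST=db2inst1

  # is the DB2 engine still running under the instance owner?
  ps -u "$DB2INST" | grep db2sysc >/dev/null 2>&1 || exit 1

  # is the data filesystem still answering?
  df /db2/data >/dev/null 2>&1 || exit 1

  exit 0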

If I recall correctly, the steps PowerHA takes are:
1) application stop - the key here is that no files are left open on the filesystems, so that the following step can succeed;
2) release the resources, i.e. 2a) unmount the filesystems and 2b) varyoffvg the volume group (a rough manual equivalent is sketched just below).
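
Run by hand, and with purely example names (DB2 instance db2inst1, filesystem /db2/data, volume group datavg), that sequence would look something like:

Code:
  # 1) stop the application so nothing keeps files open
  su - db2inst1 -c "db2stop force"

  # 2a) kill any stragglers still using the filesystem, then unmount it
  fuser -kuc /db2/data
  umount /db2/data

  # 2b) vary off the volume group
  varyoffvg datavg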

Again, config_too_long means the script is not exiting with any status, so it is not an error - it is hanging. I would have to look at both the current script and the application monitoring setup to determine whether application monitoring could inject a new action through the cluster manager to forcibly unmount the filesystems. I am guessing that is not a possibility.

Comment: I would be nervous about mixing vSCSI and NPIV within a resource group. There is no real issue with a mix in the cluster as a whole, but I have real concerns about mixing technologies for a single resource in a resource group.

Hope this helps you advance your testing! Good diligence!
# 3  
Old 04-11-2013
Hi,

Thanks for the reply.

I just want to clarify some details about my problem:

I'm talking about a VG loss (SAN down / vSCSI down / NPIV down) on a single node only - the other nodes see the SAN disks with no problem.
The vSCSI and NPIV mixing is only for test purposes.
The problem is not an error but a hung release_vg_fs event script - to be specific, it is the "umount -f .." command that hangs, which I can see with ps during the event. I tried removing all application processes with fuser in my application server stop script; it doesn't help, umount still hangs. The same problem shows up even without the cluster, just with manual administration commands: once all the VG devices are gone, umount never returns. So my problem can be simplified to: how do I unmount a filesystem when its VG and devices are gone?
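
To be concrete, these are roughly the commands involved (the filesystem name is just an example from my config):

Code:
  # on the failing node, while release_vg_fs is running:
  ps -ef | grep umount        # shows the event script's hung "umount -f ..."

  # what my application stop script already tries before the unmount:
  fuser -kuc /db2/data        # kill everything still using the filesystem
  umount -f /db2/data         # still hangs once the underlying hdisks are gone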

The only solution I see now is a node shutdown triggered from some event (I have not decided which one yet). The shutdown never finishes because of the hung umounts, but the node releases the resource group, and that is enough. If someone can suggest a smarter solution, please do; a rough sketch of the idea is below.
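
Something like this, hooked in as a notify method or post-event script (which event to attach it to is still undecided, so treat it purely as a sketch):

Code:
  #!/usr/bin/ksh
  # last-resort script: drop the node so the surviving nodes can acquire
  # the resource group; the hung umounts on this node never complete anyway
  logger -t ha_lastresort "release hung after full VG loss, halting node"
  halt -q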

On the other hand, this is a standard situation, and anyone who tests their cluster against "SAN down on a single node" should run into the same thing.

Vilius M.

# 4  
Old 04-11-2013
If it is doing an unmount -f and that is not completing, I would need a trace to see what is (not) happening.

I would open a PMR to get an official support statement on whether, and when, unmount -f may hang by design.

I am assuming that you have tried an additional unmount -f. Have you also tried a varyoffvg -f?
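
In other words, something along these lines run by hand on the failing node (the filesystem and VG names are only examples):

Code:
  umount -f /db2/data     # a second forced unmount attempt
  varyoffvg datavg        # does the vary-off itself also hang?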

What does lsvg say? I am assuming that, with the diskhb network, you are also using enhanced concurrent volume groups. Which node says "active"? What do the nodes that can still see the disk say? What happens when SAN connectivity is restored?
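
For example (the VG name is a placeholder):

Code:
  lsvg -o          # which VGs does the failing node think are varied on?
  lsvg datavg      # check the VG STATE / concurrent fields of the shared VG
  lspv             # do the hdisks still show the VG, or are they missing?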

(p.s. will be traveling for work soon, this will delay further comments)
# 5  
Old 04-25-2013
I called IBM support about this. After some back-and-forth information exchange they recommended an AIX upgrade to TL8 SP2, so I did that.
After the upgrade the problem is gone: during a full VG loss the cluster unmounts the filesystems just fine.
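
For completeness, the new level shows up in oslevel (the exact build suffix will vary):

Code:
  oslevel -s       # now reports 6100-08-02-xxxx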

This one is solved.