PowerHA (HACMP) full vg loss - cluster hangs on release_vg_fs event
Post 302799045 by vilius on Thursday 25th of April 2013, 04:25:53 PM
I called IBM support about this - after some back-and-forth information exchange they recommended an AIX upgrade to TL8 SP2, so I did that.
After the upgrade the problem is gone - during a full VG loss the cluster unmounts the filesystems just fine.

This one is solved.
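For anyone verifying the same fix, the commands below are a quick way to confirm the installed level and to follow the event processing during a VG-loss test. These are standard AIX/PowerHA commands rather than anything IBM support specifically asked for, and the hacmp.out location varies between releases (older levels log to /tmp/hacmp.out):

    oslevel -s                                    # prints <release>-<TL>-<SP>-<build>, e.g. 6100-08-02-...
    lslpp -l cluster.es.server.rte                # installed PowerHA/HACMP base fileset level
    grep release_vg_fs /var/hacmp/log/hacmp.out   # trace the release_vg_fs event during the failover test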
 

10 More Discussions You Might Find Interesting

1. AIX

Duplicate IP address makes PowerHA (HACMP) go down

Hello, I would like to know if anyone has faced this problem. Whenever there is a duplicate IP address, HACMP goes down; in fact HACMP (PowerHA) takes the whole system down. Does anyone know how to solve this problem? (3 Replies)
Discussion started by: filosophizer

2. Solaris

Solaris Cluster Install Hangs

Greetings Forumers! I tried installing Solaris Cluster 3.3 today. I should say I tried configuring the Cluster today. The software is already installed on two systems. I am trying to configure a shared filesystem between two 6320 Blades. I selected the "Custom" install because the "Typical"... (2 Replies)
Discussion started by: bluescreen

3. AIX

MQ upgrade (ver. 6 to 7) in a HACMP cluster

Hi, what is the procedure to upgrade MQ from 6 to 7 in an AIX HACMP cluster? Do I need to bring down the cluster services running on both nodes and then run smitty installp on both nodes separately? Please assist... (0 Replies)
Discussion started by: samsungsamsung

4. AIX

Should GPFS be configured before/after configuring HACMP for 2 node Cluster?

Hi, I have an IBM Power series machine that has 2 VIOs and hosts 20 LPARs. I have two LPARs on which GPFS is configured (4-5 disks). Now these two LPARs need to be configured for HACMP (PowerHA) as well. What is recommended? Is it possible that HACMP can be done on this config, or do I... (1 Reply)
Discussion started by: aixromeo

5. AIX

Interoperability Oracle Clusterware - PowerHA/HACMP

I am planning to build a new database server using AIX 6.1 and Oracle 11.2 with ASM. As I have learned, starting with Oracle 11.2 ASM can only be used in conjunction with Clusterware, which is Oracle's HA software. As per company policy we do intend to use PowerHA as the HA solution instead... (1 Reply)
Discussion started by: bakunin

6. AIX

PowerHA HACMP on VIOS servers

A few questions regarding PowerHA (previously known as HACMP) and VIOS PowerVM IVM (IBM Virtual I/O Server): Is it possible to create an HACMP cluster between two VIOS servers? Physical Machine_1 VIOS_SERVER_1 LPAR_1 SHARED_DISK_XX VIOS_SERVER_2 Physical Machine_2 LPAR_2... (6 Replies)
Discussion started by: filosophizer

7. AIX

[Howto] Update AIX in HACMP cluster-nodes

As I have updated a lot of HACMP nodes lately, the question arises how to do it with minimal downtime. Of course it is easily possible to have a downtime and do the version update during it. In the best of worlds you always get the downtime you need - unfortunately we have yet to find this best of... (4 Replies)
Discussion started by: bakunin

8. AIX

Re-cluster 2 HACMP 5.2 nodes

Hi, a customer I'm supporting once upon a time broke their 2-node database server cluster so they could use the 2nd standby node for something else. Now, sometime later, they want to bring the 2nd node back into the cluster for resilience. The problem is there are now 3 VGs that have been set up... (1 Reply)
Discussion started by: elcounto

9. AIX

Thoughts on HACMP: Automatic start of cluster services

Hi all, I remember way back in some old environment having the HA cluster services not started automatically at startup, i.e. no entry in /etc/inittab. I remember the reason was (taking a 2-node active/passive cluster) to avoid having a backup node being booted, so that it will not... (4 Replies)
Discussion started by: zaxxon

10. AIX

Clstat not working in a HACMP 7.1.3 cluster

I have trouble making clstat work. All the "usual suspects" have been covered, but still no luck. The topology is a two-node active/passive setup with only one network interface (it is a test setup). The application running is SAP with DB2 as the database. We do not use Smart Assists or other gadgets. ... (8 Replies)
Discussion started by: bakunin
cmdisklock(1m)                                                    cmdisklock(1m)

NAME
       cmdisklock - manage Serviceguard cluster lock devices.

SYNOPSIS
       cmdisklock check path
       cmdisklock [-f] reset path

DESCRIPTION
       cmdisklock is a tool to check the current state of a Serviceguard cluster lock device. It can also be used to reset the state of
       the cluster lock device. The need to reset the cluster lock device state could arise if the cluster lock device is replaced or
       becomes corrupt.

       A cluster lock device can be either an HP-UX LVM cluster lock or a cluster lock LUN device. HP-UX LVM cluster locks exist only on
       a disk in an LVM volume group. Cluster lock LUNs exist only on disks dedicated to cluster lock. cmdisklock is useful for checking
       either type of cluster lock and for re-initializing cluster lock LUN devices after a failure or corruption.

       NOTE: To restore an HP-UX LVM cluster lock, use vgcfgrestore. cmdisklock will fail until vgcfgrestore is run, and cmdisklock is
       unnecessary as long as vgcfgbackup was done after the cluster lock was initialized. See the Managing Serviceguard manual for
       details.

       The syntax of the path option depends on the type of lock. For HP-UX LVM cluster lock disks, the syntax is VG:PV (for example:
       /dev/vglock:/dev/dsk/c0t0d2). For cluster lock LUN disks, the path is the disk device path, for example /dev/sdd1 (on Linux) or
       /dev/dsk/c0t1d2 (on HP-UX).

   Options
       cmdisklock supports the following options:

       check   Check the current state of the cluster lock device and report the results.

       reset   Reset (initialize) the state of the cluster lock device. This operation should only be performed on a cluster lock LUN
               device. For HP-UX LVM cluster lock, use vgcfgrestore as documented in the Managing Serviceguard manual. After performing
               a reset, a check can be used to verify that the lock is cleared.
EXAMPLES
       If the cluster lock LUN device becomes corrupted and the cluster is up, messages like the following will appear in syslog:

       Mar 15 12:20:41 usb cmdisklockd[17599]: WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is
       corrected, a single failure could cause all nodes in the cluster to crash.
       Mar 15 12:20:41 usb cmdisklockd[17599]: After ensuring that all active nodes in the cluster have logged this message, run
       'cmdisklock reset /dev/dsk/c0t1d2' to repair
       Mar 15 12:20:41 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is inaccessible

       Once the above messages appear in syslog on all running nodes, the following command will re-initialize the cluster lock LUN:

       ucd:/> cmdisklock reset /dev/dsk/c0t1d2
       WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is corrected, a single failure could cause
       all nodes in the cluster to crash. After ensuring that all active nodes in the cluster have logged this message, run 'cmdisklock
       reset /dev/dsk/c0t1d2' to repair
       /dev/dsk/c0t1d2 is inaccessible
       Resetting cluster lock device /dev/dsk/c0t1d2
       Cluster lock reset completed
       /dev/dsk/c0t1d2 is accessible cleared

       After the lock is restored, a message like the following appears in syslog:

       Mar 15 12:23:11 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is accessible
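       A corresponding check invocation follows the path syntax given under DESCRIPTION; this is a sketch using the illustrative device
       paths from this page, not output captured from a real system:

       cmdisklock check /dev/vglock:/dev/dsk/c0t0d2    # HP-UX LVM cluster lock (VG:PV form)
       cmdisklock check /dev/dsk/c0t1d2                # cluster lock LUN device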
WARNINGS
       CAUTION: For cluster lock LUN, reset is a potentially destructive operation. While cmdisklock checks for known volume manager and
       file system use (overridden by -f), it does not validate that the device to be reset is actually used by any cluster. If -f is
       used on the wrong device file, loss of data may result.

       CAUTION: Care should be taken when doing a reset while the cluster is active, as there is a remote possibility that the cluster
       will partition right when this command is run and both nodes could end up thinking they have successfully acquired the lock. To
       avoid this situation, make sure cmcld has logged a message in syslog on all running nodes saying the device is inaccessible before
       performing a reset. Note that it is safe to run cmdisklock when the cluster is down.
RETURN VALUE
       cmdisklock returns the following values:

       0      Successful completion.
       1      The disk is inaccessible or is not recognized as a cluster lock.
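       A minimal shell sketch that acts on those return values (the device path is the illustrative LUN path used in the examples above):

       cmdisklock check /dev/dsk/c0t1d2
       rc=$?
       if [ "$rc" -eq 0 ]; then
           echo "cluster lock device is healthy"
       else
           echo "cluster lock check failed (rc=$rc): disk inaccessible or not recognized as a cluster lock"
       fi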
AUTHOR
       cmdisklock was developed by HP.

SEE ALSO
       cmapplyconf(1m), cmviewcl(1m), vgcfgbackup(1m), vgcfgrestore(1m)

Requires Optional Serviceguard Software                           cmdisklock(1m)