Operating Systems > AIX
Crash dump and Panic message: RSCT Dead Man Switch Timeout for HACMP; halting non-responsive node
Post 303042285 by rbatte1 on Friday 20th of December 2019 06:36:11 AM
Could this be that whatever is supplying the common disk used to keep the heartbeat failed? That way, both nodes would be unable to keep updating the shared disk, and the usual response is to terminate all services to avoid getting in the way, i.e. to panic/abort. We've had an Oracle RAC database cluster do this before. Not pretty, but it is the best course of action to avoid damage. A few things I would check on the surviving node are sketched below.
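
If you want to confirm that, the AIX error log and the RSCT Topology Services subsystem (which drives the heartbeating and the Dead Man Switch) are the first places I would look. A rough sketch, assuming a standard HACMP/RSCT install; the hdisk number is only an example, so substitute the disk your cluster actually uses for disk heartbeating:

# Recent AIX error log entries -- a Dead Man Switch timeout is normally
# logged just before the node is halted
errpt | head -20
errpt -a | more

# Topology Services status: shows the heartbeat rings and whether any are down
lssrc -ls topsvcs

# Cluster manager and related subsystems
lssrc -a | grep -i cl

# Check that the disk used for heartbeating is still readable (example device)
lquerypv -h /dev/hdisk2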




Robin
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

help, what is the difference between core dump and panic dump?

help, what is the difference between core dump and panic dump? (1 Reply)
Discussion started by: aileen
1 Reply

2. HP-UX

crash dump

hi friends, i know that when there is a crash then that memory image is put into /var/adm/crash but if the system hangs up and if i have access to console of that machine then how can i take the crash dump manually. thanks (2 Replies)
Discussion started by: mxms755
2 Replies

3. Solaris

crash dump

Can anyone of you help me in enabling crash dump on Solaris 5.5.1 (1 Reply)
Discussion started by: csreenivas
1 Reply

4. AIX

Node Switch Reasons in HACMP

Hi Guys, I have two nodes clustered. Each node is AIX 5.2 & they are clustered with HACMP 5.2. The mode of the cluster is Active/Passive, which means one node is the Active node & has all resource groups on it & the 2nd node is standby. Last Monday I noted that all resource groups have been... (2 Replies)
Discussion started by: aldowsary
2 Replies

5. Solaris

crash dump

hi, i have a machine that crashed. how can i enable the core dump file & how can i find it? (4 Replies)
Discussion started by: lid-j-one
4 Replies

6. UNIX for Advanced & Expert Users

Linux heartbeat on redhat 4:node dead

Hi. I have started heartbeat on two redhat servers, using eth0. Before I start heartbeat I can ping the two servers from each other. Once I start heartbeat both the servers become active as they both have warnings that the other node is dead. Also I am not able to ping between them. After stopping... (1 Reply)
Discussion started by: amrita garg
1 Reply

7. AIX

hacmp in a 7 node configuration ?

Hi Guys, I have to design a multinode hacmp cluster and am not sure if the design I am thinking of makes any sense. I have to make an environment that currently resides on 5 nodes more resilient, but I have the constraint of only having 4 frames. In addition the business doesn't want to pay for... (7 Replies)
Discussion started by: zxmaus
7 Replies

8. AIX

HACMP switch over

Hi, I had an active/passive cluster. Node A went down and all resource groups moved to Node B. Now we have brought up Node A. What is the procedure to bring everything back to Node A? Node A #lssrc -a | grep cl clcomdES clcomdES 323782 active clstrmgrES cluster... (9 Replies)
Discussion started by: samsungsamsung
9 Replies

9. HP-UX

Prevent crash dump when SG cluster node reboots

Hi Experts, I have configured an HP-UX Serviceguard cluster and it dumps a crash every time I reboot a cluster node. Can anyone please help me to prevent these unnecessary crash dumps at the time of rebooting an SG cluster node? Thanks in advance. Vaishey (2 Replies)
Discussion started by: Vaishey
2 Replies

10. OS X (Apple)

MacOS 10.15.2 Catalina display crash and system panic

MacPro (2013) 12-Core, 64GB RAM (today's crash): panic(cpu 2 caller 0xffffff7f8b333ad5): userspace watchdog timeout: no successful checkins from com.apple.WindowServer in 120 seconds service: com.apple.logd, total successful checkins since load (318824 seconds ago): 31883, last successful... (3 Replies)
Discussion started by: Neo
3 Replies
cmdisklock(1m)															    cmdisklock(1m)

NAME
       cmdisklock - manage Serviceguard cluster lock devices.

SYNOPSIS
       cmdisklock check path
       cmdisklock [-f] reset path

DESCRIPTION
       cmdisklock is a tool to check the current state of a Serviceguard cluster lock device. It can also be used to reset the state of
       the cluster lock device. The need to reset the cluster lock device state could arise if the cluster lock device is replaced or
       becomes corrupt.

       A cluster lock device can be either an HP-UX LVM cluster lock or a cluster lock LUN device. HP-UX LVM cluster locks exist only on
       a disk in an LVM volume group. Cluster lock LUNs exist only on disks dedicated to cluster lock. cmdisklock is useful for checking
       either type of cluster lock and for re-initializing cluster lock LUN devices after a failure or corruption.

       NOTE: To restore an HP-UX LVM cluster lock, use vgcfgrestore. cmdisklock will fail until vgcfgrestore is run, and cmdisklock is
       unnecessary as long as vgcfgbackup was done after the cluster lock was initialized. See the Managing Serviceguard manual for
       details.

       The syntax of the path option depends on the type of lock. For HP-UX LVM cluster lock disks, the syntax is VG:PV (for example:
       /dev/vglock:/dev/dsk/c0t0d2). For cluster lock LUN disks, the path is the disk device path. For example, /dev/sdd1 (on Linux) or
       /dev/dsk/c0t1d2 (on HP-UX).

   Options
       cmdisklock supports the following options:

       check   Check the current state of the cluster lock device and report the results.

       reset   Reset (initialize) the state of the cluster lock device. This operation should only be performed on a cluster lock LUN
               device. For HP-UX LVM cluster lock, use vgcfgrestore as documented in the Managing Serviceguard manual. After performing
               a reset, a check can be used to verify that the lock is cleared.

EXAMPLES
       If the cluster lock LUN device becomes corrupted and the cluster is up, messages like the following will appear in syslog:

       Mar 15 12:20:41 usb cmdisklockd[17599]: WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is
       corrected, a single failure could cause all nodes in the cluster to crash.
       Mar 15 12:20:41 usb cmdisklockd[17599]: After ensuring that all active nodes in the cluster have logged this message, run
       'cmdisklock reset /dev/dsk/c0t1d2' to repair
       Mar 15 12:20:41 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is inaccessible

       Once the above messages appear in syslog on all running nodes, the following command will re-initialize the cluster lock LUN:

       ucd:/> cmdisklock reset /dev/dsk/c0t1d2
       WARNING: Cluster lock LUN /dev/dsk/c0t1d2 is corrupt: bad label. Until this situation is corrected, a single failure could cause
       all nodes in the cluster to crash. After ensuring that all active nodes in the cluster have logged this message, run 'cmdisklock
       reset /dev/dsk/c0t1d2' to repair
       /dev/dsk/c0t1d2 is inaccessible
       Resetting cluster lock device /dev/dsk/c0t1d2
       Cluster lock reset completed
       /dev/dsk/c0t1d2 is accessible cleared

       After the lock is restored, a message like the following appears in syslog:

       Mar 15 12:23:11 usb cmdisklockd[17599]: Cluster lock disk /dev/dsk/c0t1d2 is accessible

WARNINGS
       CAUTION: For cluster lock LUN, reset is a potentially destructive operation. While cmdisklock checks for known volume manager and
       file system use (overridden by -f), it does not validate that the device to be reset is actually used by any cluster. If -f is
       used on the wrong device file, loss of data may result.

       CAUTION: Care should be taken when doing a reset when the cluster is active, as there is a remote possibility that the cluster
       will partition right when this command is run and both nodes could end up thinking they have successfully acquired the lock. To
       avoid this situation, make sure cmcld has logged a message in syslog on all running nodes saying the device is inaccessible,
       before performing a reset. Note that it is safe to run cmdisklock when the cluster is down.

RETURN VALUE
       cmdisklock returns the following values:

       0   Successful completion.
       1   The disk is inaccessible or is not recognized as a cluster lock.

AUTHOR
       cmdisklock was developed by HP.

SEE ALSO
       cmapplyconf(1m), cmviewcl(1m), vgcfgbackup(1m), vgcfgrestore(1m)

Requires Optional Serviceguard Software                                                                            cmdisklock(1m)
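
For completeness, the non-destructive operation from the man page above is check. Something like the following (using the same example device path as the man page; the exact output wording may vary between Serviceguard releases) reports the current lock state without modifying it:

ucd:/> cmdisklock check /dev/dsk/c0t1d2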