We have gotten this error from time to time in our old AIX 6.1 (PowerHA 6.1 GLVM) clusters. Indeed, last week we had to upgrade nodes from AIX 6.1 TL6 to TL9 because of a problem with clstat/cldump. But this is not your problem..
The steps we use here for all PowerHA 6.1 clusters (surely the same as in your link above) are:
Really sorry I cannot help in this case...
Hello,
I was wondering: I have 3 nodes (A, B, C), all configured to start up with HACMP, but I would like to configure HACMP in such a way that:
1) Node B starts up first. After the cluster successfully starts up and mounts all the filesystems, then
2) Node A and Node C start up!
... (4 Replies)
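HACMP has no built-in cross-node "start B first" dependency, so one common workaround is to delay the cluster start on A and C until B signals readiness. The sketch below is an assumption-laden illustration, not a supported feature: the marker file, timeout, and start command are all placeholders (a real check might be an NFS mount test or a ping against node B).

```shell
#!/bin/sh
# Sketch: hold back cluster start on nodes A/C until node B is ready.
# wait_for_ready polls an arbitrary check command until it succeeds
# or the timeout (in seconds) expires.
wait_for_ready() {
    check_cmd=$1
    timeout=$2
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if eval "$check_cmd" >/dev/null 2>&1; then
            return 0
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    return 1
}

# Demo: simulate node B publishing a readiness marker (in reality this
# could be a file on an NFS export from B, or a network check).
marker=/tmp/nodeB_ready.$$
touch "$marker"
if wait_for_ready "test -f $marker" 10; then
    echo "node B ready - starting cluster services here"
    # e.g. smitty clstart, or /usr/es/sbin/cluster/etc/rc.cluster
fi
rm -f "$marker"
```

Such a wrapper would run from an inittab or rc entry on A and C instead of the direct cluster start.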
This post is just a follow-up for thread https://www.unix.com/aix/115548-hacmp-5-4-aix-5300-10-not-working.html: there was a bug in clcomdES that would cause the Two-Node Cluster Configuration Assistant to fail even with a correct TCP/IP adapter setup. That affected HACMP 5.4.1 in combination... (0 Replies)
Hi
What is the procedure to upgrade MQ from 6 to 7 in an AIX HACMP cluster? Do I need to bring down the cluster
services running on both nodes and then run #smitty installp on both nodes separately? Please assist... (0 Replies)
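A full outage is usually avoidable: the common pattern is a rolling upgrade, one node at a time, so the resource group keeps running on the peer. Whether MQ 6 and 7 may coexist across the two nodes during the window is an MQ-side question to verify separately. The sketch below only echoes a candidate runbook; the smit fastpaths are the standard HACMP ones, but the exact sequence must be adapted to the local setup.

```shell
#!/bin/sh
# Sketch of a rolling MQ 6 -> 7 upgrade runbook on a 2-node HACMP
# cluster. Steps are echoed, not executed; adapt before use.
runbook() {
    echo "1: move resource groups off the node being upgraded"
    echo "2: stop cluster services on that node (smitty clstop, graceful)"
    echo "3: upgrade the MQ filesets there (smitty installp)"
    echo "4: restart cluster services (smitty clstart)"
    echo "5: repeat steps 1-4 on the peer node"
}
runbook
```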
Hi all,
I was wondering if someone could direct me on how to make a system backup (system image) for a 2-node HACMP cluster.
What are the considerations for this task? (3 Replies)
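On AIX the system image is taken with mksysb, which captures rootvg only, so shared (cluster) volume groups are excluded automatically and need a separate backup (e.g. savevg) from whichever node has them varied on. The sketch below is a hypothetical wrapper: the node names and target path are examples, and DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Sketch: mksysb (rootvg system image) for each node of a 2-node
# HACMP cluster. Shared cluster VGs are NOT included by mksysb.
# DRY_RUN=1 prints commands instead of running them.

DRY_RUN=${DRY_RUN:-1}
TARGET_DIR=/backup            # assumed location with enough free space

backup_node() {
    node=$1
    # -i: regenerate image.data, -e: honour /etc/exclude.rootvg,
    # -X: expand /tmp automatically if needed
    cmd="ssh $node mksysb -i -e -X $TARGET_DIR/$node.mksysb"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $cmd"
    else
        $cmd
    fi
}

for node in nodeA nodeB; do
    backup_node "$node"
done
```

Taking the image on both nodes matters because HACMP configuration (ODM classes, scripts) lives in rootvg on each node.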
Hi,
I have an IBM Power series machine that has 2 VIOs and hosts 20 LPARs.
I have two LPARs on which GPFS is configured (4-5 disks).
Now these two LPARs need to be configured for HACMP (PowerHA) as well.
What is recommended? Is it possible that HACMP can be done on this config, or do I... (1 Reply)
Hi
I'm a little rusty with HACMP, but wanted to find out if it is possible to remove a disk heartbeat network from a running HACMP cluster.
Reason is, I need to migrate all the SAN disks, so the current heartbeat disk will be disappearing. Ideally, I'd like to avoid taking the cluster down to... (2 Replies)
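Before touching anything it helps to list which cluster networks are of type diskhb. On HACMP that information comes from /usr/es/sbin/cluster/utilities/cllsif; the sketch below parses cllsif-style output, but the column layout shown is an assumption and the sample lines are hypothetical, so check the real output first.

```shell
#!/bin/sh
# Sketch: pick out disk-heartbeat network names from cllsif-style
# output. Assumed columns: adapter, network, type, node, device.
list_diskhb_nets() {
    awk '$3 == "diskhb" { print $2 }' | sort -u
}

# Hypothetical sample in place of: cllsif | list_diskhb_nets
cat <<'EOF' | list_diskhb_nets
hdisk10_hb  net_diskhb_01  diskhb  nodeA  /dev/hdisk10
hdisk10_hb  net_diskhb_01  diskhb  nodeB  /dev/hdisk10
nodeA_en0   net_ether_01   ether   nodeA  en0
EOF
```

Once identified, the network would be removed through smitty/C-SPOC followed by a cluster synchronization; verify that this is supported online for the installed PowerHA release.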
As I have updated a lot of HACMP nodes lately, the question arises of how to do it with minimal downtime. Of course it is easily possible to take a downtime and do the version update during it. In the best of worlds you always get the downtime you need - unfortunately we have yet to find this best of... (4 Replies)
Hi,
A customer I'm supporting once upon a time broke their 2-node clustered database servers so they could use the 2nd standby node for something else. Now, some time later, they want to bring the 2nd node back into the cluster for resilience. Problem is, there are now 3 VGs that have been set up... (1 Reply)
Hi all,
I remember, way back in some old environment, having the HA cluster services not be started automatically at startup, i.e. no entry in /etc/inittab.
I remember the reason was (taking a 2-node active/passive cluster) to avoid having a backup node being booted, so that it will not... (4 Replies)
HACMP two-node cluster with mirrored LVM.
HACMP two-node cluster with two SAN storages mirrored using LVM. Configured 2 disk heartbeat networks, 1 per SAN storage. While performing redundancy tests, once one of the SAN storages is down, the cluster goes into ERROR state. What are the guidelines... (2 Replies)
Discussion started by: OdilPVC
corosync-cfgtool
COROSYNC-CFGTOOL(8)                                        COROSYNC-CFGTOOL(8)

NAME
corosync-cfgtool - An administrative tool for corosync.
SYNOPSIS
corosync-cfgtool [-i IP_address] [-s] [-r] [-l service_name] [-u service_name] [-v version] [-k nodeid] [-a nodeid] [-R] [-H]
DESCRIPTION
corosync-cfgtool is a tool for displaying and configuring active parameters within corosync.
OPTIONS
-h Print basic usage.
-i Finds only information about the specified interface IP address.
-s Displays the status of the current rings on this node. If any interfaces are faulty, 1 is returned by the binary. If all interfaces are active, 0 is returned to the shell.
-r Reset redundant ring state cluster wide after a fault to re-enable redundant ring operation.
-l Load a service identified by "service_name".
-u Unload a service identified by "service_name".
-a Display the IP address(es) of a node.
-k Kill a node identified by node id.
-R Tell all instances of corosync in this cluster to reload corosync.conf
-H Shutdown corosync cleanly on this node.
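The exit status of -s makes the tool usable from monitoring scripts. The sketch below relies only on that documented behavior (0 = all rings active, non-zero = a faulty interface); check_rings takes the command to run so the logic can be exercised without a live corosync - on a real node it would be called with no argument.

```shell
#!/bin/sh
# Sketch: monitoring wrapper around `corosync-cfgtool -s`, using its
# documented exit status. After repairing a faulty interface, the
# redundant ring state is reset cluster-wide with `corosync-cfgtool -r`.
check_rings() {
    cmd=${1:-"corosync-cfgtool -s"}
    if $cmd >/dev/null 2>&1; then
        echo "rings OK"
        return 0
    fi
    echo "ring FAULTY"
    return 1
}

check_rings true    # stand-in for a healthy node; prints "rings OK"
```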
SEE ALSO
corosync_overview(8)
AUTHOR
Angus Salkeld
2010-05-30 COROSYNC-CFGTOOL(8)