08-28-2009
Hi Shockneck,
All of it is in the same datacentre and will be served by the same SAN infrastructure: EMC Symmetrix with PowerPath, RAID 5 on the SAN side, dedicated adapter pairs on the LPAR side, and no VIO except for management purposes.
According to SAN Engineering, making the disks visible on every node that needs them is no problem; even visibility on all 7 nodes is possible.
This proposal is for one site only; we will have similar setups for UAT and COB in another datacentre in another country, with database replication between PROD and COB.
Kind regards
zxmaus
cmruncl(1m)
NAME
cmruncl - run a high availability cluster
SYNOPSIS
cmruncl [-f] [-v] [-n node_name...] [-t | -w none]
DESCRIPTION
cmruncl causes all nodes in a configured cluster or all nodes specified to start their cluster daemons and form a new cluster.
To start a cluster, a user must either be superuser (UID=0), or have an access policy of FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl(1m).
This command should only be run when the cluster is not active on any of the configured nodes. This command verifies the network configuration before causing the nodes to start their cluster daemons. If a cluster is already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes and force them to join the existing cluster.
If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes in the cluster must be
available for the cluster to start unless a subset of nodes is specified.
Options
cmruncl supports the following options:
-f Force cluster startup without the warning message and continuation prompt that are printed with the -n option.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages.
The -w option is not required with the -t option as -t does not validate network connectivity, but assumes that all the
nodes can meet any external dependencies such as EMS resources, package subnets, and storage.
-n node_name...
Start the cluster daemon on the specified subset of node(s).
-w none By default, network probing is performed to check that the network connectivity is the same as when the cluster was configured. Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing. The option should only be used if this network configuration is known to be correct from a recent check.
RETURN VALUE
cmruncl returns the following value:
0 Successful completion.
1 Command failed.
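The documented exit codes make cmruncl straightforward to script around. A minimal sketch of a startup wrapper, assuming cmruncl is on the PATH (the start_cluster helper name is illustrative, not part of Serviceguard):

```shell
#!/bin/sh
# Hypothetical wrapper around cmruncl using its documented exit codes:
# 0 = successful completion, 1 = command failed.
start_cluster() {
    # Any arguments (e.g. -v, or -n node1 -n node2) are passed through.
    if cmruncl "$@"; then
        echo "cluster started"
        return 0
    else
        echo "cluster startup failed" >&2
        return 1
    fi
}
```

Invoked, for example, as `start_cluster -v -n node1 -n node2`.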
EXAMPLES
Run the cluster daemon:
cmruncl
Run the cluster daemons on node1 and node2:
cmruncl -n node1 -n node2
AUTHOR
cmruncl was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software cmruncl(1m)