AIX HACMP cluster question (Oracle & SAP) - Post 302282445 by shockneck on Saturday 31st of January 2009, 06:56:18 AM
Quote:
Originally Posted by filosophizer
[...]3 nodes (A, B, C) all configured to startup with HACMP [...]
I am not sure I understood you correctly. I have never seen an admin who wanted his clusters to start at OS boot. If you are new to HACMP you may not be in an ideal position to make such a cluster design decision. You can configure HACMP to start with the OS, but if I can offer one piece of advice: don't do it. IMHO the potential problems by far outweigh the potential advantages.
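For what it's worth, the boot-time behaviour is easy to inspect: when "start at system restart" is selected, HACMP registers an /etc/inittab entry that runs rc.cluster. Below is a minimal sketch for checking that, assuming the usual rc.cluster path; the inittab label itself varies by HACMP release, so the label mentioned in the comment is an assumption:

```shell
#!/bin/sh
# Check whether cluster services are registered to start at boot by
# looking for an rc.cluster entry in an inittab file.
check_cluster_autostart() {
    # $1: inittab file to inspect (normally /etc/inittab)
    if grep -q "rc.cluster" "$1" 2>/dev/null; then
        echo "auto-start enabled"
    else
        echo "auto-start disabled"
    fi
}

check_cluster_autostart /etc/inittab
```

To disable auto-start you would remove the entry with rmitab (e.g. `rmitab hacmp6000`, with the label taken from your own inittab) rather than edit /etc/inittab by hand.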

Quote:
Originally Posted by filosophizer
I would like to configure HACMP in such a way:

1) Node B should startup first. After the cluster successfully starts up and mounts all the filesystems, then

2) Node A, and Node C should startup ![...]
HACMP is very flexible here: it lets you script everything, and you can control when a command runs via pre- or post-events or via the resource group start/stop scripts. You could create an ssh key with an empty passphrase, distribute it to the nodes you want to access, and then script the startup of the other nodes. Following your description, a good place for the startup command might be at the bottom of the resource group (RG) start script. If you mean to power on the servers themselves, you will need to script HMC or CSM commands, to name two possibilities.
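As an illustration of that last idea, here is a minimal sketch of what the tail of node B's RG start script could look like. Everything in it is an assumption for your environment: the node names, the key path, and the exact rc.cluster invocation (check the path and flags for your HACMP level before using anything like this):

```shell
#!/bin/sh
# Hypothetical tail of the RG start script on node B: once B's filesystems
# are mounted, start cluster services on the remaining nodes over ssh.
# The key would have been created beforehand with an empty passphrase, e.g.
#   ssh-keygen -t rsa -N "" -f /etc/cluster/start_key
# and start_key.pub appended to root's authorized_keys on nodeA and nodeC.

START_CMD="/usr/es/sbin/cluster/etc/rc.cluster"  # adjust path/flags per HACMP level
NODES="nodeA nodeC"                              # hypothetical node names

start_remote_nodes() {
    # $1: transport used to reach the peers -- "ssh -i /etc/cluster/start_key"
    # in production, "echo" for a dry run that only prints the commands.
    transport="$1"
    for node in $NODES; do
        $transport root@"$node" "$START_CMD"
    done
}

# Dry run -- prints what would be executed:
start_remote_nodes echo
# Production call:
# start_remote_nodes "ssh -i /etc/cluster/start_key"
```

Powering on the LPARs themselves would instead mean running commands on the HMC (chsysstate, for example) rather than ssh-ing to a node that is still switched off.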
 
