Full Discussion: Node can't join cluster
Operating Systems: HP-UX — Post 302107519 by Tris on Sunday 18th of February 2007, 11:11:48 PM

Node can't join cluster

Need help, guys!

When running "cmrunnode batch", I get this error:

cmrunnode: Waiting for cluster to form ....
cmrunnode: Node batch unable to join Cluster. Check the syslog file on that node for information.

I tried this:

From batch: ping 192.168.100.212 (online) -> response received
From online: ping 192.168.100.215 (batch) -> no response

What is the problem?

Thanks!
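A one-way ping failure like this usually points at the heartbeat LAN on the node that does not answer, rather than at Serviceguard itself. A minimal first-pass check on the failing node might look like the following sketch; the interface name lan0 and the syslog path are assumptions for a typical HP-UX 11.x setup, not taken from the post:

```shell
# Hypothetical diagnostics on node "batch" (the node that cannot join).
# lan0 and the syslog path are assumptions for a typical HP-UX 11.x box.
grep -i cmcld /var/adm/syslog/syslog.log | tail -20   # what cmrunnode told us to check
lanscan                                               # list LAN interfaces and hardware state
netstat -in                                           # is 192.168.100.215 configured? any errors?
ifconfig lan0                                         # UP flag and netmask on the heartbeat NIC
arp -a                                                # does the peer's MAC address resolve?
```

If ifconfig shows the interface UP but "online" still cannot ping "batch", a mismatched netmask or a dead switch port between the two heartbeat NICs would be consistent with the one-way result.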
 

cmapplyconf(1m)

NAME
cmapplyconf - verify and apply Serviceguard cluster configuration and package configuration files

SYNOPSIS
cmapplyconf [-f] [-v] [[-k|-K] -C cluster_ascii_file] [[-p pkg_reference_file] | [-P pkg_ascii_file]...]

DESCRIPTION
cmapplyconf verifies the cluster configuration and package configuration specified in the cluster_ascii_file and the associated pkg_ascii_file(s), creates or updates the binary configuration file, called cmclconfig, and distributes it to all nodes. This binary configuration file contains the cluster configuration information as well as package configuration information for all packages specified. This file, which is used by the cluster daemons to manage the entire cluster and package environment, is kept in the $SGCONF directory. Only a superuser with an effective user ID of zero (see id(1) and su(1)) can verify, create, or update the configuration.

cmapplyconf verifies any configured external script program in each pkg_ascii_file for the package run and halt functions by calling it with a "validate" parameter. The external script program is run on each member that the package is configured to run on. A non-zero return value from any external script program will cause the command to fail.

If the cluster_ascii_file specifies a quorum server as the cluster tie-breaker service, the quorum server must be running and all nodes in the cluster configuration must be authorized to access it. If more than one IP address is specified for the quorum server, the quorum server must be reachable from all configured nodes through all the IP addresses. Otherwise the cmapplyconf command will fail.

If the cluster_ascii_file specifies cluster lock LUN devices as the cluster tie-breaker service, all nodes must be accessing the same physical device. The lock LUN device file must be a block device file. The cluster must be down to modify a cluster tie-breaking service.

If changes to either the cluster configuration or to any of the package configuration files are needed, first update the appropriate ASCII file(s) (cluster or package), then validate the changes using the cmcheckconf command, and then use cmapplyconf again to verify and redistribute the binary file to all nodes.
The cluster and package configuration can be modified whether the cluster is up or down, although some configurations require either the cluster or the package to be halted. Please refer to the manual for more detail.

The cluster ASCII file only needs to be specified if configuring the cluster for the first time, or if adding or deleting nodes from the cluster. The package ASCII file only needs to be specified if the package is being added, or if the package configuration is being modified. It is recommended that the user run the cmgetconf command to get either the cluster ASCII configuration file or the package ASCII configuration file whenever changes to the existing configuration are required.

Note that cmapplyconf will verify and distribute cluster configuration or package files. It will not cause the cluster daemon to start or application packages to run. If the cluster is down, once the binary configuration file is distributed, use cmruncl to start the cluster daemons on all nodes. If the cluster is already up and running, cmapplyconf applies modifications to the existing binary configuration file while the cluster remains active. The user needs to use the cmrunnode command to start the cluster activities on the newly added node(s), and use the cmmodpkg or cmrunpkg command to start the newly added package(s).

If cmapplyconf is specified when the cluster or packages have already been configured, the cluster_ascii_file and package_ascii_file (or all ASCII files in the package_reference_file) will be scanned for configuration changes. If a node is specified in the cluster_ascii_file but does not exist in the previous configuration, that node is considered a new node and will be added to the new configuration. A node which exists in the previous configuration but is not specified in the cluster_ascii_file will be removed from the cluster configuration. The node needs to be halted before it can be removed from the cluster configuration.
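As a concrete sketch of the edit/validate/apply cycle just described (the cluster name clusterA and the node name newnode are hypothetical placeholders, not from the man page text):

```shell
# Hypothetical change cycle for an existing cluster "clusterA".
cmgetconf -c clusterA clusterA.config   # fetch the current cluster ASCII file
vi clusterA.config                      # edit: add a node, change a subnet, etc.
cmcheckconf -C clusterA.config          # validate the changes first
cmapplyconf -C clusterA.config          # verify and redistribute the binary file
cmrunnode newnode                       # only if a node was added: start it
```

Running cmcheckconf before cmapplyconf catches most configuration mistakes before anything is distributed to the other nodes.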
The same kind of processing will apply to the package configuration to determine whether to add or delete package nodes, package subnets, etc. Not all package configuration changes require the package to be halted.

Similar processing will apply to the network configuration to determine whether to add or delete heartbeat or stationary networks, add or delete standby network interfaces, change the network attributes, change the network polling interval, etc. in the cluster configuration. In general these changes can be made while the node and cluster are running, but note that at least one heartbeat network must remain up and unchanged while the cluster is running. See the Managing Serviceguard manual for more information. It is recommended to use the cmquerycl -c command, especially to add new networks, when a cluster is configured. Please refer to the cmquerycl man page for more detail.

After cmapplyconf completes, make sure to use cmmodpkg to enable all the package switching flags for packages that had been halted earlier. Once package switching is enabled, the package will be started automatically.

Under Serviceguard Extension for RAC (HP-UX only), cmapplyconf returns an error if a multi-node package is configured with both SUBNET and CLUSTER_INTERCONNECT_SUBNET parameters that monitor the same subnet.

Before starting configuration, make sure that all nodes have the same release of Serviceguard. If Serviceguard Extension for RAC, SGeRAC, (HP-UX only) is installed, make sure that the SGeRAC release version matches as well. Configuration is only allowed if all the nodes in the cluster are on the same Serviceguard and, if applicable, SGeRAC release. Configurations using the SITE_NAME and SITE attributes (HP-UX only) must have the appropriate versions of Metrocluster and SGeRAC software installed on all nodes.

Options
cmapplyconf supports the following options:

-v   Verbose output will be displayed.
-k   Using the -k option means that cmapplyconf only checks disk connectivity to the LVM volume groups that are identified in the ASCII file. This option does not exist on Linux. Omitting the -k option (the default behavior) means that cmapplyconf tests the connectivity of all LVM volume groups on all the cluster nodes. Using -k can result in significantly faster operation of the command. Do not use the -k option when removing LVM volume groups from the cluster. -k must be used with -C and cannot be used with -K.

-K   Using the -K option means that cmapplyconf only checks disk connectivity for cluster lock volume groups. This option does not exist on Linux. For all other LVM volume groups, no connectivity will be checked and no modification will be done to their state of cluster awareness. Omitting the -K and -k options (the default behavior) means that cmapplyconf tests the connectivity of all LVM volume groups on all the cluster nodes. Using -K can result in significantly faster operation of the command. -K can be used only when the cluster is already configured and must be used with -C. -K cannot be used with -k. -K does not affect lock LUN checking.

-f   Force the distribution even if a binary configuration file exists on any nodes. The old binary configuration file will be replaced. If the -f option is not specified and a binary file exists on one of the nodes, the user will be asked whether or not the existing file should be replaced. If a negative response is given, neither the configuration nor the binary configuration file will be modified. Note that in the cases when either the cluster or the package is not halted as required, the -f option will not force the operation to go through.

-C cluster_ascii_file
     Name of the cluster ASCII file to use. This is a required parameter if the cluster has never been configured before. If not specified, the local cluster configuration is used. An ASCII file for a new cluster is created with the cmquerycl command.
     An ASCII file for an existing cluster is created with the cmgetconf command. See cmquerycl(1m) and cmgetconf(1m).

-P pkg_ascii_file...
     Name of the package configuration file(s) to use. For a new package, a package configuration template file can be created by using the cmmakepkg command and must be customized to include specific information for the package. See cmmakepkg(1m). A configuration file for an existing package can be generated by using the cmgetconf command. See cmgetconf(1m).

-p pkg_reference_file
     Name of the file containing a list of package configuration file(s) to be used. This file may be necessary if the number of pkg_ascii_file names given with multiple -P options does not fit on the command line. This option cannot be used with the -P option.

RETURN VALUE
Upon completion, cmapplyconf returns one of the following values:

0    Successful completion.
1    Command failed.

EXAMPLES
The high availability environment contains an ASCII cluster configuration file, clusterA.config, and two packages, pkg1 and pkg2, specified in the ASCII files pkg1.config and pkg2.config.

To create and distribute the binary configuration file, use the following command:

     cmapplyconf -C clusterA.config -P pkg1.config -P pkg2.config

To specify a long list of package configuration files, use the following command:

     cmapplyconf -C clusterA.config -p file

where file contains:

     pkg1.config
     pkg2.config
     pkg3.config

To add a node, node1, to the existing cluster configuration, use the following commands:

     cmgetconf -c clusterA clusterA.config
     (modify the ASCII file to add the node information accordingly)
     cmapplyconf -C clusterA.config
     cmrunnode node1

To delete a node, node2, from the existing cluster configuration, use the following commands:

     cmgetconf -c clusterA clusterA.config
     cmhaltnode node2
     (modify the ASCII file to delete the node information accordingly)
     cmapplyconf -C clusterA.config

To apply the configuration while restricting the connectivity check to the volume groups specified in the clusterA.config file and marking them cluster aware (if not already):

     cmapplyconf -k -C clusterA.config

To apply the configuration while restricting the connectivity check to the cluster lock volume groups and marking them cluster aware (if not already):

     cmapplyconf -K -C clusterA.config

To add a new heartbeat network to the existing cluster configuration, use the following commands:

     cmquerycl -c clusterA -C clusterA.config
     (modify the ASCII file to uncomment the new HEARTBEAT_IP)
     cmapplyconf -C clusterA.config

To remove a network from the existing cluster configuration, use either of the following commands to get the current cluster configuration:

     cmquerycl -c clusterA -C clusterA.config
     cmgetconf -c clusterA -C clusterA.config

then modify the ASCII file to delete the network entries and run:

     cmapplyconf -C clusterA.config

To verify and distribute a modification to package pkg1 on a running cluster, with pkg1.config containing the changes to the package pkg1, use the following command:

     cmapplyconf -f -P pkg1.config

To start that package, run:

     cmmodpkg -e pkg1

LIMITATIONS
Configurations for packages of SYSTEM_MULTI_NODE type cannot be applied at the same time as the cluster configuration and/or any other package configurations. Therefore, when applying configurations for packages of SYSTEM_MULTI_NODE type, the following limitations apply:

     The -C option may not be specified.
     If using the -P option, only configuration files for packages of SYSTEM_MULTI_NODE type may be specified.
     If using the -p option, only configuration files for packages of SYSTEM_MULTI_NODE type may be specified in the package reference file.

Under Serviceguard Extension for RAC (HP-UX only), cmapplyconf returns an error if you try to add nodes to or delete nodes from the cluster while there are any SLVM volume groups activated in shared mode. Deactivate all shared SLVM volume groups before running cmapplyconf to add new cluster nodes or delete existing nodes from the cluster.
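A hedged sketch of that SLVM precaution; the volume group name /dev/vg_rac and the cluster file name are hypothetical placeholders, not from the man page:

```shell
# Hypothetical sequence: deactivate shared-mode SLVM volume groups on every
# node before a node add/delete, then reactivate them afterwards.
vgchange -a n /dev/vg_rac          # deactivate the shared volume group
cmapplyconf -C clusterA.config     # apply the node add/delete
vgchange -a s /dev/vg_rac          # reactivate in shared (SLVM) mode
```

The deactivation must happen on every node that has the volume group activated in shared mode, or cmapplyconf will still refuse the node change.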
cmapplyconf was developed by HP.

SEE ALSO
cmcheckconf(1m), cmgetconf(1m), cmmakepkg(1m), cmquerycl(1m), cmruncl(1m), cmhaltcl(1m), cmrunnode(1m), cmhaltnode(1m).

Requires Optional Serviceguard Software