Full Discussion: Linux Cluster failover issue
Post 302895843 by munna529 on Wednesday 2nd of April 2014 10:07:08 PM
RedHat Linux Cluster failover issue

Hi Guys,

I am not very familiar with clusters, but I have a few questions. Could someone provide an overview? It would be very helpful for me.

How can I perform a cluster failover test to verify that all services fail over to the other node? If the host is using Veritas Cluster, what prerequisites do I need to follow?
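For reference, here is a minimal sketch of such a failover test on Veritas Cluster Server; the group name appsg and node names node1/node2 are placeholders, while hasys, hastatus and hagrp are the standard VCS commands:

  # list cluster systems and the current state of all service groups
  hasys -list
  hastatus -sum

  # switch the service group to the second node, then verify it comes online there
  hagrp -switch appsg -to node2
  hastatus -sum

  # switch back once the test is done
  hagrp -switch appsg -to node1

Typical prerequisites: a maintenance window (clients see a short outage during the switch) and a check that the group is not frozen (hagrp -display appsg shows the Frozen attribute).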

If it is not using Veritas Cluster, how do I check whether the host is clustered at all? And if it is clustered, how can I perform a failover to the other nodes?
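As a rough sketch, you might detect the cluster stack like this and then drive a failover with the matching tool; the service, resource and node names below are placeholders:

  # Veritas Cluster Server: the "had" engine daemon and main.cf give it away
  ps -ef | grep -w had
  ls /etc/VRTSvcs/conf/config/main.cf

  # Red Hat Cluster Suite (RHEL 5/6): show members, then relocate a service
  clustat
  clusvcadm -r myservice -m node2

  # Pacemaker (RHEL 7 and later): show status, then move a resource
  pcs status
  pcs resource move myresource node2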

If my question is not clear, please forgive me, and let me know the steps I need to follow for this procedure.

Thanks
 

9 More Discussions You Might Find Interesting

1. High Performance Computing

Sun Cluster resource group can't fail over

I have recently set up a 4-node cluster running Sun Cluster 3.2 and I have installed 4 zones on each node. When installing the zones I had to install the zone on all nodes, then on the last node do a zlogin -C <zonename>; this worked OK. Then I tried to switch the zone to node a thei work... (14 Replies)
Discussion started by: lesliek
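For what it's worth, on Sun Cluster 3.2 a resource group holding a failover zone is normally switched with clresourcegroup (short form clrg); the group name zone-rg and node name nodea are placeholders:

  # show resource group states, then switch the group to the target node
  clrg status
  clrg switch -n nodea zone-rg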

2. HP-UX

ServiceGuard cluster & volume group failover

I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not come up automatically on the other node. I have to manually do a "vgchange -a y vgname" on the node before the package... (5 Replies)
Discussion started by: Wotan31
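A likely explanation is that the volume group is not being activated by the package itself; in legacy-style packages the VG also has to be listed in the package control script so it is activated at package startup. A sketch of the usual ServiceGuard setup, assuming a placeholder VG named /dev/vgdata:

  # mark the VG cluster-aware so ServiceGuard can manage it
  vgchange -c y /dev/vgdata

  # packages normally activate it in exclusive mode, not plain "-a y"
  vgchange -a e /dev/vgdata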

3. High Performance Computing

Veritas Cluster Server Management Console IP Failover

I have just completed a first RTFM of "Veritas Cluster Server Management Console Implementation Guide" 5.1, with a view to assessing it to possibly make our working lives easier. Unfortunately, at my organisation, getting a test installation would be worse than pulling teeth, so I can't just go... (2 Replies)
Discussion started by: Beast Of Bodmin

4. Solaris

Sun Cluster 3.1 failover

Hi, We have two Sun SPARC servers in a cluster (Sun Cluster 3.1). For some reason, System 1 failed over to System 2. Where can I find the logs which could tell me the reason for this failover? Thanks (5 Replies)
Discussion started by: Mack1982
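In case it helps, Sun Cluster 3.1 logs through syslog, so the reason for a failover usually shows up in /var/adm/messages on both nodes (framework and RGM messages are typically tagged with a "Cluster." prefix), and scstat shows the current state:

  # cluster framework and resource group manager events around the failover time
  grep "Cluster\." /var/adm/messages | more

  # current membership and resource group status
  scstat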

5. Gentoo

How to fail over the cluster?

How do I fail over the cluster on GNU/Linux, and by which command? My Linux version: 2008 x86_64 x86_64 x86_64 GNU/Linux. What prerequisites do we need to take care of while failing over, if any? Regards (3 Replies)
Discussion started by: sidharthmellam

6. Solaris

Sun Cluster 4.0 - zone cluster failover doubt

Hello experts - I am planning to install a Sun Cluster 4.0 zone cluster failover. A few basic doubts: (1) Where should I install the cluster software binaries? (the global zone, or the container zone where I am planning to install the zone failover?) (2) Or should I perform the installation on... (0 Replies)
Discussion started by: NVA
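For what it's worth, in Oracle Solaris Cluster 4.0 the cluster framework is installed in the global zone, and the zone cluster is then created and installed from there; a sketch with a placeholder zone cluster name myzc:

  # run from the global zone of a cluster node
  clzonecluster configure myzc
  clzonecluster install myzc
  clzonecluster boot myzc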

7. Solaris

Sun Cluster 3.2 Issue

Hello everyone, I have two Solaris 10 servers that are in a cluster. The cluster is a Sun Cluster 3.2. I have a cron'd script that stops/starts a resource in a resource group every day. Today I checked the status of the resources and I found that my resource group has an "Error--stop... (1 Reply)
Discussion started by: adilyos
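In case it is useful, a resource group stuck in an error--stop-failed state usually needs the STOP_FAILED flag cleared before it can be brought online again; the resource, group and node names here are placeholders:

  # clear the STOP_FAILED error flag on the resource, then bring the group online
  clresource clear -f STOP_FAILED -n node1 myresource
  clresourcegroup online -n node1 my-rg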

8. Solaris

Solaris Cluster Failover based on scan rate

Dear Experts, Is it possible for Solaris Cluster to fail over to the second node based on scan rate? I need documentation on whether Solaris Cluster can do this. Thank You in Advance Edy (3 Replies)
Discussion started by: edydsuranta

9. AIX

Cluster communication issue

Hi, I am using PowerHA 7.1.1 SP5 on AIX 7.1. Both of my cluster nodes are working independently; RG information is not being updated between them. Node A shows that node B is down and vice versa. RG1 is running on node A, RG2 is running on node B. === clRGinfo From Node B === RG01 OFFLINE ... (2 Replies)
Discussion started by: sunnybee
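A sketch of the usual first checks when PowerHA 7.1 nodes stop seeing each other, using standard AIX/PowerHA commands; with 7.1 the common suspects are the CAA repository disk and multicast traffic between the nodes:

  # CAA view of the cluster: both nodes should show as UP
  lscluster -m

  # PowerHA view: cluster definition and resource group state on this node
  clmgr query cluster
  clRGinfo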
KANIF.CONF(5)                 kanif.conf configuration file for kanif                 KANIF.CONF(5)

NAME
       kanif.conf - configuration file for kanif

SYNOPSIS
       $HOME/.kanif.conf, /etc/kanif.conf or /etc/c3.conf

DESCRIPTION
       kanif.conf is the configuration file for kanif. It is optional and only helps the
       management of static clusters (configurations that do not change much over time). It
       mimics the syntax of the C3 configuration file.

       It is composed of a sequence of one or more cluster definitions. Each cluster definition
       is made of the word "cluster" followed by the cluster name and, enclosed in a pair of
       curly braces:

       o   the front node specification. This is either:

           o   a simple hostname which can be reached from the inside of the cluster (compute
               nodes).

           o   two names separated by a colon. The first name is the name used from the outside
               to log on the front node (not used by kanif). The second is the name used from
               the cluster compute nodes to reach the front node.

           o   a hostname with a colon prepended. This is used for indirect clusters. These are
               not supported by kanif at this time.

       o   zero or more compute node specifications:

           o   a simple hostname (anything that is not of the following form)

           o   a host set made of a prefix, a range and a suffix.

           o   an exclude directive that must follow a host set or another exclude directive.
               This is made of the word "exclude" followed on the same line by either a single
               number or an interval between brackets. This applies to the range of the
               preceding host set. If the exclusion is an interval, the separator between the
               word "exclude" and this exclusion is optional.

           o   a dead node. The word "dead" followed by the name of the dead node on the same
               line.

       Notice that all excluded nodes (excluded via exclude directives or marked dead) do not
       take part in the deployment, but are still taken into account in cluster ranges when
       giving machine specifications to kanif (they act as placeholders). This is the point of
       specifying nodes as dead or excluded rather than dropping them from the definitions.

EXAMPLE
       cluster megacluster {
           # The # character introduces comments
           megacluster-dev
           megacluster0[1-9]
           megacluster[10-64]
       }

       cluster supercluster {
           super-ext:super-int
           exclude                   # The host "exclude"
           super[01-99]
           exclude 02                # "super02" is excluded
           exclude[90-95]            # "super90" to "super95" are excluded
           dead                      # The host "dead"
           dead othernode            # "othernode" is dead
       }

SEE ALSO
       kanif(1), taktuk(1)

AUTHOR
       The author of kanif and current maintainer of the package is Guillaume Huard.
       Acknowledgements to Lucas Nussbaum for the idea of the name "kanif".

COPYRIGHT
       kanif is provided under the terms of the GNU General Public License version 2 or later.

perl v5.14.2                              2012-06-22                              KANIF.CONF(5)