Operating Systems > Linux > Red Hat: Redhat clustering - clustat not showing info
Post 302953779 by mrn6430 on Tuesday, 1 September 2015, 11:03 AM
Rebooted late last night, since we are live in production. All of the cluster servers were rebooted and came back clean, and the issue is resolved. What had happened: both network switches were taken down for maintenance at the same time, which caused rgmanager to go defunct on all of the servers!
They should have done one switch at a time, not both at once. Thanks
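For anyone hitting the same symptom, here is a rough sketch (not from the thread itself) of the per-node checks that would show the same failure. It assumes a RHEL 5/6 cluster suite with cman + rgmanager; adapt the daemon names to your setup.

```shell
#!/bin/sh
# Hedged sketch: verify the cluster stack on one node after a network
# outage. Assumes the RHCS init scripts "cman" and "rgmanager" exist.

check() {
    # Report whether the init script for daemon $1 says it is running.
    if service "$1" status >/dev/null 2>&1; then
        echo "$1: running"
    else
        echo "$1: NOT running"
    fi
}

check cman
check rgmanager

# clustat summarizes member and service state; a defunct rgmanager often
# shows services stuck even though the members themselves are quorate.
clustat 2>/dev/null || echo "clustat unavailable on this host"
```

If rgmanager reports as running but clustat hangs or shows stale service state, a clean reboot of the nodes (as done above) is often the only way to clear the defunct processes.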
 

cman(5)                    cluster.conf cman configuration section                    cman(5)

NAME
       cman - cluster.conf cman configuration section

DESCRIPTION
       Cman configuration values are placed in the <cman> </cman> section of
       cluster.conf.  Per-node configuration related to cman is placed in
       the standard <clusternode> </clusternode> sections.  All cman
       configuration settings are optional; usually none are used.  The
       <cman> section is placed under the <cluster> section in cluster.conf.

           <cluster>
               <cman>
               </cman>
               ...
           </cluster>

   UDP port
       By default, cman will use UDP ports 5405/5404 for internode
       communication.  This can be changed by setting a port number as
       follows:

           <cman port="6809">
           </cman>

       This will cause cman to use ports 6809 and 6808 for cluster
       communications.

   Expected votes
       The expected votes value is used by cman to determine quorum.  The
       cluster is quorate if the sum of votes of existing members is over
       half of the expected votes value.  By default, cman sets the
       expected votes value to the sum of votes of all nodes listed in
       cluster.conf.  This can be overridden by setting an explicit
       expected_votes value as follows:

           <cman expected_votes="3">
           </cman>

       If the cluster becomes partitioned, improper use of this option can
       result in more than one partition gaining quorum.  In that event,
       nodes in each partition will enable cluster services.

   Two node clusters
       Ordinarily, the loss of quorum after one out of two nodes fails will
       prevent the remaining node from continuing (if both nodes have one
       vote).  Special configuration options can be set to allow the one
       remaining node to continue operating if the other fails.  To do
       this, only two nodes, each with one vote, can be defined in
       cluster.conf.  The two_node and expected_votes values must then be
       set to 1 in the cman section as follows:

           <cman two_node="1" expected_votes="1">
           </cman>

   Node votes
       By default, a node is given one vote toward the calculation of
       quorum.  This can be changed by giving a node a specific number of
       votes as follows:

           <clusternode name="nd1" votes="2">
           </clusternode>

   Node ID
       All nodes must have a unique node ID.
       This is a single integer that identifies the node to the cluster.  A
       node's application to join the cluster may be rejected if you try to
       set the nodeid to one that is already used.

           <clusternode name="nd1" nodeid="1">
           </clusternode>

   Multi-home configuration
       It is quite common to use multiple ethernet adapters for cluster
       nodes, so they will tolerate the failure of one link.  A common way
       to do this is to use ethernet bonding.  Alternatively, you can get
       corosync to run in redundant ring mode by specifying an 'altname'
       for the node.  This is an alternative name by which the node is
       known, which resolves to another IP address used on the other
       ethernet adapter(s).  You can optionally specify a different port
       and/or multicast address for each altname in use.  Up to 9 altnames
       (10 interfaces in total) can be used.  Note that if you are using
       the DLM with cman/corosync then you MUST tell it to use SCTP as its
       communications protocol, as TCP does not support multihoming.

           <clusternode name="nd1" nodeid="1">
               <altname name="nd1a" port="6809" mcast="229.192.0.2"/>
           </clusternode>
           <dlm protocol="sctp"/>

   Multicast network configuration
       cman uses multicast UDP packets to communicate with other nodes in
       the cluster.  By default it will generate a multicast address using
       239.192.x.x, where x.x is the 16-bit cluster ID number split into
       bytes.  This, in turn, is generated from a hash of the cluster name,
       though it can be specified explicitly.  The purpose of this is to
       allow multiple clusters to share the same subnet - they will each
       use a different multicast address.  You might also/instead want to
       isolate clusters using the port number as shown above.  It is
       possible to override the multicast address by specifying it in
       cluster.conf as shown:

           <cman>
               <multicast addr="229.192.0.1"/>
           </cman>

   Cluster ID
       The cluster ID number is used to isolate clusters in the same
       subnet.  Usually it is generated from a hash of the cluster name,
       but it can be overridden here if you feel the need.
       Sometimes cluster names can hash to the same ID.

           <cman cluster_id="669">
           </cman>

   corosync security key
       All traffic sent out by cman/corosync is encrypted.  By default the
       security key used is simply the cluster name.  If you need more
       security you can specify a key file that contains the key used to
       encrypt cluster communications.  Of course, the contents of the key
       file must be the same on all nodes in the cluster.  It is up to you
       to securely copy the file to the nodes.

           <cman keyfile="/etc/cluster/corosync.key">
           </cman>

       Note that this only applies to cluster communication.  The DLM does
       not encrypt traffic.

   Other corosync parameters
       When corosync is started by cman (cman_tool runs corosync), the
       corosync.conf file is not used.  Many of the configuration
       parameters listed in corosync.conf can be set in cluster.conf
       instead.  Cman will read corosync parameters from the following
       sections in cluster.conf and load them into corosync:

           <cluster>
               <totem />
               <event />
               <aisexec />
               <group />
           </cluster>

       See the corosync.conf(5) man page for more information on keys that
       are valid for these sections.  Note that settings in the
       <clusternodes> section will override settings in the sections above,
       and options on the cman_tool command line will override both.  In
       particular, settings like bindnetaddr, mcastaddr, mcastport and
       nodeid will always be replaced by values in <clusternodes>.

       Cman uses different defaults for some of the corosync parameters
       listed in corosync.conf(5).  If you wish to use a non-default
       setting, they can be configured in cluster.conf as shown above.
       Cman uses the following default values:

           <totem vsftype="none"
                  token="10000"
                  token_retransmits_before_loss_const="20"
                  join="60"
                  consensus="4800"
                  rrp_mode="none" <!-- or rrp_mode="active" if altnames are present -->
           />
           <aisexec user="root" group="root" />

       Here's how to set the token timeout to five seconds:

           <totem token="5000"/>

SEE ALSO
       cluster.conf(5), corosync.conf(5), cman_tool(8)
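Pulling the man page's options together, here is a sketch of what a minimal cluster.conf using these cman settings might look like. This is an illustrative assumption, not an example from the man page: the cluster name, node names, and vote counts are invented, and a real configuration would also need fencing (<fencedevices>) configured for your hardware.

```xml
<?xml version="1.0"?>
<!-- Hypothetical three-node cluster; all names and values are invented. -->
<cluster name="excluster" config_version="1">
  <!-- Optional cman overrides: non-default port and explicit quorum target -->
  <cman port="6809" expected_votes="3">
  </cman>
  <clusternodes>
    <!-- Each node needs a unique nodeid; votes defaults to 1 -->
    <clusternode name="nd1" nodeid="1"/>
    <clusternode name="nd2" nodeid="2"/>
    <clusternode name="nd3" nodeid="3" votes="2"/>
  </clusternodes>
  <!-- Corosync totem parameters may be set here instead of corosync.conf -->
  <totem token="5000"/>
</cluster>
```

Checking the quorum arithmetic against the rule above: with expected_votes="3", a partition is quorate only if its members hold more than 3/2 votes, i.e. at least 2 - so nd3 alone (2 votes), or any pair of nodes, keeps quorum.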