Arbitrator for 2-node OCFS2 cluster


 
# 1  
Old 02-04-2015
Arbitrator for 2-node OCFS2 cluster

Is there any way to create an arbitrator node for OCFS2 on a virtual machine (the other nodes are physical servers), so that the cluster won't panic when one of the physical servers goes down?

This is for load-balanced application servers.

Any setting example or tips?
Thanks.
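For what it's worth, the stock o2cb stack has no lightweight arbitrator or quorum-disk role, so the usual workaround is to make the VM a full third member of the cluster: with three nodes, the surviving physical server still holds a node majority when its peer dies. Note that the VM must also be able to reach the shared storage, since o2hb disk heartbeating is part of how liveness is decided. A minimal sketch of a three-node /etc/ocfs2/cluster.conf (hostnames, addresses, and the cluster name below are placeholders):

cluster:
	node_count = 3
	name = appcluster

node:
	ip_port = 7777
	ip_address = 192.168.1.1
	number = 1
	name = appserver1
	cluster = appcluster

node:
	ip_port = 7777
	ip_address = 192.168.1.2
	number = 2
	name = appserver2
	cluster = appcluster

node:
	ip_port = 7777
	ip_address = 192.168.1.3
	number = 3
	name = quorumvm
	cluster = appcluster

The updated file, including the new node_count, has to be propagated to every node before the cluster is restarted. See the o2cb(7) manual page below for the format rules and timeout settings.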

o2cb(7) 							OCFS2 Manual Pages							   o2cb(7)

NAME
o2cb - Default cluster stack for the OCFS2 file system.

DESCRIPTION
o2cb is the default cluster stack for the OCFS2 file system. It includes a node manager (o2nm) to keep track of the nodes in the cluster, a heartbeat agent (o2hb) to detect live nodes, a network agent (o2net) for intra-cluster node communication, and a distributed lock manager (o2dlm) to keep track of lock resources. All these components are in-kernel. It also includes an in-memory file system, dlmfs, to allow userspace to access the in-kernel dlm.

This cluster stack has two configuration files, namely, /etc/ocfs2/cluster.conf and /etc/sysconfig/o2cb. Whereas the former keeps track of the cluster layout, the latter keeps track of the cluster timeouts. Both files are only read when the cluster is brought online. Values in use by the online cluster can be perused in the /sys/kernel/config/cluster directory structure.
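For instance, on a running cluster the in-use values can be read straight out of configfs. A quick sketch, assuming a cluster named webcluster as in the EXAMPLES section below (the node attribute names follow the usual o2nm configfs layout and are worth verifying on your kernel):

	# nodes the online cluster knows about
	ls /sys/kernel/config/cluster/webcluster/node/
	# the IP address registered for one node
	cat /sys/kernel/config/cluster/webcluster/node/node7/ipv4_address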
CONFIGURATION
The cluster layout is specified in /etc/ocfs2/cluster.conf. While it is easier to populate and propagate this configuration file using ocfs2console(8), it can also be done manually as long as care is taken to format the file correctly.

While the console utility is intuitive to use, there are a few points to keep in mind.

1. The node name needs to match the hostname. It does not need to include the domain name. For example, appserver.oracle.com can be entered as appserver.

2. The IP address need not be the one associated with that hostname. That is, any valid IP address on that node can be used. O2CB will not attempt to match the node name (hostname) with the specified IP address. For best performance, use of a private interconnect (lower latency) is recommended.

The cluster.conf file is in a stanza format with two types of stanzas, namely, cluster and node. A typical cluster.conf will have one cluster stanza and multiple node stanzas.

The cluster stanza has two parameters:

	node_count	Total number of nodes in the cluster
	name		Name of the cluster

The node stanza has five parameters:

	ip_port		IP port
	ip_address	IP address
	number		Unique node number from 0-254
	name		Hostname
	cluster		Name of the cluster

Users populating cluster.conf manually should follow the format strictly: a stanza header should start at the first column and end with a colon, stanza parameters should start after a tab, a blank line should demarcate each stanza, and care should be taken to avoid stray whitespace.

The O2CB cluster timeouts are specified in /etc/sysconfig/o2cb and can be configured using the o2cb init script. These timeouts are used by the O2CB cluster stack to determine whether a node is dead or alive. While the use of default values is recommended, users can experiment with other values if the defaults are causing spurious fencing. The cluster timeouts are:

Heartbeat Dead Threshold
	The Disk Heartbeat timeout is the number of two-second iterations before a node is considered dead. The exact formula used to convert the timeout in seconds to the number of iterations is as follows:

		O2CB_HEARTBEAT_THRESHOLD = (((timeout in seconds) / 2) + 1)

	For example, to specify a 60 sec timeout, set it to 31; for 120 secs, set it to 61. The default for this timeout is 60 secs (O2CB_HEARTBEAT_THRESHOLD = 31).

Network Idle Timeout
	The Network Idle timeout specifies the time in milliseconds before a network connection is considered dead. It defaults to 30000 ms.

Network Keepalive Delay
	The Network Keepalive delay specifies the maximum delay in milliseconds before a keepalive packet is sent to another node to check whether it is alive. If the node is alive, it will respond. It defaults to 2000 ms.

Network Reconnect Delay
	The Network Reconnect delay specifies the minimum delay in milliseconds between connection attempts. It defaults to 2000 ms.
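Putting those defaults together, an /etc/sysconfig/o2cb might look like the sketch below. Only O2CB_HEARTBEAT_THRESHOLD is named on this page; the other variable names are assumptions based on common o2cb packaging, so check them against the file your distribution ships:

	# Load the cluster stack on boot (assumed variable name)
	O2CB_ENABLED=true
	# Cluster to bring online on boot (assumed variable name)
	O2CB_BOOTCLUSTER=webcluster
	# 60 sec disk heartbeat: (60 / 2) + 1 = 31 two-second iterations
	O2CB_HEARTBEAT_THRESHOLD=31
	# Network timeouts in ms (assumed variable names; defaults per this page)
	O2CB_IDLE_TIMEOUT_MS=30000
	O2CB_KEEPALIVE_DELAY_MS=2000
	O2CB_RECONNECT_DELAY_MS=2000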
EXAMPLES
A sample /etc/ocfs2/cluster.conf:

cluster:
	node_count = 3
	name = webcluster

node:
	ip_port = 7777
	ip_address = 192.168.0.107
	number = 7
	name = node7
	cluster = webcluster

node:
	ip_port = 7777
	ip_address = 192.168.0.106
	number = 6
	name = node6
	cluster = webcluster

node:
	ip_port = 7777
	ip_address = 192.168.0.110
	number = 10
	name = node10
	cluster = webcluster
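To activate such a configuration, the cluster is brought online with the o2cb init script described above. A brief sketch (invocation details differ between distributions, so treat the exact service name and subcommands as assumptions to verify):

	# interactively set the timeouts stored in /etc/sysconfig/o2cb
	service o2cb configure
	# load the in-kernel components and bring the cluster online
	service o2cb load
	service o2cb online webcluster
	# confirm the cluster and heartbeat state
	service o2cb status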
SEE ALSO
mkfs.ocfs2(8) fsck.ocfs2(8) tunefs.ocfs2(8) debugfs.ocfs2(8) ocfs2console(8)

AUTHORS
Oracle Corporation

COPYRIGHT
Copyright (C) 2004, 2010 Oracle. All rights reserved.

Version 1.6.4							September 2010							   o2cb(7)