Arbitrator for 2-node OCFS2 cluster


 
# 1  
Old 02-04-2015
Arbitrator for 2-node OCFS2 cluster

Is there any way to set up an arbitrator node for OCFS2 on a virtual machine (the other nodes are physical servers), so that the cluster does not panic when one of the physical servers goes down?

This is for load-balanced application servers.

Any configuration examples or tips?
Thanks.
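One commonly suggested approach is to register a third, lightweight node in the O2CB cluster configuration so that two of the three nodes still form a majority when one physical server dies. The sketch below is only illustrative: the cluster name, node names and IP addresses are made up, and it assumes the VM can reach the same cluster interconnect and, since O2CB quorum relies on the disk heartbeat, can also see the shared storage (for example over iSCSI) even if it never mounts the filesystem. The same /etc/ocfs2/cluster.conf has to be kept identical on all three nodes.

    cluster:
            node_count = 3
            name = appcluster

    node:
            ip_port = 7777
            ip_address = 192.168.10.1
            number = 0
            name = app1
            cluster = appcluster

    node:
            ip_port = 7777
            ip_address = 192.168.10.2
            number = 1
            name = app2
            cluster = appcluster

    node:
            ip_port = 7777
            ip_address = 192.168.10.3
            number = 2
            name = arbiter-vm
            cluster = appcluster

After the file is updated on all nodes, the O2CB stack usually needs to be restarted (service o2cb restart or the distribution's equivalent); raising O2CB_HEARTBEAT_THRESHOLD in /etc/sysconfig/o2cb is another knob often mentioned for avoiding premature self-fencing, but check your distribution's documentation before relying on either detail.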


10 More Discussions You Might Find Interesting

1. Red Hat

RedHat Cluster: Nodes won't see each other

Hi All; I'm trying to build a Red Hat cluster (CentOS 6) on VMware, but each node sees the other as down, like: # clustat Cluster Status for mycluster @ Wed Apr 8 11:01:38 2015 Member Status: Quorate Member Name ID Status ------ ---- ... (1 Reply)
Discussion started by: Meacham12
1 Replies

2. Red Hat

RedHat Cluster: Nodes won't see each other

Hi All; I'm trying to build a Red Hat cluster (CentOS 6) on VMware, but each node sees the other as down, like: # clustat Cluster Status for mycluster @ Wed Apr 8 11:01:38 2015 Member Status: Quorate Member Name ID Status ------ ---- ... (0 Replies)
Discussion started by: Meacham12
0 Replies

3. AIX

Re-cluster 2 HACMP 5.2 nodes

Hi, A customer I'm supporting broke their 2-node cluster of database servers some time ago so they could use the 2nd (standby) node for something else. Now they want to bring the 2nd node back into the cluster for resilience. The problem is there are now 3 VGs that have been set up... (1 Reply)
Discussion started by: elcounto
1 Replies

4. Red Hat

How to troubleshoot a 1000-node Apache cluster?

Hi all. May I get some expert advice on troubleshooting performance issues on a 1000-node Apache LB cluster? Users report slow loading/response of webpages. Different websites are hosted on this cluster for different clients, but all are reporting the same issue. Could you please let me know... (1 Reply)
Discussion started by: admin_xor
1 Replies

5. Solaris

What is the procedure to reboot cluster nodes

Hi, we have 2 Solaris 10 servers in a Veritas cluster, and we also have an Oracle cluster on the database end. We now need to reboot both servers, as they have been running for more than a year. Can anyone tell me the procedure to bring down the cluster services on both nodes... (7 Replies)
Discussion started by: newtoaixos
7 Replies

6. Solaris

Need advice on setting up a Solaris 10 two-node cluster

I am new to setting up a Sun Solaris 10 cluster. I have 2 identical Sun SPARC T3-1 servers that I'm going to use as web servers (Sun ONE Java Web Server 7), and I'm looking for data replication and real-time failover. My question is: do I need external storage to configure the cluster, or can I just use... (3 Replies)
Discussion started by: spitfire2011
3 Replies

7. Red Hat

CentOS/RHEL 5 cluster, 3 nodes without quorum

Hi all, I have a 3-node cluster (CentOS 5 cluster suite) without a quorum disk; each node's vote = 1 and the quorum value = 2, so when 2 nodes go offline the cluster services are destroyed. How can I save the cluster and all services (move all services to the one surviving node) without a quorum disk when the other... (3 Replies)
Discussion started by: Flomaster
3 Replies

8. Emergency UNIX and Linux Support

Rebooting 3 to 1 Cluster nodes.

Hello Gurus, my current setup is a 3-to-1 cluster (Sun Cluster 3.2) running an Oracle database. The task is to reboot the servers, and my query is about the procedure for doing so. My understanding is: suspend the databases to avoid a switchover, then execute the command scshutdown to bring down the cluster... (4 Replies)
Discussion started by: EmbedUX
4 Replies

9. UNIX for Dummies Questions & Answers

IP Alias, Bonding or Virtual IP, 2 nodes Cluster, which one to use ?

Hi! I have a simple setup of 2 PCs (running Red Hat Linux) where the first PC is the primary machine and the second is the backup. I use DRBD for data replication and the Red Hat Cluster Suite for HA (High Availability), and I have tested both. Now I NEED a COMMON IP ADDRESS (or Master/unique IP address) for... (3 Replies)
Discussion started by: Danny Gilbert
3 Replies

10. High Performance Computing

Bonding, IP alias, Virtual IP, 2 nodes cluster

Hi! I have a simple setup of 2 PCs (running Red Hat Linux) where the first PC is the primary machine and the second is the backup. I use DRBD for data replication and the Red Hat Cluster Suite for HA (High Availability), and I have tested both. Now I NEED a COMMON IP ADDRESS (or Master/unique IP address) for... (0 Replies)
Discussion started by: Danny Gilbert
0 Replies
debugfs.ocfs2(8)						OCFS2 Manual Pages						  debugfs.ocfs2(8)

NAME
    debugfs.ocfs2 - OCFS2 file system debugger.

SYNOPSIS
    debugfs.ocfs2 [-f cmdfile] [-R command] [-s backup] [-nwV?] [device]
    debugfs.ocfs2 -l [tracebit ... [allow|off|deny]] ...
    debugfs.ocfs2 -d, --decode lockname
    debugfs.ocfs2 -e, --encode lock_type block_num [generation | parent]

DESCRIPTION
    The debugfs.ocfs2 program is an interactive file system debugger useful in displaying on-disk OCFS2 filesystem structures on the specified device.

OPTIONS
    -d, --decode lockname
        Display the information encoded in the lockname.

    -e, --encode lock_type block_num [generation | parent]
        Display the lockname obtained by encoding the arguments provided.

    -f, --file cmdfile
        Executes the debugfs commands in cmdfile.

    -i, --image
        Specifies that device is an o2image file created by the o2image tool.

    -l [tracebit ... [allow|off|deny]] ...
        Control OCFS2 filesystem tracing by enabling and disabling trace bits. Run debugfs.ocfs2 -l to get the list of all trace bits.

    -n, --noprompt
        Hide the prompt.

    -R, --request command
        Executes a single debugfs command.

    -s, --superblock backup-number
        mkfs.ocfs2 makes up to 6 backup copies of the superblock at offsets 1G, 4G, 16G, 64G, 256G and 1T, depending on the size of the volume. Use this option to specify which backup, 1 through 6, to use to open the volume.

    -w, --write
        Opens the filesystem in RW mode. By default the filesystem is opened in RO mode.

    -V, --version
        Display the version and exit.

    -?, --help
        Display help and exit.
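For illustration only (the device name below is a placeholder), two typical invocations using these options might look like:

    # Run a single debugfs command non-interactively and exit.
    debugfs.ocfs2 -R "stats" /dev/sdb1

    # Open the volume using the second backup superblock
    # (still opened read-only unless -w is also given).
    debugfs.ocfs2 -s 2 /dev/sdb1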
SPECIFYING FILES
    Many debugfs.ocfs2 commands take a filespec as an argument to specify an inode (as opposed to a pathname) in the filesystem which is currently opened by debugfs.ocfs2. The filespec argument may be specified in two forms. The first form is an inode number or lockname surrounded by angle brackets, e.g., <32>. The second form is a pathname; if the pathname is prefixed by a forward slash ('/'), then it is interpreted relative to the root of the filesystem which is currently opened by debugfs.ocfs2. If not, the path is interpreted relative to the current working directory as maintained by debugfs.ocfs2, which can be modified using the command cd. If the pathname is prefixed by a double forward slash ('//'), then it is interpreted relative to the root of the system directory of the filesystem opened by debugfs.ocfs2.
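As a sketch, the same object can be addressed with either filespec form (the device and path are placeholders):

    # filespec given as an inode number in angle brackets
    debugfs.ocfs2 -R "stat <32>" /dev/sdb1

    # filespec given as a pathname relative to the root of the opened volume
    debugfs.ocfs2 -R "stat /data/file1" /dev/sdb1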
LOCKNAMES
    Locknames are specially formatted strings used by the file system to uniquely identify objects in the filesystem. Most locknames used by OCFS2 are generated using the inode number and its generation number and can be decoded using the decode command or used directly in place of an inode number in commands requiring a filespec. Like inode numbers, locknames need to be enclosed in angle brackets, e.g., <M000000000000000040c40c044069cf>. To generate a lockname for a given object, use the encode command.
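For illustration, converting between pathnames and locknames might look like this (the path and device are placeholders; the lockname is the sample string shown above):

    # generate the lockname(s) for a file
    debugfs.ocfs2 -R "encode /data/file1" /dev/sdb1

    # translate a lockname back into an inode number
    debugfs.ocfs2 -R "decode <M000000000000000040c40c044069cf>" /dev/sdb1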
COMMANDS
    This is a list of the commands which debugfs.ocfs2 supports.

    bmap filespec logical_block
        Display the physical block number corresponding to the logical block number logical_block in the inode filespec.

    cat filespec
        Dump the contents of inode filespec to stdout.

    cd filespec
        Change the current working directory to filespec.

    chroot filespec
        Change the root directory to be the directory filespec.

    close
        Close the currently opened filesystem.

    controld dump
        Display information obtained from ocfs2_controld.

    curdev
        Show the currently open device.

    decode <lockname>
        Display the inode number encoded in the lockname.

    dirblocks <filespec>
        Display the directory blocks associated with the given filespec.

    dlm_locks [-f <file>] [-l] [<lockname(s)>]...
        Display the status of all lock resources in the o2dlm domain that the file system is a member of. This command expects the debugfs filesystem to be mounted as mount -t debugfs debugfs /sys/kernel/debug. Use lockname(s) to limit the output to the given lock resources, -l to include contents of the lock value block and -f <file> to specify a saved copy of /sys/kernel/debug/o2dlm/<DOMAIN>/locking_state.

    dump [-p] filespec outfile
        Dump the contents of the inode filespec to the output file outfile. If -p is given, set the owner, group, timestamps and permissions information on outfile to match those of filespec.

    encode filespec
        Display the lockname for the filespec.

    extent block#
        Display the contents of the extent structure at block#.

    findpath [<lockname>|<inode#>]
        Display the pathname for the inode specified by lockname or inode#. This command does not display all the hard-linked paths for the inode.

    frag filespec
        Display the inode's number of extents to clusters ratio.

    fs_locks [-f <file>] [-l] [-B] [<lockname(s)>]...
        Display the status of all locks known by the file system. This command expects the debugfs filesystem to be mounted as mount -t debugfs debugfs /sys/kernel/debug. Use lockname(s) to limit the output to the given lock resources, -B to limit the output to only the busy locks, -l to include contents of the lock value block and -f <file> to specify a saved copy of /sys/kernel/debug/ocfs2/<UUID>/locking_state.

    group block#
        Display the contents of the group descriptor at block#.

    hb
        Display the contents of the heartbeat system file.

    help, ?
        Print the list of commands understood by debugfs.ocfs2.

    icheck block# ...
        Display the inodes that use the one or more blocks specified on the command line. If the inode is a regular file, also display the corresponding logical block offset.

    lcd directory
        Change the current working directory of the debugfs.ocfs2 process to the directory on the native filesystem.

    locate [<lockname>|<inode#>] ...
        Display all pathnames for the inode(s) specified by locknames or inode#s.

    logdump node#
        Display the contents of the journal for node node#.

    ls [-l] filespec
        Print the listing of the files in the directory filespec. The -l flag will list files in the long format.

    ncheck [<lockname>|<inode#>] ...
        See locate.

    open device
        Open the filesystem on device.

    quit, q
        Quit debugfs.ocfs2.

    rdump [-v] filespec outdir
        Recursively dump directory filespec and all its contents (including regular files, symbolic links and other directories) into outdir, which should be an existing directory on the native filesystem.

    refcount [-e] filespec
        Display the refcount block, and optionally its tree, of the specified inode.

    slotmap
        Display the contents of the slotmap system file.
    stat filespec
        Display the contents of the inode structure for the filespec.

    stats [-h] [-s backup-number]
        Display the contents of the superblock. Use -s to display a specific backup superblock. Use -h to hide the inode.

    xattr [-v] <filespec>
        Display extended attributes associated with the given filespec.
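A short interactive session might look roughly like this (the device name is a placeholder and the prompt shown is indicative only; the commands are those documented above):

    debugfs.ocfs2 /dev/sdb1
    debugfs: ls -l //          (list the system directory)
    debugfs: stat <32>         (dump the inode structure for inode 32)
    debugfs: hb                (show the heartbeat system file)
    debugfs: quit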
ACKNOWLEDGEMENT
    This tool has been modelled after debugfs, a debugging tool for ext2.

SEE ALSO
    mkfs.ocfs2(8) fsck.ocfs2(8) tunefs.ocfs2(8) mounted.ocfs2(8) ocfs2console(8) o2image(8) o2cb(7)

AUTHOR
    Oracle Corporation

COPYRIGHT
    Copyright (C) 2004, 2010 Oracle. All rights reserved.

Version 1.4.3                              February 2010                              debugfs.ocfs2(8)