mount.ocfs2(8) OCFS2 Manual Pages mount.ocfs2(8)
NAME
mount.ocfs2 - mount an OCFS2 filesystem
SYNOPSIS
mount.ocfs2 [-vn] [-o options] device dir
DESCRIPTION
mount.ocfs2 mounts an OCFS2 filesystem at dir. It is usually invoked indirectly by the mount(8) command when using the -t ocfs2 option.
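For instance, the two invocations below are equivalent; the device path and mount point are illustrative only and require root plus a volume already formatted with mkfs.ocfs2(8):

              # direct invocation
              mount.ocfs2 -o noatime /dev/sdb1 /mnt/ocfs2

              # indirect invocation via mount(8)
              mount -t ocfs2 -o noatime /dev/sdb1 /mnt/ocfs2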
OPTIONS
_netdev
The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system). mount.ocfs2 transparently appends this option during mount. However, users mounting the volume via /etc/fstab must explicitly specify this mount option to delay the system from mounting the volume until after the network has been enabled.
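For example, an /etc/fstab entry carrying _netdev could look like the following (device and mount point are illustrative):

              /dev/sdb1   /u01   ocfs2   _netdev,defaults   0 0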
atime_quantum=nrsec
The file system will not update atime unless this number of seconds has passed since the last update. Set to zero to always update atime. The default is 60 seconds.
relatime
The file system only updates atime if the previous atime is older than mtime or ctime.
noatime
The file system will not update access time.
acl / noacl
Enables / disables POSIX ACLs (Access Control Lists) support.
user_xattr / nouser_xattr
Enables / disables Extended User Attributes.
commit=nrsec
Sync all data and metadata every nrsec seconds. The default value is 5 seconds. Setting it to zero reverts to the default.
data=ordered / data=writeback
Specifies the handling of file data during metadata journalling.
ordered
This is the default mode. All data is forced directly out to the main file system prior to its metadata being committed to the journal.
writeback
Data ordering is not preserved - data may be written into the main file system after its metadata has been committed to the journal. This is rumored to be the highest-throughput option. While it guarantees internal file system integrity, it can allow old data to appear in files after a crash and journal recovery.
datavolume
This mount option has been deprecated in OCFS2 1.6. It was used in the past (OCFS2 1.2 and OCFS2 1.4) to force the Oracle RDBMS to issue direct IOs to the hosted data files, control files, redo logs, archive logs, voting disk, cluster registry, etc. It has been deprecated because it is no longer required. Oracle RDBMS users should instead use the init.ora parameter, filesystemio_options, to enable direct IOs.
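As noted above, the init.ora parameter replaces this mount option. A typical setting is shown below; directio is one of several possible values, so consult the Oracle documentation for the value appropriate to your release:

              filesystemio_options = directio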
errors=remount-ro / errors=panic
Defines the behavior when an error is encountered: either remount the file system read-only, or panic and halt the system. By default, the file system is remounted read-only.
localflocks
This disables cluster-aware flock(2).
intr / nointr
The default is intr, which allows signals to interrupt cluster operations. nointr disables signals during cluster operations.
ro Mount the file system read-only.
rw Mount the file system read-write.
SEE ALSO
mkfs.ocfs2(8), fsck.ocfs2(8), tunefs.ocfs2(8), mounted.ocfs2(8), debugfs.ocfs2(8), o2cb(7)
AUTHORS
Oracle Corporation
COPYRIGHT
Copyright (C) 2004, 2010 Oracle. All rights reserved.
Version 1.4.3 February 2010 mount.ocfs2(8)