11-18-2013
What the cluster does is the same as I mentioned in my initial post:
in case of failure, it activates the volume group on another node and mounts the filesystem.
Later, if the first node becomes available again, the cluster knows the volume group is active on the other node, so it will not activate it there.
You can perform these actions yourself; just don't use /etc/fstab or any other script to mount the filesystem automatically after reboot.
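For reference, the manual failover described above amounts to something like the following (the VG name `datavg`, LV name `datalv`, and mount point `/data` are hypothetical; run this only after confirming the volume group is not active on the other node):

```shell
# On the takeover node: activate the volume group, then mount the filesystem.
vgchange -a y datavg
mount /dev/datavg/datalv /data

# To hand it back later: unmount and deactivate here first,
# then activate and mount on the original node.
umount /data
vgchange -a n datavg
```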
OCF_HEARTBEAT_LVM(7) OCF resource agents OCF_HEARTBEAT_LVM(7)
NAME
ocf_heartbeat_LVM - Controls the availability of an LVM Volume Group
SYNOPSIS
LVM [start | stop | status | monitor | meta-data | validate-all]
DESCRIPTION
Resource script for LVM. It manages a Linux Volume Manager (LVM) volume group as an HA resource.
SUPPORTED PARAMETERS
volgrpname
The name of the volume group.
(required, string, no default)
exclusive
If set, the volume group will be activated exclusively. This option works in one of two ways. If the volume group has the cluster
attribute set, then the volume group will be activated exclusively using clvmd across the cluster. If the cluster attribute is not set,
the volume group will be activated exclusively through the use of the volume_list filter in lvm.conf. In the filter scenario, the LVM
agent verifies that pacemaker's configuration will result in the volume group only being active on a single node in the cluster and
that the local node's volume_list filter will prevent the volume group from activating outside of the resource agent. On activation
this agent claims the volume group through the use of a unique tag, and then overrides the volume_list field in a way that allows the
volume group to be activated only by the agent. To use exclusive activation without clvmd, the volume_list in lvm.conf must be
initialized. If volume groups exist locally that are not controlled by the cluster, such as the root volume group, make sure those
volume groups are listed in the volume_list so they will be allowed to activate on bootup.
(optional, boolean, default false)
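For the non-clustered (volume_list) scenario described above, a minimal lvm.conf fragment might look like the following (the VG name `rootvg` is illustrative; list whatever local volume groups must activate at boot):

```
# /etc/lvm/lvm.conf (fragment)
activation {
    # Only VGs named here may activate outside the resource agent.
    # The cluster-managed VG is deliberately omitted, so only the
    # LVM agent (via its tag override) can activate it.
    volume_list = [ "rootvg" ]
}
```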
tag
If "exclusive" is set on a non-clustered volume group, this overrides the tag to be used.
(optional, string, default "pacemaker")
partial_activation
If set, the volume group will be activated even if only some of its physical volumes are available. Setting this to true helps when you are
using mirrored logical volumes.
(optional, string, default "false")
SUPPORTED ACTIONS
This resource agent supports the following actions (operations):
start
Starts the resource. Suggested minimum timeout: 30.
stop
Stops the resource. Suggested minimum timeout: 30.
status
Performs a status check. Suggested minimum timeout: 30.
monitor
Performs a detailed status check. Suggested minimum timeout: 30. Suggested interval: 10.
methods
Suggested minimum timeout: 5.
meta-data
Retrieves resource agent metadata (internal use only). Suggested minimum timeout: 5.
validate-all
Performs a validation of the resource configuration. Suggested minimum timeout: 5.
EXAMPLE
The following is an example configuration for an LVM resource using the crm(8) shell:
primitive p_LVM ocf:heartbeat:LVM \
    params volgrpname=string \
    op monitor depth="0" timeout="30" interval="10"
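On clusters managed with pcs rather than the crm shell, a roughly equivalent configuration could be created like this (the resource and VG names are illustrative, not from the man page):

```shell
# Create an LVM resource for the hypothetical volume group "datavg".
pcs resource create p_LVM ocf:heartbeat:LVM volgrpname=datavg \
    op monitor timeout=30s interval=10s
```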
SEE ALSO
http://www.linux-ha.org/wiki/LVM_(resource_agent)
AUTHOR
Linux-HA contributors (see the resource agent source for information about individual authors)
resource-agents UNKNOWN 06/09/2014 OCF_HEARTBEAT_LVM(7)