Operating Systems > Linux > Red Hat: PaceMaker Cluster Fence Device
Posted by Padow1 on 09-13-2016, 03:16 PM
You need to use whichever fence device is appropriate, not look for the easiest one. From your question, I'm guessing that you may not understand what the fence device is for. Pacemaker needs to be able to force-reboot nodes to prevent a situation where nodes lose contact with one another and multiple hosts attempt to start the same services. Your fence device can be an HP iLO, a Dell DRAC, a Cisco UCS Manager, VMware vSphere, RHEV-M, etc. It needs to be something that can forcibly control the power of your system from outside the system itself.

In your case, you should probably use fence_vmware_soap, which is provided by the fence-agents-vmware-soap.x86_64 package.
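For example, here is a minimal sketch of wiring that up with pcs on RHEL 7. The vCenter address, credentials, and pcmk_host_map values are placeholders; check the full option list with pcs stonith describe fence_vmware_soap:

    # install the agent and confirm pacemaker can see it
    yum install -y fence-agents-vmware-soap
    pcs stonith list | grep vmware

    # create the STONITH resource (hypothetical host, credentials and VM names)
    pcs stonith create vmfence fence_vmware_soap \
        ipaddr=vcenter.example.com login=fenceuser passwd=secret \
        ssl=1 ssl_insecure=1 \
        pcmk_host_map="node1:node1-vm;node2:node2-vm"

Once that is in place, pcs stonith fence <node> is a quick way to prove the device really can power-cycle a node.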
10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

electric fence

hi, I have downloaded Electric Fence from the website, but I don't know how to install it on my system. Please tell me the procedure for installing new packages on my system. (2 Replies)
Discussion started by: m.mujahid

2. UNIX for Dummies Questions & Answers

Fence status - beginners problems!!

Hi, I'd like to try and capture a fence status from Maestro and move it to another file, but each time I try it fails. For example, when using Maestro (conman) I enter: status I then get the fence status showing as 0 I then create a .dat file: touch statustest.dat If I... (0 Replies)
Discussion started by: tjhorwood

3. Solaris

Solaris cluster scdidadm Inquiry on device failed.

Solaris 10, Solaris Cluster 3.2, two node cluster, all software installed successfully, all nodes joined the cluster. But snod2 didn't recognize disks as DID devices and I can't make a quorum device. snod1#/usr/cluster/bin/cluster status === Cluster Nodes === --- Node Status --- Node... (2 Replies)
Discussion started by: Uran

4. Solaris

Increasing Raw device Sun Cluster 3.0

Hi All, I would like to know the procedure for increasing shared volume space in Sun Cluster. Currently the configuration is like this: Main stripe oradb1/d91 2 1 /dev/did/rdsk/d35s0 1 /dev/did/rdsk/d36s0 = Total 49 GB; oradb1/d94 -p oradb1/d91 -o 88080480 -b 14680064 == Total 7 GB... (4 Replies)
Discussion started by: sahil_shine

5. Red Hat

Oracle ha from rgmanager to pacemaker

Hi, I've got one question about the Oracle rgmanager resource agent and the OCF script. The rgmanager oracledb.sh script has an option called "vhost" which sets the "ORACLE_HOSTNAME" value when you have different network cards on the same machine and different Oracle instances. In ocf:heartbeat:... (0 Replies)
Discussion started by: lantuin

6. Red Hat

RHEL 7 PaceMaker Fence Agent Options

My options are getting very limited very fast on what agent to use with my 2 node RHEL 7 VM cluster. fence_vmware_soap "plug id"? I have an ESX resource cluster of 7 blades being managed by a vSphere 6.0 server. I found the blade that the cluster VM(s) is housed on. WTF is the plug id... (3 Replies)
Discussion started by: mrmurdock
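On the "plug id" question in that thread: for fence_vmware_soap the plug (the port parameter) is the VM's name or UUID as vCenter reports it. A hedged sketch of enumerating them, with a placeholder host and credentials:

    # ask vCenter which VM names/UUIDs the agent can fence
    fence_vmware_soap -a vcenter.example.com -l fenceuser -p secret \
        -z --ssl-insecure -o list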

7. AIX

IBM Power Linux Cluster Fence device on Power8 Platform

I wasn't quite sure which forum to post in. What fence device is typically configured for a Power Linux PaceMaker cluster running on the POWER8 platform (S822 hardware model), or what should be ordered with the S822 for use as a fence device? (5 Replies)
Discussion started by: mrmurdock

8. Solaris

Solaris Cluster Device Problem

I built a two node cluster (node1, node2) in VirtualBox. To these two nodes I added 5 shared disks (each node also has its own OS disk): 1 shared disk for VTOC, 2 shared disks for the NFS resource group, and 2 shared disks for the WEB resource group. When I finished my work, the two nodes were OK and the shared disk... (4 Replies)
Discussion started by: sonofsunra

9. Red Hat

[HA] Red Hat 7, pacemaker and start/stop scripts

Hi there, I am wondering if I could add start/stop ksh scripts provided by a 3rd party to the cluster... I read that the script must be OCF/LSB compliant; however, in AIX I can just set up two separate scripts for starting and stopping an application. Can something similar be done under a RH Linux cluster? Cheers, c (1 Reply)
Discussion started by: cyjan

10. Shell Programming and Scripting

Bash script to add multiple resources to NFS pacemaker cluster

All, I'm looking for some guidance on how to accomplish automating the addition of exports to an HA Pacemaker NFS cluster. I would like to do it in bash for logistics reasons. The resource creation command looks like this: pcs resource create nfs-b2b-hg-media-10.1 exportfs... (6 Replies)
Discussion started by: hburnswell
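A minimal bash sketch of the kind of loop that last thread is after, using the ocf:heartbeat:exportfs agent; the export paths, client spec, fsids, and group name are hypothetical placeholders:

    #!/bin/bash
    # Add several exportfs resources to an existing Pacemaker group.
    CLIENTSPEC="10.1.0.0/24"     # hypothetical client network
    GROUP="nfsgroup"             # hypothetical existing resource group
    fsid=10                      # fsid must be unique per export

    for dir in /exports/media1 /exports/media2 /exports/media3; do
        pcs resource create "nfs-$(basename "$dir")" ocf:heartbeat:exportfs \
            clientspec="$CLIENTSPEC" \
            options=rw,sync,no_root_squash \
            directory="$dir" \
            fsid="$fsid" \
            --group "$GROUP"
        fsid=$((fsid + 1))
    done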
ccs_tool(8)                                                      ccs_tool(8)

NAME
       ccs_tool - The tool used to make online updates of CCS config files.

SYNOPSIS
       ccs_tool [OPTION].. <command>

DESCRIPTION
       ccs_tool is part of the Cluster Configuration System (CCS). It is used
       to make online updates to cluster.conf. It can also be used to upgrade
       old-style (GFS <= 6.0) CCS archives to the new XML cluster.conf
       format.

OPTIONS
       -h     Help. Print out the usage.

       -V     Print the version information.

       Sub-commands have their own options; see below for more detail.

COMMANDS
       addnode [options] <node> [<fenceoption>=<value>]...
              Adds a new node to the cluster configuration file. Fencing
              device options are specified as key=value pairs (as many as
              required) and are entered into the configuration file as is.
              See the documentation for your fencing agent for more details
              (e.g. a powerswitch fence device may need to know which port
              the node is connected to).

              Options:
              -v <votes>       Number of votes for this node (mandatory).
              -n <nodeid>      Node id for this node (optional).
              -i <interface>   Network interface to use for this node.
                               Mandatory if the cluster is using multicast
                               as transport; forbidden if not.
              -m <multicast>   Multicast address for the cluster. Only
                               allowed on the first node to be added to the
                               file. Subsequent nodes will use either
                               multicast or broadcast depending on the
                               properties of the first node.
              -f <fencedevice> Name of the fence device to use for this
                               node. The fence device section must already
                               have been added to the file, probably using
                               the addfence command.
              -c <file>        Config file to use. Defaults to
                               /etc/cluster/cluster.conf.
              -o <file>        Output file. Defaults to the same as -c.
              -C               Don't run "ccs_tool update" after changing
                               the file. The update happens by default if
                               the input file is the same as the output
                               file.
              -F               Force a "ccs_tool update" even if the input
                               and output files are different.

       delnode [options] <node>
              Deletes a node from the cluster configuration file. Note:
              there is no "edit" command, so to change the properties of a
              node you must delete it and add it back in with the new
              properties.

              Options: -c, -o, -C and -F, as for addnode.

       addfence [options] <name> <agent> [<option>=<value>]...
              Adds a new fence device section to the cluster configuration
              file. <agent> is the name of the fence agent that controls
              the device. The options following are entered as key=value
              pairs; see the fence agent documentation for details (e.g.
              you may need to enter the IP address and username/password
              for a powerswitch fencing device).

              Options: -c, -o, -C and -F, as for addnode.

       delfence [options] <node>
              Deletes a fencing device from the cluster configuration file.
              delfence will allow you to remove a fence device that is in
              use by nodes. This is to allow changes to be made, but be
              aware that it may produce an invalid configuration file if
              you don't add it back in again.

              Options: -c, -o, -C and -F, as for addnode.

       lsnode [options]
              Lists the nodes in the configuration file. This is (hopefully
              obviously) not necessarily the same as the nodes currently in
              the cluster, but it should be a superset.

              Options:
              -v         Verbose. Lists all the properties of the node, and
                         the node-specific properties of the fence device
                         too.
              -c <file>  Config file to use. Defaults to
                         /etc/cluster/cluster.conf.

       lsfence [options]
              Lists all the fence devices in the cluster configuration
              file.

              Options:
              -v         Verbose. Lists all the properties of the fence
                         device rather than just the names and agents.
              -c <file>  Config file to use. Defaults to
                         /etc/cluster/cluster.conf.

       create [options] <clustername>
              Creates a new, skeleton, configuration file. Note that
              "create" on its own will not create a valid configuration
              file; fence agents and nodes will need to be added to it
              before handing it over to ccsd. The new configuration file
              will have a version number of 1. Subsequent addnode/delnode/
              addfence/delfence operations will increment the version
              number by 1 each time.

              Options:
              -c <file>  Config file to create. Defaults to
                         /etc/cluster/cluster.conf.

       addnodeids
              Adds node ID numbers to all the nodes in cluster.conf. In
              RHEL4, node IDs were optional and assigned by cman when a
              node joined the cluster. In RHEL5 they must be pre-assigned
              in cluster.conf. This command will not change any node IDs
              that are already set in cluster.conf; it will simply add
              unique node ID numbers to nodes that do not already have
              them.

SEE ALSO
       cluster.conf(5)
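To make that workflow concrete, here is a hedged sketch of building a minimal cluster.conf with these commands; the cluster, fence device, and node names are hypothetical, and the fence-agent key=value options depend entirely on your device:

    # skeleton config in a scratch file, then a fence device and a node using it
    ccs_tool create -c /tmp/cluster.conf mycluster
    ccs_tool addfence -c /tmp/cluster.conf apc1 fence_apc \
        ipaddr=apc.example.com login=apc passwd=secret
    ccs_tool addnode -c /tmp/cluster.conf -v 1 -f apc1 node1 port=1
    ccs_tool lsnode -c /tmp/cluster.conf -v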