AIX / IBM Power Linux — Cluster Fence device on Power8 Platform
Posted by Padow1 on 09-13-2016, 12:13 PM
On Power systems the fence agent would have to go through the HMC. I don't see an HMC fence agent packaged for CentOS. If you are paying for RHEL subscriptions, you could open a ticket with Red Hat and ask them to add one upstream.

The fence agents are in the main channel for RHEL/CentOS systems. You should be able to use the following command to list the available agents; other Linux distros have an equivalent query through their own package manager.

Code:
yum list 'fence-agents-*'
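
If you want to narrow that listing down to anything HMC/LPAR related, and then inspect whatever agent you do end up installing, something along these lines should work (fence_lpar is only an example name here; substitute whatever agent actually shows up in your repos, and this assumes pcs is installed from the pacemaker/pcs packages):

Code:
# look for LPAR/HMC-capable agent packages in the listing
yum list 'fence-agents-*' | grep -iE 'lpar|hmc'

# once an agent package is installed, pcs can show what is available
# and what parameters a given agent takes
pcs stonith list
pcs stonith describe fence_lpar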

Here is a list of all of the packages in the base channel for CentOS 7 on ppc64:

Code:
http://mirror.centos.org/altarch/7/os/ppc64/Packages/
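
For what it's worth, if you do get hold of an HMC-capable agent (fence_lpar is the usual name on RHEL-based systems, though I'm assuming here that a build exists for your arch), creating the stonith resource would look roughly like the sketch below. The HMC address, credentials, managed-system and LPAR names are placeholders, and option names vary between agent versions, so check "pcs stonith describe <agent>" first.

Code:
# placeholder values -- replace with your HMC address, credentials,
# managed system name and LPAR names
pcs stonith create hmc_fence fence_lpar \
    ipaddr=hmc.example.com login=hscroot passwd=secret \
    secure=1 managed=Server-8247-22L-SN123456 \
    pcmk_host_map="node1:lpar1;node2:lpar2"

# verify the device before relying on it
pcs stonith show hmc_fence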

 

ccs_tool(8)

NAME
       ccs_tool - The tool used to make online updates of CCS config files.

SYNOPSIS
       ccs_tool [OPTION].. <command>

DESCRIPTION
       ccs_tool is part of the Cluster Configuration System (CCS). It is used to make online updates to cluster.conf. It can also be used
       to upgrade old style (GFS <= 6.0) CCS archives to the new xml cluster.conf format.

OPTIONS
       -h     Help. Print out the usage.

       -V     Print the version information.

       Sub-commands have their own options; see below for more detail.

COMMANDS
       addnode [options] <node> [<fenceoption=value>]...
              Adds a new node to the cluster configuration file. Fencing device options are specified as key=value pairs (as many as
              required) and are entered into the configuration file as is. See the documentation for your fencing agent for more details
              (eg a powerswitch fence device may need to know which port the node is connected to).

              Options:

              -v <votes>        Number of votes for this node (mandatory)
              -n <nodeid>       Node id for this node (optional)
              -i <interface>    Network interface to use for this node. Mandatory if the cluster is using multicast as transport.
                                Forbidden if not.
              -m <multicast>    Multicast address for cluster. Only allowed on the first node to be added to the file. Subsequent nodes
                                will use either multicast or broadcast depending on the properties of the first node.
              -f <fencedevice>  Name of fence device to use for this node. The fence device section must already have been added to the
                                file, probably using the addfence command.
              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf
              -o <file>         Output file. Defaults to the same as -c
              -C                Don't run "ccs_tool update" after changing file. This will happen by default if the input file is the same
                                as the output file.
              -F                Force a "ccs_tool update" even if the input and output files are different.

       delnode [options] <node>
              Delete a node from the cluster configuration file. Note: there is no "edit" command, so to change the properties of a node
              you must delete it and add it back in with the new properties.

              Options:

              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf
              -o <file>         Output file. Defaults to the same as -c
              -C                Don't run "ccs_tool update" after changing file. This will happen by default if the input file is the same
                                as the output file.
              -F                Force a "ccs_tool update" even if the input and output files are different.

       addfence [options] <name> <agent> [<option>=<value>]...
              Adds a new fence device section to the cluster configuration file. <agent> is the name of the fence agent that controls the
              device. The options following are entered as key-value pairs. See the fence agent documentation for details about these;
              eg you may need to enter the IP address and username/password for a powerswitch fencing device.

              Options:

              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf
              -o <file>         Output file. Defaults to the same as -c
              -C                Don't run "ccs_tool update" after changing file. This will happen by default if the input file is the same
                                as the output file.
              -F                Force a "ccs_tool update" even if the input and output files are different.

       delfence [options] <node>
              Deletes a fencing device from the cluster configuration file. delfence will allow you to remove a fence device that is in
              use by nodes. This is to allow changes to be made, but be aware that it may produce an invalid configuration file if you
              don't add it back in again.

              Options:

              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf
              -o <file>         Output file. Defaults to the same as -c
              -C                Don't run "ccs_tool update" after changing file. This will happen by default if the input file is the same
                                as the output file.
              -F                Force a "ccs_tool update" even if the input and output files are different.

       lsnode [options]
              List the nodes in the configuration file. This is (hopefully obviously) not necessarily the same as the nodes currently in
              the cluster, but it should be a superset.

              Options:

              -v                Verbose. Lists all the properties of the node, and the node-specific properties of the fence device too.
              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf

       lsfence [options]
              List all the fence devices in the cluster configuration file.

              Options:

              -v                Verbose. Lists all the properties of the fence device rather than just the names and agents.
              -c <file>         Config file to use. Defaults to /etc/cluster/cluster.conf

       create [options] <clustername>
              Create a new, skeleton, configuration file. Note that "create" on its own will not create a valid configuration file. Fence
              agents and nodes will need to be added to it before handing it over to ccsd. The new configuration file will have a version
              number of 1. Subsequent addnode/delnode/addfence/delfence operations will increment the version number by 1 each time.

              Options:

              -c <file>         Config file to create. Defaults to /etc/cluster/cluster.conf

       addnodeids
              Adds node ID numbers to all the nodes in cluster.conf. In RHEL4, node IDs were optional and assigned by cman when a node
              joined the cluster. In RHEL5 they must be pre-assigned in cluster.conf. This command will not change any node IDs that are
              already set in cluster.conf, it will simply add unique node ID numbers to nodes that do not already have them.

SEE ALSO
       cluster.conf(5)
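
As a rough illustration of the workflow the man page describes (the cluster, fence device, and node names below are made up), building a minimal cluster.conf with ccs_tool might look like:

Code:
# create a skeleton cluster.conf (version 1)
ccs_tool create mycluster

# add a fence device section; key=value options depend on the agent
ccs_tool addfence myhmc fence_lpar ipaddr=hmc.example.com login=hscroot

# add a node that uses that fence device (votes are mandatory)
ccs_tool addnode -v 1 -f myhmc node1 port=lpar1

# check what ended up in the file
ccs_tool lsfence -v
ccs_tool lsnode -v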