AIX: Adding a Volume Group to an HACMP Resource Group? (Post 302535182 by funksen, 30 June 2011)
Create a concurrent-capable VG on one node, vary it off, then import it on the other node. Of course, the disk has to be visible on both nodes.
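A rough sketch of that first step (the hdisk and VG names are placeholders, adjust them to your setup):

mkvg -C -y yourvg hdiskx        # -C makes the VG concurrent capable
varyoffvg yourvg                # vary it off before importing on the other node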

Make sure you use the same VG major number on both nodes:

importvg -V majornumber -y yourvg hdiskx
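If you need to find a suitable number, lvlstmajor lists the free major numbers on a node, and ls on the VG's device file shows the number the VG already has (yourvg is a placeholder, as above):

lvlstmajor              # run on both nodes, pick a number that is free on each
ls -al /dev/yourvg      # on the source node: shows the VG's current major number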


I always make sure that my disks have the same hdisk numbers on both nodes, but that is not strictly necessary.
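If the hdisk numbers do differ, the PVID is what actually identifies a disk, so you can match the disks across nodes with lspv:

lspv        # compare the PVID column (second column) on both nodes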


Then discover OS changes in HACMP and add the VG to your resource group (make sure it's varied off before you start the cluster).
Synchronise the cluster and you are done.
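From memory, the smitty path looks roughly like this; the exact menu names vary between HACMP/PowerHA versions:

smitty hacmp
  -> Extended Configuration
       -> Discover HACMP-related Information from Configured Nodes
       -> Extended Resource Configuration      (add the VG to the resource group)
       -> Extended Verification and Synchronization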

I recommend creating the LVs and filesystems before you import the VG on the other node, so you can work with standard LVM commands instead of C-SPOC.
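For example, with jfs2 (LV name, size and mount point are just placeholders):

mklv -t jfs2 -y yourlv yourvg 10                 # 10 logical partitions
crfs -v jfs2 -d yourlv -m /yourmount -A no       # -A no: not mounted at boot, the cluster mounts it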



This is a general description because I don't have the time to make it more detailed, but ask if you run into trouble anywhere.

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

LVM - Extending Logical Volume within Volume Group

Hello, I have a logical volume group of 50 GB, in which I have 2 logical volumes, LogVol01 and LogVol02, both of 10 GB. If I extend LogVol01 by a further 10 GB, it keeps the extended copy after logical volume 2. I want to know where it keeps this information. Regards, Himanshu (3 Replies)
Discussion started by: ghimanshu

2. AIX

Moving a Logical Volume from one Volume Group to Another

Does anyone have any simple methods for moving a current logical volume from one volume group to another? I do not wish to move the data from one physical volume to another. Basically, I want to "relink" the logical volume to exist in a different volume group. Any ideas? (2 Replies)
Discussion started by: krisw

3. AIX

Logical volume name conflict in two volume group

Hello, I am a French computer technician, and I speak English just a little. On AIX 5.3, I have a logical volume name conflict across two volume groups. The first volume, lvnode01, is OK in rootvg and mounted. It is also consistent in the ODM: root # lsvg -l rootvg | grep lvnode01 ... (10 Replies)
Discussion started by: dantares

4. AIX

Resource Group Monitoring

Hi, I have a requirement to monitor the HACMP Resource Groups. At present in my environment, if the Resource Groups fail over from the preferred node to the secondary node, we don't get a notification. Can someone help me in creating a script? I have more than one RG online. (Max 4 Resource Groups in... (2 Replies)
Discussion started by: srnagu

5. AIX

HACMP resource group State not STABLE

Hi, not sure if this is the correct forum to post this on, but maybe a mod could move it if not. When trying to move an HACMP resource group between LPARs on AIX, I receive the following: State not STABLE/RP_RUNNING or ibcasts Join for node 2 rejected, Clearing in join protocol flag... (11 Replies)
Discussion started by: elmesy

6. AIX

HACMP, NFS cross-mount problem. Can not move resource group

Hi, I'm new to HACMP. I set up a cluster with an NFS cross-mount following this guide: kristijan.org, NFS cross-mounts in PowerHA/HACMP. My cluster has two nodes: erp01 and erp02. I'm using NFSv4, and the filesystem for NFS is /sapnfs. The cluster starts without problems, but I cannot move the RG (with... (3 Replies)
Discussion started by: giobuon

7. AIX

Adding existing VG to powerHA Resource group

Hello. I am running AIX 6.1 and PowerHA 6.1. I have an active/active (Prod/Dev) cluster; each side will fail over to the other. On my prod side I have an active volume group with a file system. The VG is imported on both nodes and active (varied on, file system mounted) on the prod... (3 Replies)
Discussion started by: mhenryj

8. Shell Programming and Scripting

Need to create a for loop for adding a disk to a Veritas volume group

Hi experts, I need a script to add disks into a Veritas Volume Manager disk group. For example: # cd /tmp # view disk c6t5d2 c6t2d1 c6t3d7 c6t11d2 c7t11d2 c6t11d6 Normally we add the disks like this: # vxdg -g freedg freedisk01=c6t5d2 # vxdg -g freedg freedisk02=c6t2d1 #... (3 Replies)
Discussion started by: indrajit_preet

9. UNIX for Dummies Questions & Answers

How to create a volume group, logical volume group and file system?

Hi, I want to create a volume group of 200 GB and then create different file systems on it. Please help me out. It becomes confusing when calculating PPs; I don't understand this concept. (2 Replies)
Discussion started by: kamaldev

10. Red Hat

No space in volume group. How to create a file system using existing logical volume

Hello guys, I want to create a file system dedicated to an application installation, but there is no space in the volume group to create a new logical volume. There is enough space in another logical volume, which is mounted on /var. I know we can use that logical volume and create a virtual... (2 Replies)
Discussion started by: vamshigvk475