ServiceGuard cluster & volume group failover
Post 302253315 by Wotan31 in Operating Systems > HP-UX, Friday 31 October 2008, 10:33 AM

I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not come up automatically on the other node.

I have to manually do a "vgchange -a y vgname" on the node before the package will come up. If I fail it back to the original node, I once again have to issue the vgchange command manually before the package will start.

What am I missing? I see some cluster-related options in the vgchange man page, but I don't understand if, how, or when to use them.

What do I need to do for this volume group to automatically come up when I fail the package over?

TIA!
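For reference, ServiceGuard is meant to activate and deactivate a package's volume group itself, so the usual cause of this symptom is that the VG was never declared to the package (and, for exclusive activation, never marked cluster-aware). Below is a minimal sketch assuming a legacy-style package control script; vg01, lvol1, /mnt/data and pkg1 are placeholder names, and the exact steps should be checked against the Managing ServiceGuard manual for your release.

    # 1. Mark the VG cluster-aware so ServiceGuard can activate it exclusively.
    #    The cluster must be running, and the VG must be deactivated first.
    vgchange -a n /dev/vg01
    vgchange -c y /dev/vg01

    # 2. Declare the VG (and its filesystems) in the package control script
    #    (e.g. /etc/cmcluster/pkg1/pkg1.cntl) so the package activates it on startup:
    VGCHANGE="vgchange -a e"        # exclusive activation on the node running the package
    VG[0]="vg01"
    LV[0]="/dev/vg01/lvol1"
    FS[0]="/mnt/data"
    FS_TYPE[0]="vxfs"

    # 3. Copy the updated control script to the same path on both nodes and,
    #    if the package ASCII configuration also changed, re-apply it, then test:
    cmapplyconf -P pkg1.conf
    cmrunpkg -n <adoptive_node> pkg1

Once the VG is listed in the VG[] array, the control script runs the vgchange activation itself on package start and deactivates the VG on package halt, so the manual "vgchange -a y" step should no longer be needed.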
 
