ServiceGuard cluster & volume group failover

Posted by Wotan31 on Friday, 31 October 2008, 10:33 AM

I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not activate automatically there.

I have to manually run "vgchange -a y vgname" on that node before the package will come up. If I fail it back to the original node, I again have to issue the vgchange command manually before the package will start.

What am I missing? I see some cluster-related options in the vgchange man page, but I don't understand if/how/when to use them.

What do I need to do for this volume group to automatically come up when I fail the package over?
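For reference, the cluster-related vgchange options I'm seeing in the man page look like they'd be used roughly as follows. This is just a sketch based on my reading, with my volume group name (vgname) filled in -- I haven't confirmed this is the right sequence:

    # One-time: mark the VG as cluster-aware so Serviceguard can
    # coordinate its activation between the nodes (cluster must be up)
    vgchange -c y vgname

    # Activate in exclusive mode -- only one node may have the VG
    # active at a time; this is what a package control script would do
    vgchange -a e vgname

    # Deactivate on the node that is releasing the package
    vgchange -a n vgname

I also see that legacy package control scripts carry a VG[] list (e.g. VG[0]=vgname) -- is that what makes the package activate the volume group automatically on startup?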

TIA!
 

cmrunnode(1m)                                                    cmrunnode(1m)

NAME
       cmrunnode - run a node in a high availability cluster

SYNOPSIS
       cmrunnode [-v] [node_name...] [-t | -w none]

DESCRIPTION
       cmrunnode causes a node to start its cluster daemon and join the
       existing cluster. This command verifies the network configuration
       before causing the node to start its cluster daemon.

       To start a cluster on one of its nodes, a user must either be
       superuser (UID=0) or have an access policy of FULL_ADMIN allowed in
       the cluster configuration file. See access policy in cmquerycl(1m).

       Starting a node will not cause any active packages to be moved to the
       new node. However, if a package is DOWN, has its switching enabled,
       and is able to run on the new node, that package will automatically
       run there.

       If node_name is not specified, the cluster daemon will be started on
       the local node and will join the existing cluster.

   Options
       cmrunnode supports the following options:

       -v           Verbose output will be displayed.

       -t           Test only. Provide an assessment of the package placement
                    without affecting the current state of the nodes or
                    packages. The -w option is not required with -t, as -t
                    does not validate network connectivity but assumes that
                    all the nodes can meet any external dependencies such as
                    EMS resources, package subnets, and storage.

       node_name...
                    Start the cluster daemon on the specified node(s).

       -w none      By default, network probing is performed to check that
                    the network connectivity is the same as when the cluster
                    was configured. Any anomalies are reported before the
                    cluster daemons are started. The -w none option disables
                    this probing. It should only be used if the network
                    configuration is known to be correct from a recent check.

RETURN VALUE
       cmrunnode returns the following values:

       0    Successful completion.
       1    Command failed.

EXAMPLES
       Run the cluster daemon on the current node:

            cmrunnode

       Run the cluster daemons on node1 and node2:

            cmrunnode node1 node2

AUTHOR
       cmrunnode was developed by HP.

SEE ALSO
       cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmruncl(1m),
       cmviewcl(1m), cmeval(1m).

Requires Optional Serviceguard Software                          cmrunnode(1m)