07-09-2008
Hi,
Can you please confirm what exactly I need to do, as I am not sure?
How is this done:
If your cluster is 3.2 you should not use Network_resources_used any more; just place your logical host in the dependency list.
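For reference, a minimal sketch of what that advice usually translates to on Sun Cluster 3.2, assuming an application resource named app-rs that should depend on a logical hostname resource named lh-rs (both placeholder names). Instead of setting Network_resources_used, the logical host goes into the resource's dependency list:

clrs set -p Resource_dependencies=lh-rs app-rs    # app-rs now depends on the logical host
clrs show -p Resource_dependencies app-rs         # verify the dependency list

The RGM derives the network resources a resource needs from its dependencies, so the deprecated property can simply be left unset.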
10 More Discussions You Might Find Interesting
1. HP-UX
I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not come up automatically on the other node.
I have to manually do a "vgchange -a y vgname" on the node before the package... (5 Replies)
Discussion started by: Wotan31
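For what it's worth, the usual fix for that behaviour is to declare the volume group in the package's control script so ServiceGuard activates it when the package starts; a rough sketch with placeholder names (pkg1, vg01, /data):

# Excerpt from a legacy package control script, e.g. /etc/cmcluster/pkg1/pkg1.cntl (path is a placeholder)
VG[0]="vg01"                        # volume group to activate when the package starts
LV[0]="/dev/vg01/lvol1"; FS[0]="/data"; FS_MOUNT_OPT[0]="-o rw"
# Re-apply the package configuration, then test the switch:
cmapplyconf -P pkg1.conf
cmrunpkg -n node2 pkg1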
2. High Performance Computing
I have just completed a first RTFM of "Veritas Cluster Server Management Console Implementation Guide" 5.1, with a view to assessing it to possibly make our working lives easier.
Unfortunately, at my organisation, getting a test installation would be worse than pulling teeth, so I can't just go... (2 Replies)
Discussion started by: Beast Of Bodmin
3. Solaris
Hi,
We have two Sun SPARC servers clustered (Sun Cluster 3.1). For some reason, System 1 failed over to System 2. Where can I find the logs which could tell me the reason for this failover?
Thanks (5 Replies)
Discussion started by: Mack1982
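In case it is useful, Sun Cluster 3.1 logs its RGM and framework messages through syslog, so a first pass usually looks something like this (a sketch; exact message tags vary):

grep -i "Cluster.RGM" /var/adm/messages    # RGM messages around the time of the failover
ls /var/cluster/logs                       # additional cluster logs (e.g. the command log)
scstat -g                                  # current state of the resource groups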
4. AIX
Hi,
I have a requirement to monitor the HACMP Resource Groups. At present in my environment, if the Resource Groups fail over from the preferred node to the secondary node, we don't get a notification.
Can someone help me in creating a script? I have more than one RG online. (Max 4 Resource Groups in... (2 Replies)
Discussion started by: srnagu
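A minimal sketch of such a notification script, assuming clRGinfo lives at its usual path and admin@example.com is a placeholder recipient; it polls the resource group state from cron and mails when it changes:

#!/bin/ksh
# Hypothetical HACMP RG watcher: run from cron every few minutes.
CLRGINFO=/usr/es/sbin/cluster/utilities/clRGinfo
STATEFILE=/tmp/rg_state.prev
$CLRGINFO > /tmp/rg_state.now 2>/dev/null
if [ -f "$STATEFILE" ] && ! cmp -s "$STATEFILE" /tmp/rg_state.now; then
    mailx -s "HACMP resource group state changed on $(hostname)" admin@example.com < /tmp/rg_state.now
fi
mv /tmp/rg_state.now "$STATEFILE"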
5. Gentoo
How do I fail over the cluster, and with which command? (GNU/Linux)
My Linux version:
2008 x86_64 x86_64 x86_64 GNU/Linux
What precautions do we need to take during failover, if any?
Regards (3 Replies)
Discussion started by: sidharthmellam
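The answer depends on which cluster stack is running; assuming a Heartbeat/Pacemaker CRM stack (resource and node names below are placeholders), a manual failover is typically a resource move followed by a status check:

crm_mon -1                                            # one-shot view of current resource placement
crm_resource --move --resource my_rsc --node node2    # push my_rsc over to node2
# --move works by adding a location constraint, so remember to clear it after testing.

The main prerequisites are that the passive node is online in the cluster and that fencing/STONITH is configured, so a hung node cannot keep shared storage mounted.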
6. AIX
Hi,
I have a 2-node cluster working in active/passive mode (i.e. Node #1 is running, and when it goes down Node #2 takes over).
Now there is a requirement for a mount point, say /test, that should be available on the active node #1, and when node #1 goes down and node #2 takes... (6 Replies)
Discussion started by: aixromeo
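A rough verification sketch for that kind of setup, assuming /test lives on a shared volume group (testvg is a placeholder) that has been added to the resource group through the usual HACMP menus:

/usr/es/sbin/cluster/utilities/clRGinfo    # which node each resource group is online on
lsvg -o | grep testvg                      # the shared VG should be varied on only on the active node
mount | grep /test                         # /test should be mounted only on that node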
7. Solaris
Hello experts -
I am planning to install a Sun Cluster 4.0 zone cluster for failover, and have a few basic doubts.
(1) Where should I install the cluster software binaries? (In the global zone, or the container zone where I am planning to set up the zone failover?)
(2) Or should I perform the installation on... (0 Replies)
Discussion started by: NVA
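For what it's worth, a sketch of the usual approach with Oracle Solaris Cluster 4.0 (the zone-cluster name zc1 is a placeholder): the cluster framework is installed in the global zone of every node, and the zone cluster is then created and managed from the global zone.

pkg install ha-cluster-full        # in the global zone, on each node (package name per the 4.0 IPS repository)
clzonecluster configure zc1        # define the zone cluster from the global zone
clzonecluster install zc1
clzonecluster boot zc1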
8. Solaris
Dear Experts,
Is it possible for Solaris Cluster to fail over to the second node based on scan rate?
I need documentation on whether Solaris Cluster can do this.
Thank You in Advance
Edy (3 Replies)
Discussion started by: edydsuranta
9. Red Hat
Hi Guys,
I am not very familiar with clusters, but I have a few questions; can someone provide an overview, as it would be very helpful for me?
How can I perform a cluster failover test to see that all the services fail over to the other node? If it is using Veritas Cluster, then what kind of... (2 Replies)
Discussion started by: munna529
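If it is Veritas Cluster Server, a controlled failover test is usually just a service-group switch (group and node names below are placeholders):

hastatus -sum                      # current state of all service groups and systems
hagrp -switch app_sg -to node2     # move the service group to the other node
hastatus -sum                      # confirm it came online on node2, then switch back the same way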
10. Solaris
Hi
Well, I would like to know the step-by-step process of adding a mount point to an HAPLUS (HAStoragePlus) resource in Sun Cluster. I got the command below to add a mount point, but not the step-by-step process of adding a mount point to an existing HAStoragePlus resource.
clrs set -p FileSystemMountPoints+=<new_MP>... (3 Replies)
Discussion started by: amity
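A sketch of the usual sequence around that command, assuming an HAStoragePlus resource named haplus-rs and a new mount point /newmp (both placeholders):

# 1. Add the /newmp entry to /etc/vfstab on every node that can host the resource group.
# 2. Append the mount point to the resource property:
clrs set -p FileSystemMountPoints+=/newmp haplus-rs
# 3. Verify the property and that the file system mounts on the active node:
clrs show -p FileSystemMountPoints haplus-rs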
LEARN ABOUT CENTOS
stonith_admin
PACEMAKER(8) System Administration Utilities PACEMAKER(8)
NAME
Pacemaker - Part of the Pacemaker cluster resource manager
SYNOPSIS
stonith_admin mode [options]
DESCRIPTION
stonith_admin - Provides access to the stonith-ng API.
Allows the administrator to add/remove/list devices, check device and host status and fence hosts
OPTIONS
-?, --help
This text
-$, --version
Version information
-V, --verbose
Increase debug output
-q, --quiet
Print only essential output
Commands:
-l, --list=value
List devices that can terminate the specified host
-L, --list-registered
List all registered devices
-I, --list-installed
List all installed devices
-M, --metadata Check the device's metadata
-Q, --query=value
Check the device's status
-F, --fence=value
Fence the named host
-U, --unfence=value
Unfence the named host
-B, --reboot=value
Reboot the named host
-C, --confirm=value
Confirm the named host is now safely down
-H, --history=value
Retrieve last fencing operation
-R, --register=value
Register the named stonith device. Requires: --agent, optional: --option
-D, --deregister=value De-register the named stonith device
-r, --register-level=value
Register a stonith level for the named host. Requires: --index, one or more --device entries
-d, --deregister-level=value
De-register a stonith level for the named host. Requires: --index
Options and modifiers:
-a, --agent=value
The agent (eg. fence_xvm) to instantiate when calling with --register
-e, --env-option=value
-o, --option=value
-T, --tag=value
Identify fencing operations with the specified tag.
Useful when there are multiple entities that might be invoking stonith_admin(8)
-v, --device=value
A device to associate with a given host and stonith level
-i, --index=value
The stonith level (1-9)
-t, --timeout=value
Operation timeout in seconds
--tolerance=value
(Advanced) Do nothing if an equivalent --fence request succeeded less than N seconds earlier
-L, --list-all legacy alias for --list-registered
AUTHOR
Written by Andrew Beekhof
REPORTING BUGS
Report bugs to pacemaker@oss.clusterlabs.org
Pacemaker 1.1.10-29.el7 June 2014 PACEMAKER(8)
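A few usage sketches built from the options above (device and host names are placeholders):

stonith_admin --list-registered              # show all registered fencing devices
stonith_admin --list node2                   # list devices that can fence node2
stonith_admin --reboot node2 --timeout 60    # fence node2 by rebooting it
stonith_admin --confirm node2                # tell the cluster node2 is already safely down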