10 More Discussions You Might Find Interesting
1. HP-UX
Hi,
We have an HP-UX ServiceGuard cluster on OS 11.23. Recently 40+ LUNs were presented to both nodes by the SAN team, but I was asked to mount them on only one node. I created the required VGs/LVs, created VxFS filesystems, and mounted all of them; they are working fine. Now the client has requested those FS on the 2nd node as... (4 Replies)
Discussion started by: prvnrk
2. AIX
Setting up HACMP 6.1 on a two-node cluster. One node works fine and can start properly into STABLE state (VGs varied on, FS mounted, service IP aliased). However, the other node is always stuck in ST_JOINING state. It's taking forever, and you can't stop the cluster either or recover from script... (2 Replies)
Discussion started by: depam
3. UNIX for Advanced & Expert Users
We have a two-node cluster with the OS disk mirrored under SVM. There is a slight disk problem on one of the mirror disks, causing the cluster to panic.
Failure of one mirror disk is causing VCS to panic the node. Why is VCS not able to write to the /var filesystem, as one of the disks is healthy?
... (1 Reply)
Discussion started by: amlanroy
4. Solaris
I have one disk that is reporting I/O errors, but the same LUN mounted on a different node is able to access it without issue. Is there a way to identify which HBA is being used for the LUN without swapping each one out at a time? (4 Replies)
Discussion started by: thmnetwork
5. Solaris
Dear all,
I have created a shared metaset (500 GB) with 3 hosts, of which 2 hosts are in the cluster and 1 is non-cluster. I have taken ownership on the cluster node from the non-cluster node, but the problem is I am unable to mount the file system; it is giving the error "/dev/md/eccdb-ds/d100 or /eccdb-ds... (1 Reply)
Discussion started by: spandhan
6. High Performance Computing
Hi,
I am trying to set up a 2-node cluster environment. Following is what I have:
1. 2 x Sun Ultra 60 - 450MHz procs, 1GB RAM, 9GB HDD, Solaris 10
2. 2 x HBA cards
3. 2 x connection leads to connect the Ultra 60s with the D1000
4. 1 x D1000 storage box
5. 3 x 9GB HDD + 2 x 36GB HDD
First of all,... (1 Reply)
Discussion started by: solman17
7. High Performance Computing
All-
I am new to these forums, so please excuse me if this post is in the wrong place.
I had a node crash in a 4-node cluster, and management has determined this node will not be part of the cluster when it is rebuilt. I am researching how to remove it from the cluster information on the other 3 nodes and... (2 Replies)
Discussion started by: bluescreen
8. HP-UX
Need help, guys!
When running cmrunnode batch I'm getting this error:
cmrunnode : Waiting for cluster to... (1 Reply)
Discussion started by: Tris
9. Solaris
I'm not able to copy the text present on the tree's node to a terminal or other text editor in Solaris. I'm using the <Shift><Control>C and V commands for the same, but the text is not being copied and pasted into the text pad or the terminal window.
While the same is possible in Windows OS using Ctrl+C... (3 Replies)
Discussion started by: friendanoop
10. Shell Programming and Scripting
Hello,
Under ksh I have to run a script on one of the nodes of a Solaris 8 cluster which, at some point, must execute a command on the alternate node:
# rsh <name> "command"
I have to implement this script on all the clusters of my company (a lot of...).
Fortunately, the names of the two nodes... (11 Replies)
Discussion started by: heartwork
VOTEQUORUM_LEAVING(3) Corosync Cluster Engine Programmer's Manual VOTEQUORUM_LEAVING(3)
NAME
votequorum_leaving - Tell other nodes that we are leaving the cluster
SYNOPSIS
#include <corosync/votequorum.h>
int votequorum_leaving(votequorum_handle_t handle);
DESCRIPTION
The votequorum_leaving function is used to tell the other nodes in the cluster that this node is leaving. They will (when the node actually
leaves) reduce quorum to keep the cluster running without this node.
This function should only be called if it is known that the node is being shut down for a known reason and could be out of the cluster for
an extended period of time.
Normal behaviour is for the cluster to reduce the total number of votes, but NOT expected_votes, when a node leaves the cluster, so the
cluster could become inquorate. This is correct behaviour and is there to prevent split-brain.
Do NOT call this function unless you know what you are doing.
RETURN VALUE
This call returns the CS_OK value if successful, otherwise an error is returned.
ERRORS
The errors are undocumented.
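EXAMPLE
The following is a minimal sketch, not part of the original manual, of how votequorum_leaving might be called before a planned, extended shutdown of a node. It assumes only the votequorum_initialize(3) and votequorum_finalize(3) calls referenced in SEE ALSO; the error handling is illustrative.
    #include <stdio.h>

    #include <corosync/corotypes.h>
    #include <corosync/votequorum.h>

    int main(void)
    {
        votequorum_handle_t handle;
        cs_error_t err;

        /* Connect to the votequorum service; no callbacks are needed here. */
        err = votequorum_initialize(&handle, NULL);
        if (err != CS_OK) {
            fprintf(stderr, "votequorum_initialize failed: %d\n", err);
            return 1;
        }

        /* Tell the other nodes that this node is leaving for an extended
           period, so that they reduce quorum once it actually leaves. */
        err = votequorum_leaving(handle);
        if (err != CS_OK)
            fprintf(stderr, "votequorum_leaving failed: %d\n", err);

        /* Close the connection to the votequorum service. */
        votequorum_finalize(handle);
        return (err == CS_OK) ? 0 : 1;
    }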
SEE ALSO
votequorum_overview(8), votequorum_initialize(3), votequorum_finalize(3), votequorum_dispatch(3), votequorum_fd_get(3),
corosync Man Page 2009-01-26 VOTEQUORUM_LEAVING(3)