Can someone please share their process for kernel-patching a Red Hat Enterprise Linux 4 system running Veritas Cluster Server 5? It's composed of a primary and a backup node; the resources are DB, disk, and NIC.
It doesn't need to be detailed, just give me the steps, like:
login to the backup node and update the kernel
reboot the backup node
check the kernel is updated
failover the cluster and check if the resources are up on the backup node
patch the primary node with the new kernel and reboot
fail the cluster back over to the primary node
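For what it's worth, the steps above can be sketched with the VCS 5 command line. This is only a sketch: the service group name db_grp and the node names node1/node2 are hypothetical placeholders, and the exact kernel RPM name depends on your errata level.

```shell
# --- On the backup node (node2): install the new kernel ---
# Use -i (install), not -U, so the old kernel stays available as a boot fallback.
rpm -ivh kernel-<new-version>.rpm   # <new-version> is a placeholder for your errata kernel
reboot

# After reboot: confirm the new kernel is running
uname -r

# --- Fail the service group over to the freshly patched backup node ---
hagrp -switch db_grp -to node2      # db_grp/node2 are hypothetical names
hastatus -sum                       # verify the DB, disk, and NIC resources are ONLINE on node2

# --- Patch the primary node (node1) the same way, reboot, then fail back ---
hagrp -switch db_grp -to node1
hastatus -sum
```

`hagrp -switch` and `hastatus -sum` are the standard VCS commands for a controlled switchover and for checking group/resource state; run `hastatus -sum` after each switch before proceeding.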
I want to extend a Veritas file system that runs on a Veritas cluster and is mounted on node2.
-- System    State      Frozen
A  node1     RUNNING    0
A  node2     RUNNING    0

-- GROUP STATE
-- Group     System     Probed ... (1 Reply)
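For the file-system-extension question above, the usual online approach is to grow the VxVM volume and the VxFS file system in one step with vxresize, run on the node where the file system is mounted. A sketch only — the disk group datadg, volume datavol, and mount point /data are hypothetical names:

```shell
# Check the current size of the mounted VxFS file system
df -h /data                                   # /data is a hypothetical mount point

# Grow the volume and the file system together, online
/etc/vx/bin/vxresize -g datadg datavol +2g    # add 2 GB; datadg/datavol are hypothetical
```

Because vxresize grows both the volume and the file system atomically, no unmount or cluster failover is needed, provided there is free space in the disk group.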
I am working with a Sun StorageTek 6540 disk array connected to two Sun Fire V490 servers. After taking one of the V490 nodes in the cluster down to single-user mode, I installed the latest cluster patch from Oracle. After the patch completed, the system rebooted but failed to rejoin the... (2 Replies)
Yesterday my customer told me to expect a VCS upgrade in the future. He also plans to stop using HDS storage and move to EMC.
I'm thinking of migrating to a Sun Cluster setup instead.
My plan is as follows: leave the existing VCS intact as a fallback plan,
then install and build Sun Cluster on... (5 Replies)
I need to update my Red Hat AS 4 kernel with kernel-2.6.9-67.0.20.EL.src.rpm.
When I run this:
# rpm -ivh kernel-2.6.9-67.0.20.EL.src.rpm
warning: user brewbuilder does not exist - using root
warning: group brewbuilder does not exist - using root
warning: user brewbuilder does... (2 Replies)
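A note on the question above: the brewbuilder warnings are harmless — that user/group only exists on Red Hat's build systems, so rpm falls back to root ownership. More importantly, installing a .src.rpm only unpacks sources (under /usr/src/redhat on RHEL 4); it does not update the running kernel. A sketch of the two usual paths, assuming a matching binary RPM is available from RHN (the `<arch>` placeholder stands for your architecture, e.g. i686 or x86_64):

```shell
# Path 1: install the prebuilt binary kernel RPM instead of the source RPM.
# Use -i so the old kernel remains as a boot fallback.
rpm -ivh kernel-2.6.9-67.0.20.EL.<arch>.rpm

# Path 2: if you really need to build from the source RPM first:
rpmbuild --rebuild kernel-2.6.9-67.0.20.EL.src.rpm
# ...then install the resulting binary RPM from the RPMS/<arch> build directory.
```

After installing and rebooting, `uname -r` should report 2.6.9-67.0.20.EL.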
Discussion started by: itik
VOTEQUORUM_LEAVING(3)     Corosync Cluster Engine Programmer's Manual     VOTEQUORUM_LEAVING(3)

NAME
votequorum_leaving - Tell other nodes that we are leaving the cluster
SYNOPSIS
int votequorum_leaving(votequorum_handle_t handle);

DESCRIPTION
The votequorum_leaving function is used to tell the other nodes in the cluster that this node is leaving. They will (when the node actually leaves) reduce quorum to keep the cluster running without this node.
This function should only be called if it is known that the node is being shut down for a known reason and could be out of the cluster for
an extended period of time.
Normal behaviour is for the cluster to reduce the total number of votes, but NOT expected_votes, when a node leaves the cluster, so the cluster could become inquorate. This is correct behaviour and is there to prevent split-brain.
Do NOT call this function unless you know what you are doing.
RETURN VALUE
This call returns CS_OK if successful; otherwise an error is returned.
The errors are undocumented.
SEE ALSO
votequorum_overview(8), votequorum_initialize(3), votequorum_finalize(3), votequorum_dispatch(3), votequorum_fd_get(3),
corosync Man Page 2009-01-26 VOTEQUORUM_LEAVING(3)
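A minimal usage sketch for the call documented above, assuming the conventional corosync header `<corosync/votequorum.h>` and the votequorum_initialize/votequorum_finalize calls listed in SEE ALSO (this is an illustration, not code from the man page; error values are printed numerically since the errors are undocumented):

```c
#include <stdio.h>
#include <corosync/corotypes.h>
#include <corosync/votequorum.h>

int main(void)
{
    votequorum_handle_t handle;
    cs_error_t err;

    /* Connect to the votequorum service; no callbacks are needed here. */
    err = votequorum_initialize(&handle, NULL);
    if (err != CS_OK) {
        fprintf(stderr, "votequorum_initialize failed: %d\n", err);
        return 1;
    }

    /* Announce a planned, possibly extended departure so the remaining
     * nodes reduce quorum when this node actually leaves.  Per the man
     * page: do NOT call this unless you know what you are doing. */
    err = votequorum_leaving(handle);
    if (err != CS_OK)
        fprintf(stderr, "votequorum_leaving failed: %d\n", err);

    votequorum_finalize(handle);
    return (err == CS_OK) ? 0 : 1;
}
```

The call only announces the departure; quorum is actually reduced when the node leaves, so this would typically run from a shutdown script just before stopping corosync.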