Full Discussion: VCS heartbeat
Operating Systems: Solaris. Post 302331233 by incredible on Saturday, 4 July 2009, 09:15:59 PM

VCS heartbeat

We have a VCS cluster set up and noticed that one of the heartbeat links, qfe3, is showing as DOWN. Both qfe2 and qfe3 had been fine all along. If I were to push in/re-seat the heartbeat cable, would that panic the node or cause any other problem on the system? These are critical production database nodes.
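For reference, before touching the cable I would first confirm what LLT itself reports for each link (lltstat and gabconfig are the standard VCS/LLT utilities, but the exact paths and output format can vary by VCS release, so treat this as a rough sketch):

       /sbin/lltstat -nvv     # per-node, per-link status; qfe3 should show DOWN while qfe2 shows UP
       /sbin/gabconfig -a     # GAB port memberships; ports a and h should still list both nodes

As far as I understand, with qfe2 still UP the cluster is only in jeopardy on the remaining link, so re-seating the qfe3 cable should simply bring that link back rather than panic a node, but I would like confirmation since these are critical production DB nodes.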
 

9 More Discussions You Might Find Interesting

1. Programming

checking the heartbeat of the online user

Hello, can anyone please tell me how I can check the availability of an online user in a client-server environment? This is for a program where lakhs of clients are connected to the server, and the server has to check the availability of every client every minute. So polling every client... (0 Replies)
Discussion started by: shushilmore
0 Replies

2. Linux

linux-heartbeat on Solaris 9

Has anyone installed linux-heartbeat on Solaris 9? If yes, which version? Which is the best compiler to build it: cc, ucbcc, or gcc? What other packages are needed to build it besides m4, autoconf, automake, and libtool? What GNU tools are needed? Thanks a lot. (0 Replies)
Discussion started by: eldiego
0 Replies

3. Cybersecurity

Heartbeat configuring issue

Hi, I'm configuring Linux heartbeat on my two Red Hat boxes and have a few things to clarify. Can I set up heartbeat between two different networks (e.g. the primary server IP is 192.168.x.x and the secondary server IP is 10.48.x.x)? I got the error below on my secondary server once the primary went down ... (2 Replies)
Discussion started by: asela115
2 Replies

4. UNIX for Dummies Questions & Answers

Heartbeat configuring in Redhat

Hi, I'm currently trying to configure Linux heartbeat on my two Linux servers (where an SMPP service is running). My two machines are in two different locations with different networks (the primary is in 192.168.x.x and the secondary is in the 10.48.x.x network). I want to know whether it is possible to... (0 Replies)
Discussion started by: asela115
0 Replies

5. AIX

Heartbeat network down

Hello. I have a cluster with two heartbeat networks, each on a separate VG. Someone varied on those VGs on one node, and after that the heartbeat network went down on both nodes. I varied off those two VGs and the networks came back up, but the cluster is still in UNSTABLE status, and in hacmp.out... (1 Reply)
Discussion started by: phobus
1 Replies

6. Solaris

VCS on Solaris: VCS ERROR V-16-2-13077 (host2) Agent is unable to offline resource(DiskReservation)

Hi all, I am getting the error "VCS ERROR V-16-2-13077" on VCS 4.1 for Solaris 10. I cannot offline host2 when the RAID is bad. I don't know the reason, or how to offline host2 and switch to host1. Please help me, thank you! The message in engine_A.log is: ... (2 Replies)
Discussion started by: ForgetChen
2 Replies

7. UNIX for Advanced & Expert Users

LVS AND HEARTBEAT

Hi there, can anybody help me find a manual or a website covering LVS and Heartbeat for Ubuntu (please, not for Red Hat)? Any help will be much appreciated. neuvin (0 Replies)
Discussion started by: neuvinapp
0 Replies

8. UNIX for Dummies Questions & Answers

Heartbeat network IP range in Clustering

I am trying to do two-node clustering with the Serviceguard package on Red Hat 5, and I am following this link: http://pbraun.nethence.com/doc/sysutils/mcsg.html. There they mention 10.1.1 for network use and 10.1.2 and 10.1.3 for heartbeat use. I have a doubt: how to make a heartbeat... (1 Reply)
Discussion started by: sankarg304
1 Replies

9. Programming

PL/SQL heartbeat Query has errors

Hello, I want to query something simple which works perfectly as a standalone sqlplus query. Table statements: ALTER TABLE MP$PATHLOADER.ISALIVE DROP PRIMARY KEY CASCADE; DROP TABLE MP$PATHLOADER.ISALIVE CASCADE CONSTRAINTS; CREATE TABLE MP$PATHLOADER.ISALIVE ( ISALIVE_PK ... (2 Replies)
Discussion started by: sdohn
2 Replies
cmruncl(1m)                                                         cmruncl(1m)

NAME
       cmruncl - run a high availability cluster

SYNOPSIS
       cmruncl [-f] [-v] [-n node_name...] [-t | -w none]

DESCRIPTION
       cmruncl causes all nodes in a configured cluster, or all nodes specified, to start their cluster daemons and
       form a new cluster.

       To start a cluster, a user must either be superuser (UID=0) or have an access policy of FULL_ADMIN allowed in
       the cluster configuration file. See access policy in cmquerycl(1m).

       This command should only be run when the cluster is not active on any of the configured nodes. This command
       verifies the network configuration before causing the nodes to start their cluster daemons. If a cluster is
       already running on a subset of the nodes, the cmrunnode command should be used to start the remaining nodes
       and force them to join the existing cluster.

       If node_name is not specified, the cluster daemons will be started on all the nodes in the cluster. All nodes
       in the cluster must be available for the cluster to start unless a subset of nodes is specified.

   Options
       cmruncl supports the following options:

       -f              Force cluster startup without the warning message and continuation prompt that are printed
                       with the -n option.

       -v              Verbose output will be displayed.

       -t              Test only. Provide an assessment of the package placement without affecting the current state
                       of the nodes or packages. The -w option is not required with the -t option, as -t does not
                       validate network connectivity but assumes that all the nodes can meet any external
                       dependencies such as EMS resources, package subnets, and storage.

       -n node_name... Start the cluster daemon on the specified subset of node(s).

       -w none         By default, network probing is performed to check that the network connectivity is the same
                       as when the cluster was configured. Any anomalies are reported before the cluster daemons are
                       started. The -w none option disables this probing. The option should only be used if this
                       network configuration is known to be correct from a recent check.

RETURN VALUE
       cmruncl returns the following values:

       0    Successful completion.
       1    Command failed.

EXAMPLES
       Run the cluster daemon:

              cmruncl

       Run the cluster daemons on node1 and node2:

              cmruncl -n node1 -n node2

AUTHOR
       cmruncl was developed by HP.

SEE ALSO
       cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).

Requires Optional Serviceguard Software                             cmruncl(1m)
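As a small worked illustration of the options above (a hedged sketch, not part of the man page; node1 and node2 are simply the example node names from EXAMPLES, and behaviour should be confirmed against your Serviceguard release), a cautious startup might first assess package placement in test mode and only then start the daemons:

       cmruncl -v -t                  # test only: report the package placement assessment, change nothing
       cmruncl -v -n node1 -n node2   # start the cluster daemons verbosely on the named subset of nodes

Per the -w description, cmruncl -w none would skip the pre-start network probe, which is only appropriate when the network configuration is known to be correct from a recent check.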