07-09-2008
Sun Cluster 3.2
Hi,
I have also attached a copy of the /etc/cluster/ccr/rgm_rg_proxy2-rg
file:
bash-3.00# cat rgm_rg_proxy2-rg
ccr_gennum 6
ccr_checksum 53FF13F4E152CAB05ED6D524C74B089C
Unmanaged FALSE
Nodelist 1,2,3,4
Maximum_primaries 1
Desired_primaries 1
Failback FALSE
RG_System FALSE
Resource_list proxy2-HAS-rs,proxy2-zone-rs
RG_dependencies
Global_resources_used *
RG_mode Failover
Implicit_network_dependencies TRUE
Pathprefix
RG_description
Pingpong_interval 3600
RG_project_name
RG_SLM_type manual
RG_SLM_pset_type default
RG_SLM_CPU_SHARES 1
RG_SLM_PSET_MIN 0
RG_affinities
Auto_start_on_new_cluster TRUE
Suspend_automatic_recovery FALSE
Ok_To_Start
RS_proxy2-HAS-rs Type=SUNW.HAStoragePlus:6;Type_version=6;R_description=;On_off_switch=1,2,3,4;Monitored_switch=1,2,3,4;Resource_project_name=;Resource_dependencies=;Resource_dependencies_weak=;Resource_dependencies_restart=;Resource_dependencies_offline_restart=;Extension;FilesystemMountPoints=/opt/zones/mail/proxy2.mail.internal,/opt/zones/mail/proxy2.mail.internal/mounts/var
RS_proxy2-zone-rs Type=SUNW.gds:6;Type_version=6;R_description=;On_off_switch=1,2,3,4;Monitored_switch=1,2,3,4;Resource_project_name=;Resource_dependencies=proxy2-HAS-rs;Resource_dependencies_weak=;Resource_dependencies_restart=;Resource_dependencies_offline_restart=;Extension;Start_command=/opt/SUNWsczone/sczbt/bin/start_sczbt -R proxy2-zone-rs -G proxy2-rg -P /opt/ParameterFile;Stop_command=/opt/SUNWsczone/sczbt/bin/stop_sczbt -R proxy2-zone-rs -G proxy2-rg -P /opt/ParameterFile;Probe_command=/opt/SUNWsczone/sczbt/bin/probe_sczbt -R proxy2-zone-rs -G proxy2-rg -P /opt/ParameterFile;Network_aware=FALSE;Stop_signal=9
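For readers unfamiliar with the file above: it is a flat text format, one "KEY VALUE" pair per line, where resource entries (the RS_* lines) pack their properties into a semicolon-separated string. A minimal sketch of parsing it into a dictionary, assuming whitespace-separated keys and values as shown in the dump (this is just an illustration, not a supported Sun Cluster API; the cluster itself manages these files through the CCR):

```python
# Minimal sketch: parse a Sun Cluster CCR resource-group file into a dict.
# Assumes the flat format shown above: each line is "KEY<whitespace>VALUE",
# where VALUE may be empty (e.g. RG_description) and RS_* resource lines
# carry a semicolon-separated "key=value" property string.

def parse_ccr(text):
    config = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        parts = line.split(None, 1)      # split on the first run of whitespace
        key = parts[0]
        value = parts[1] if len(parts) > 1 else ""
        if key.startswith("RS_"):
            # Resource entry: "Type=...;Type_version=...;..." -> nested dict
            props = {}
            for item in value.split(";"):
                if "=" in item:
                    k, v = item.split("=", 1)
                    props[k] = v
                elif item:
                    props[item] = None    # bare markers such as "Extension"
            config[key] = props
        else:
            config[key] = value
    return config

sample = """ccr_gennum 6
Unmanaged FALSE
Nodelist 1,2,3,4
RS_proxy2-HAS-rs Type=SUNW.HAStoragePlus:6;Extension;FilesystemMountPoints=/opt/zones/mail"""

cfg = parse_ccr(sample)
print(cfg["Nodelist"])                   # 1,2,3,4
print(cfg["RS_proxy2-HAS-rs"]["Type"])   # SUNW.HAStoragePlus:6
```

Never edit the files under /etc/cluster/ccr by hand; use the cluster administration commands so the CCR generation number and checksum stay consistent across nodes.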
LEARN ABOUT HP-UX
cmdeleteconf(1m)
NAME
cmdeleteconf - Delete either the cluster or the package configuration
SYNOPSIS
cmdeleteconf [-f] [-v] [-c cluster_name] [[-p package_name]...]
DESCRIPTION
cmdeleteconf deletes either the entire cluster configuration, including all its packages, or only the specified package configuration. If
neither cluster_name nor package_name is specified, cmdeleteconf deletes the local cluster's configuration and all its packages. If the
local node's cluster configuration is outdated, cmdeleteconf without any argument only deletes the local node's configuration. If only
package_name is specified, the configuration of package_name in the local cluster is deleted. If both cluster_name and package_name are
specified, the package must be configured in cluster_name, and only the package package_name is deleted. cmdeleteconf with only
cluster_name specified deletes the entire cluster configuration on all the nodes in the cluster, regardless of the configuration version.
The local cluster is the cluster to which the node running the cmdeleteconf command belongs.
Only a superuser, whose effective user ID is zero (see id(1) and su(1)), can delete the configuration.
To delete the cluster configuration, halt the cluster first. To delete a package configuration you must halt the package first, but you do
not need to halt the cluster (it may remain up or be brought down). To delete the package VxVM-CVM-pkg (HP-UX only), you must first delete
all packages with STORAGE_GROUP defined.
While deleting the cluster, if any of the cluster nodes are powered down, the user can choose to continue deleting the configuration. In
this case, the cluster configuration on the down node will remain in place and, therefore, be out of sync with the rest of the cluster. If
the powered-down node ever comes up, the user should run cmdeleteconf with no arguments on that node to clean up the configuration before
running any other Serviceguard command.
Options
cmdeleteconf supports the following options:
-f Force the deletion of either the cluster configuration or the package configuration.
-v Verbose output will be displayed.
-c cluster_name
Name of the cluster to delete. The cluster must already be halted before its configuration can be deleted.
-p package_name
Name of an existing package to delete from the cluster. The package must already be halted. To specify a package_name of
VxVM-CVM-pkg (HP-UX only), all packages with STORAGE_GROUP defined must be deleted first.
RETURN VALUE
Upon completion, cmdeleteconf returns one of the following values:
0 Successful completion.
1 Command failed.
EXAMPLES
The high availability environment contains the cluster, clusterA , and a package, pkg1.
To delete package pkg1 in clusterA, do the following:
cmdeleteconf -f -c clusterA -p pkg1
To delete the cluster clusterA and all its packages, do the following:
cmdeleteconf -f -c clusterA
AUTHOR
cmdeleteconf was developed by HP.
SEE ALSO
cmcheckconf(1m), cmapplyconf(1m), cmgetconf(1m), cmmakepkg(1m), cmquerycl(1m).
Requires Optional Serviceguard Software