I need your suggestions or feedback on the choice of clustering software for Red Hat Linux nodes running applications that I need to make highly available.
The minimum requirement is 2 nodes; all nodes shall be active-active, running distinct applications, e.g. node 1 runs application ABC and node 2 runs application PQR; node 1 is the backup node for PQR and node 2 for ABC. Nodes 1 and 2 do not require any shared storage; they access a back-end database which is protected by Oracle Data Guard.
I need cost-effective HA with minimum downtime. There are various HA solutions available in the market - SteelEye, Veritas Cluster, Red Hat Cluster Suite, IBM HACMP, etc. - and I have almost narrowed it down to Red Hat Cluster Suite (for being cheaper) vs. Veritas (for being feature-rich).
Any experiences with clustering? Any pros and cons of these two cluster packages (references would be appreciated) would be of great help.
Dear mods, if this thread is not suitable for this sub-forum, kindly advise where to post within this forum.
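For the active-active layout described above, the day-to-day service operations can be sketched with Red Hat Cluster Suite's rgmanager tools. This is a minimal sketch only, assuming RHCS with rgmanager is configured; the service names (abc-svc, pqr-svc) and node names (node1, node2) are hypothetical examples, not part of the original post:

```shell
# Sketch: two failover services, each with a different preferred node.
# Assumes Red Hat Cluster Suite (rgmanager); names are hypothetical.

clustat                          # show cluster members and service state
clusvcadm -e abc-svc -m node1    # enable service ABC, preferring node 1
clusvcadm -e pqr-svc -m node2    # enable service PQR, preferring node 2

# After node1 fails, rgmanager restarts ABC on node2; once node1 returns:
clusvcadm -r abc-svc -m node1    # relocate ABC back to its home node
```

The preferred-node behavior itself (including whether a service fails back automatically) is set per failover domain in cluster.conf, not on the command line.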
Hi, I am not sure if the term "cluster" fits my situation or not. I have 5 Linux computers, each with a different host-name, and my users have to remember the computer names to log in and find out which computers have free CPUs, so they can run their jobs on the computers that have free CPUs.
... (4 Replies)
Hi,
I am a complete newbie to Linux. I have been given the job of installing a mail solution (Postfix if possible) and would like to know how I should go about installing it.
Can anyone provide me with some steps to go about it?
I have read about Postfix and the installation steps; it seems there are... (1 Reply)
All,
I am most familiar with Solaris, and I am in the process of learning Linux (Fedora 5). One of my tasks is to replace our current NAS solution. We currently use EMC Celerra, but it is way too expensive for what we use it for, so I have looked into Linux.
Mostly we have a Windows... (1 Reply)
Dear All Experts,
I would like to know about the maturity/stability of Red Hat Linux AS 3.0 and Solaris.
My organization needs to set up a cluster solution. We are well versed with Veritas Cluster on Solaris.
We are thinking of waiting for certification support from the various ISVs like Oracle, Veritas... (3 Replies)
scgdevs(1M) System Administration Commands scgdevs(1M)
NAME
scgdevs - global devices namespace administration script
SYNOPSIS
/usr/cluster/bin/scgdevs
DESCRIPTION
Note -
Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.
The scgdevs command manages the global devices namespace. The global devices namespace is mounted under the /global directory and consists
of a set of logical links to physical devices. As the /dev/global directory is visible to each node of the cluster, each physical device is
visible across the cluster. This fact means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed
from any node in the cluster.
The scgdevs command enables you to attach new global devices (for example, tape drives, CD-ROM drives, and disk drives) to the global-devices namespace without requiring a system reboot. You must run the devfsadm command before you run the scgdevs command.
Alternatively, you can perform a reconfiguration reboot to rebuild the global namespace and attach new global devices. See the boot(1M) man
page for more information about reconfiguration reboots.
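The no-reboot path above can be sketched as a short session on a cluster node. This is a sketch only, not output from a real system; it assumes you are a current cluster member, in the global zone, with the required rights profile:

```shell
# Sketch: attach a newly connected device to the global-devices
# namespace without rebooting.
devfsadm                     # first, rebuild the local /devices namespace
/usr/cluster/bin/scgdevs     # then extend the global-devices namespace
```

After the second command returns 0 on every node, the new device is reachable from all cluster members under /dev/global.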
You must run this command from a node that is a current cluster member. If you run this command from a node that is not a cluster member,
the command exits with an error code and leaves the system state unchanged.
You can use this command only in the global zone.
You need solaris.cluster.system.modify RBAC authorization to use this command. See the rbac(5) man page.
You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run the su command to assume a role. You can also use the pfexec command to issue privileged Sun Cluster commands.
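The two invocation paths described above can be sketched as follows; the role name clusadm is a hypothetical example, standing in for whatever role has been assigned the Sun Cluster Commands rights profile at your site:

```shell
# Via role assumption: su launches a profile shell for the role
su - clusadm -c /usr/cluster/bin/scgdevs

# Or directly, if your own user may exercise the profile, via pfexec
pfexec /usr/cluster/bin/scgdevs
```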
EXIT STATUS
The following exit values are returned:
0 The command completed successfully.
nonzero An error occurred. Error messages are displayed on the standard output.
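A maintenance script can act on these exit values. The wrapper below is a minimal sketch (the function name is my own, not part of the man page); it takes the command to run as its arguments so the logic can be exercised anywhere, and on a cluster node you would pass /usr/cluster/bin/scgdevs:

```shell
#!/bin/sh
# Minimal sketch: stop a maintenance script when scgdevs reports failure.
update_global_devices() {
    if "$@"; then
        echo "global-devices namespace updated"
    else
        # nonzero exit: error messages were written to standard output
        # and the system state is unchanged
        echo "scgdevs failed; system state unchanged" >&2
        return 1
    fi
}

# On a cluster node: update_global_devices /usr/cluster/bin/scgdevs
```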
FILES
/devices Device nodes directory
/global/.devices Global devices nodes directory
/dev/md/shared Solaris Volume Manager metaset directory
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWsczu |
+-----------------------------+-----------------------------+
|Interface Stability |Evolving |
+-----------------------------+-----------------------------+
SEE ALSO
pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL), boot(1M), devfsadm(1M), su(1M), did(7)
Sun Cluster System Administration Guide for Solaris OS
NOTES
The scgdevs command, called from the local node, will perform its work on remote nodes asynchronously. Therefore, command completion on the
local node does not necessarily mean that the command has completed its work clusterwide.
This document does not constitute an API. The /global/.devices directory and the /devices directory might not exist or might have different
contents or interpretations in a future release. The existence of this notice does not imply that any other documentation that lacks this
notice constitutes an API. This interface should be considered an unstable interface.
Sun Cluster 3.2 10 Apr 2006 scgdevs(1M)