HA Cluster solution for Linux - which one to use?
Post 302444310 by mark54g on Wednesday 11th of August 2010 01:35:26 PM
There's always Linux-HA, with or without Pacemaker and/or DRBD:

http://www.linux-ha.org/wiki/DRBD
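For illustration, a minimal sketch of what that stack might look like. The hostnames, IPs, and device paths are made up, and the syntax is DRBD 8.x plus the crm shell, so adjust for your versions:

Code:
# /etc/drbd.d/r0.res -- replicate /dev/sdb1 between two nodes
resource r0 {
    protocol C;                 # synchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on node1 { address 10.0.0.1:7788; }
    on node2 { address 10.0.0.2:7788; }
}

# Then let Pacemaker manage it as a master/slave (multi-state) resource:
crm configure primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 op monitor interval=30s
crm configure ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true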
 

5 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Clustering solution for RH Linux AS and Solaris x86/AMD 64

Dear All Experts, I would like to know the maturity/stability of Redhat Linux AS 3.0 and Solaris. My organization needs to set up a cluster solution. We are well-versed with Veritas Cluster on Solaris. We are thinking of waiting for certification support from the various ISVs like Oracle, Veritas... (3 Replies)
Discussion started by: izy100

2. Linux

Linux as a NAS solution?

All, I am most familiar with Solaris, and I am in the process of learning Linux (Fedora 5), and one of my tasks is to replace our current NAS solution. We currently use EMC Celerra, but it is way too expensive for what we use it for. So I have looked into Linux. Mostly we have a Windows... (1 Reply)
Discussion started by: kjbaumann

3. Red Hat

Mail solution for Linux - Postfix

Hi, I am a complete newbie to Linux. I have been given a job to install a mail solution (Postfix if possible) and would like to know how I should go about installing it. Can anyone provide me some steps? I have read about Postfix and the installation steps, and it seems there are... (1 Reply)
Discussion started by: anilhk

4. High Performance Computing

Linux cluster

Hi, I am not sure if the term "cluster" fits my situation or not. I have 5 Linux computers, each with a different host-name, and my users have to remember those computer names to log in and find out which computers have free CPUs, so they can run their jobs on the machines with free CPUs. ... (4 Replies)
Discussion started by: hiepng

5. Red Hat

Free Cluster Solution for RHEL5

Can someone tell me a free cluster solution available for RHEL5? We just want to test clustering on RHEL. (6 Replies)
Discussion started by: fugitive
scgdevs(1M)						  System Administration Commands					       scgdevs(1M)

NAME
scgdevs - global devices namespace administration script

SYNOPSIS
/usr/cluster/bin/scgdevs

DESCRIPTION
Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

The scgdevs command manages the global devices namespace. The global devices namespace is mounted under the /global directory and consists of a set of logical links to physical devices. As the /dev/global directory is visible to each node of the cluster, each physical device is visible across the cluster. This fact means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed from any node in the cluster.

The scgdevs command enables you to attach new global devices (for example, tape drives, CD-ROM drives, and disk drives) to the global-devices namespace without requiring a system reboot. You must run the devfsadm command before you run the scgdevs command. Alternatively, you can perform a reconfiguration reboot to rebuild the global namespace and attach new global devices. See the boot(1M) man page for more information about reconfiguration reboots.

You must run this command from a node that is a current cluster member. If you run this command from a node that is not a cluster member, the command exits with an error code and leaves the system state unchanged.

You can use this command only in the global zone.

You need solaris.cluster.system.modify RBAC authorization to use this command. See the rbac(5) man page. You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run the su command to assume a role. You can also use the pfexec command to issue privileged Sun Cluster commands.
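For illustration, the workflow the DESCRIPTION implies looks roughly like this; run it as a suitably authorized role on a current cluster member, in the global zone. The comments are my reading of the text above, not part of the man page:

Code:
# 1. Rebuild the /devices and /dev entries for the newly attached hardware.
devfsadm
# 2. Attach the new devices to the global-devices namespace, no reboot needed.
/usr/cluster/bin/scgdevs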
EXIT STATUS
The following exit values are returned:

0         The command completed successfully.

nonzero   An error occurred. Error messages are displayed on the standard output.

FILES
/devices             Device nodes directory

/global/.devices     Global devices nodes directory

/dev/md/shared       Solaris Volume Manager metaset directory

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

+-----------------------------+-----------------------------+
|       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
+-----------------------------+-----------------------------+
|Availability                 |SUNWsczu                     |
+-----------------------------+-----------------------------+
|Interface Stability          |Evolving                     |
+-----------------------------+-----------------------------+

SEE ALSO
pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL), boot(1M), devfsadm(1M), su(1M), did(7)

Sun Cluster System Administration Guide for Solaris OS

NOTES
The scgdevs command, called from the local node, will perform its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work clusterwide.

This document does not constitute an API. The /global/.devices directory and the /devices directory might not exist or might have different contents or interpretations in a future release. The existence of this notice does not imply that any other documentation that lacks this notice constitutes an API. This interface should be considered an unstable interface.

Sun Cluster 3.2                   10 Apr 2006                       scgdevs(1M)