Building Linux cluster for mechanical engineering software
Post 302918831 by DGPickett on Thursday 25th of September 2014 05:44:39 PM
Just a few thoughts:
  • Don't overlook Lustre, the high-bandwidth distributed file system (an NFS-like global namespace, but parallel); see the mount sketch after this list.
  • VMs are a run in the opposite direction, but for some things they can be appropriate. Watch your reliability and administrative models, as more VMs is just that much more load on them.
  • Clusters are usually homogeneous. There are other tactics for distributed processing that are more heterogeneous-friendly.
  • A remote desktop like VNC often performs much better than forwarded X, because the X server runs local to the application, keeping X round-trip latency low; see the session sketch after this list.
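
Two quick sketches of the Lustre and VNC points above. First, a minimal client-side Lustre mount; the MGS node name (mgs01) and filesystem name (lfs1) are hypothetical:

    # Mount Lustre filesystem lfs1, served by MGS node mgs01, on a compute node
    mount -t lustre mgs01@tcp0:/lfs1 /mnt/lustre

Second, a typical VNC session, which keeps the X server on the same machine as the application (display :1 and the host name are arbitrary):

    # On the cluster node: start a VNC server (an X server local to your jobs)
    vncserver :1
    # On your workstation: attach to that display
    vncviewer clusternode:1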
 

4 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

building and running software in different linux kernel versions

My query is: if I build software on a specific Linux kernel and then try to run it on another Linux kernel, what are the possible problems, and what errors are most likely to appear when running the binary on an updated version of Linux? (1 Reply)
Discussion started by: mobydick
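
A quick way to probe the most common failure mode (the binary requiring newer glibc symbol versions than the target system provides); the binary name ./myapp is hypothetical:

    # List the shared libraries the binary expects to resolve at run time
    ldd ./myapp
    # Show the glibc symbol versions it requires; a mismatch shows up as a
    # "version `GLIBC_2.x' not found" error at launch
    objdump -T ./myapp | grep GLIBC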

2. High Performance Computing

Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris

Provides a description of how to set up a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris. More... (0 Replies)
Discussion started by: Linux Bot

3. High Performance Computing

Building a Linux Virtual Server cluster

Hi guys, I'm busy building an LVS-NAT cluster on Red Hat server 5.1 and I need a kernel that has LVS capabilities for Red Hat server 5.1. Can anyone advise me where I can get this kernel? I have already visited the following site, Ultra Monkey, and it has old kernels e.g. 2.4.20... (2 Replies)
Discussion started by: Linux Duke
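
Once a kernel with IPVS support is in place, the LVS-NAT forwarding itself is configured on the director with ipvsadm; the addresses below are hypothetical:

    # Define a virtual HTTP service with round-robin scheduling
    ipvsadm -A -t 192.168.1.100:80 -s rr
    # Add a real server behind it, forwarded via NAT (masquerading)
    ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.2:80 -m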

4. Red Hat

Free Cluster software with Red Hat Linux 5.0

Hi, I would like to know whether any free cluster software comes with the Red Hat Enterprise Linux media, or whether it needs to be purchased separately. (3 Replies)
Discussion started by: manoj.solaris
scgdevs(1M)						  System Administration Commands					       scgdevs(1M)

NAME
     scgdevs - global devices namespace administration script

SYNOPSIS
     /usr/cluster/bin/scgdevs

DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

     The scgdevs command manages the global devices namespace. The global devices namespace is mounted under the /global directory and consists of a set of logical links to physical devices. As the /dev/global directory is visible to each node of the cluster, each physical device is visible across the cluster. This fact means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed from any node in the cluster.

     The scgdevs command enables you to attach new global devices (for example, tape drives, CD-ROM drives, and disk drives) to the global-devices namespace without requiring a system reboot. You must run the devfsadm command before you run the scgdevs command.

     Alternatively, you can perform a reconfiguration reboot to rebuild the global namespace and attach new global devices. See the boot(1M) man page for more information about reconfiguration reboots.

     You must run this command from a node that is a current cluster member. If you run this command from a node that is not a cluster member, the command exits with an error code and leaves the system state unchanged.

     You can use this command only in the global zone.

     You need solaris.cluster.system.modify RBAC authorization to use this command. See the rbac(5) man page.

     You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run the su command to assume a role. You can also use the pfexec command to issue privileged Sun Cluster commands.
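EXAMPLES
     A minimal sketch of the sequence described above, attaching a newly added drive without a reboot (run from a current cluster member, under a role with the Sun Cluster Commands rights profile):

         # Rebuild the /devices and /dev entries for the new hardware first
         devfsadm

         # Then attach the new devices to the global-devices namespace
         /usr/cluster/bin/scgdevs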
EXIT STATUS
     The following exit values are returned:

     0         The command completed successfully.

     nonzero   An error occurred. Error messages are displayed on the standard output.
FILES
     /devices              Device nodes directory

     /global/.devices      Global devices nodes directory

     /dev/md/shared        Solaris Volume Manager metaset directory
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWsczu                     |
     +-----------------------------+-----------------------------+
     |Interface Stability          |Evolving                     |
     +-----------------------------+-----------------------------+
SEE ALSO
     pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL), boot(1M), devfsadm(1M), su(1M), did(7)

     Sun Cluster System Administration Guide for Solaris OS
NOTES
     The scgdevs command, called from the local node, performs its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work clusterwide.

     This document does not constitute an API. The /global/.devices directory and the /devices directory might not exist or might have different contents or interpretations in a future release. The existence of this notice does not imply that any other documentation that lacks this notice constitutes an API. This interface should be considered an unstable interface.

Sun Cluster 3.2                           10 Apr 2006                           scgdevs(1M)