High Performance Computing
Posted by humbletech99 on Tuesday, 11 November 2008, 01:11 PM

I am interested in setting up some High Performance Computing (HPC) clusters and would like to hear people's views and experiences.

I have two requirements:

1. Compute clusters for fast, CPU-intensive computations
2. Storage clusters providing parallel, extensible filesystems spread across many nodes

Both should run across multiple commodity hardware nodes, and ideally both should be Linux/Unix based and open source. A concrete sketch of the kind of compute workload I mean follows.
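To make requirement 1 concrete, here is a minimal sketch of the sort of CPU-bound job I would want to farm out across nodes: a Monte Carlo estimate of pi in C with MPI. It assumes an MPI implementation such as Open MPI or MPICH is installed on the nodes, and the hostfile name in the run command is made up.

/* pi_mc.c - estimate pi by Monte Carlo, work split across MPI ranks.
 * Build: mpicc -O2 -o pi_mc pi_mc.c
 * Run:   mpirun -np 8 --hostfile hosts ./pi_mc   ("hosts" is a made-up hostfile)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in the job */

    const long n = 10000000;                /* samples per rank */
    srand(rank + 1);                        /* different seed per rank */

    long hits = 0;
    for (long i = 0; i < n; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }

    /* Sum the per-rank counts onto rank 0. */
    long total = 0;
    MPI_Reduce(&hits, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.6f using %d ranks\n",
               4.0 * (double)total / ((double)n * size), size);

    MPI_Finalize();
    return 0;
}

For requirement 2, open-source parallel filesystems such as Lustre, GlusterFS, and PVFS2 spread a single POSIX namespace across storage nodes, so jobs like the one above could share input and output files through one mount point.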

Any feedback welcome.
 

sgmgr(1M)

NAME
sgmgr - Serviceguard Manager

SYNOPSIS
sgmgr [ filename ] | [ -s COMserver [ username [ password ] ] [ -c cluster_name ... ] ] [ -s COMserver2 ... ]

Remarks
Serviceguard Manager is the graphical user interface for Serviceguard or Serviceguard Extension for RAC software, Version A.11.12 or later. Serviceguard products are not included in the standard HP-UX operating system.

DESCRIPTION
The sgmgr command starts Serviceguard Manager, the graphical user interface for Serviceguard clusters. Serviceguard Manager can be installed on HP-UX, Linux, or Windows. It can be used to view saved data files of a single cluster, or to see running clusters. To see the "live" cluster map, Serviceguard Manager connects to a Serviceguard node on the same subnet as those clusters, specifically to a part of Serviceguard called the Cluster Object Manager (COM).

Options
sgmgr supports the options listed below. No options are required; if an option is not specified, the user is prompted to supply it after the interface opens.

filename
Open a previously saved or example object data file. The file has the .sgm extension and can display only one cluster. This option cannot be used with any other options.

-s COMserver
Specify the Serviceguard node that will be the server. This node's COM will query cluster objects running on its subnets, and will report their status and configuration. Servers with Serviceguard Version A.11.12 or later can monitor clusters; servers with Serviceguard Version A.11.14 or later can also perform administrative actions; servers with Version A.11.16 or later can also configure clusters. To specify multiple sessions, repeat the -s option.

username
The user login name for the COMserver node. Valid only if COMserver is specified.

password
The user password on the COMserver node. Valid only if username is specified.

In creating the map, the COMserver will include the cluster where it is a member.

-c cluster_name
In creating the map, the COMserver will report information about the specified cluster_name(s). Specify clusters with the following cluster software installed: MC/Serviceguard Version A.10.10 or later, MC/LockManager Version A.11.02 or later, Serviceguard OPS or Extension for RAC Version A.11.08 or later, and all versions of MetroClusters and ContinentalClusters. To see several clusters, repeat the -c option.

If you specify the unused-nodes option, all COMservers will report information about nodes that have Serviceguard installed but are not currently configured in any cluster.

To connect to another COMserver for another session, repeat the -s option.

AUTHOR
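EXAMPLES
A sketch of a typical invocation, using only the options named above (-s for the COM server, -c for a cluster); the node and cluster names here are hypothetical, and sgmgr prompts for the username and password if they are not supplied:

sgmgr -s node1.example.com -c cluster1

This connects to the COM on node1.example.com and maps the cluster named cluster1. Repeat -s for additional sessions, or -c for additional clusters.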
sgmgr was developed by HP.

SEE ALSO
See documents at http://docs.hp.com/hpux/ha, including: Managing Serviceguard; Configuring OPS Clusters with Serviceguard OPS Edition; Using Serviceguard Extension for Real Application Cluster (RAC).

Series 700 or 800: works with optional Serviceguard or Serviceguard Extension for RAC software.