Full Discussion: I/O bound computing clusters
Post 302692459 by figaro on Monday 27th of August 2012 04:39:13 PM
Thank you again for your response.
Our backbone is 1 Gbit/s as far as I know, but I would have to check. The bigger issue is the nodes on the WAN: we would be lucky to sustain 1 MB/s on those lines. That means we should consider compressing the results before transfer and decompressing them on arrival.
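For illustration, a minimal sketch of that compression step in Python (the function names and the sample payload are made up; zlib level 6 is just a reasonable default CPU/bandwidth trade-off):

    import zlib

    def pack_results(payload, level=6):
        """Compress a result blob before shipping it over the slow WAN link."""
        return zlib.compress(payload, level)

    def unpack_results(blob):
        """Decompress a result blob on the receiving node."""
        return zlib.decompress(blob)

    # Illustrative only: repetitive, text-like results compress very well,
    # which matters far more on a slow WAN line than the extra CPU cost does.
    raw = b"sample result row\n" * 500000
    packed = pack_results(raw)
    print("%d -> %d bytes" % (len(raw), len(packed)))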
I will have our administrator look into setting up NFS exports on the available nodes.
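For reference, a minimal sketch of what that might look like on the FreeBSD side (the export path and network range are placeholders for whatever the administrator chooses):

    # /etc/exports -- export a hypothetical results directory read-only
    /data/results -ro -network 192.168.1.0 -mask 255.255.255.0

    # /etc/rc.conf -- enable the NFS server at boot
    rpcbind_enable="YES"
    nfs_server_enable="YES"
    mountd_enable="YES"

After editing those files the NFS services have to be started; mountd rereads /etc/exports on a HUP signal.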
Our application stack is fairly standard: FreeBSD 8.x with a C++/MySQL/Python application. That should eliminate the diversity problem, although it does not rule out hardware issues. We may even have to assign jobs greedily, giving each job to the node with the fastest CPU first.
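A minimal sketch of that greedy assignment in Python (node names and benchmark scores are invented for illustration): keep the idle nodes in a heap keyed by CPU speed and always pop the fastest one for the next job.

    import heapq

    # Hypothetical per-node CPU benchmark scores; higher means faster.
    nodes = {"node1": 2400, "node2": 3100, "node3": 1800}

    # heapq is a min-heap, so store negated scores to pop the fastest node first.
    idle = [(-score, name) for name, score in nodes.items()]
    heapq.heapify(idle)

    def assign(job):
        """Greedily hand the next job to the fastest idle node."""
        score, name = heapq.heappop(idle)
        print("%s -> %s" % (job, name))
        return name

    def release(name):
        """Put a node back in the idle pool once its job finishes."""
        heapq.heappush(idle, (-nodes[name], name))

    for job in ("job-a", "job-b", "job-c"):
        assign(job)

Under these made-up scores, job-a goes to node2 (3100), job-b to node1 (2400), and job-c to node3 (1800).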
 

orte-clean(1)                        Open MPI                        orte-clean(1)

NAME
       orte-clean - Cleans up any stale processes and files leftover from
       Open MPI jobs.

SYNOPSIS
       orte-clean [--verbose]
       mpirun --pernode [--host | --hostfile file] orte-clean [--verbose]

OPTIONS
       -v | --verbose
              Run the command in verbose mode, printing the universes that
              are being cleaned up as well as the processes that are being
              killed.

DESCRIPTION
       orte-clean attempts to clean up any processes and files left over
       from Open MPI jobs that were run in the past, as well as from any
       currently running jobs.  This includes OMPI infrastructure and helper
       commands, any processes that were spawned as part of the job, and any
       temporary files.  orte-clean will only act upon processes and files
       that belong to the user running the orte-clean command.  If run as
       root, it will kill off processes belonging to any user.

       When run from the command line, orte-clean will attempt to clean up
       the local node it is run from.  When launched via mpirun, it will
       clean up the nodes selected by mpirun.

EXAMPLES
       Example 1: Clean up the local node only.

           example% orte-clean

       Example 2: To clean up on a specific set of nodes specified on the
       command line, it is recommended to use the --pernode option.  This
       will run one orte-clean for each node.

           example% mpirun --pernode --host node1,node2,node3 orte-clean

       To clean up on a specific set of nodes from a file:

           example% mpirun --pernode --hostfile nodes_file orte-clean

       Example 3: Within a resource-managed environment like N1GE, SLURM, or
       Torque.  The following example is from N1GE.  First, we see that we
       have two nodes with two CPUs each:

           example% qsh -pe orte 4
           example% mpirun -np 4 hostname
           node1
           node1
           node2
           node2

       Clean up all the nodes in the cluster:

           example% mpirun --pernode orte-clean

       Clean up a subset of the nodes in the cluster:

           example% mpirun --pernode --host node1 orte-clean

SEE ALSO
       orterun(1), orte-ps(1)

1.4.5                              Feb 10, 2012                      orte-clean(1)