A serious application using MPI???? — Post 302785483 by DGPickett on Monday 25th of March 2013, 05:14:51 PM
Well, to exploit parallelism, you can pipeline or multi-serve. Pipelining means doing parts of the problem in multiple modules that pull work from one MPI stream and push product to another. The multi-serve concept means dividing the problem in rotation or by internal information: when problem parts arrive at a dispatcher, it sends them to different parallel, identical instances. The division can be in rotation, on a queue-depth basis when service duration is variable, or based on a message value, like a stock symbol to drive a book and match engine, or two account digits to divide the flow into 100 streams.

From what I saw of MPI, it lacks the transactional nature of IBM's MQ, which allows restarts and supports RDBMS interfaces. And when you have scatter, you need gather, so MPI processes that collect data from many sources can merge them, perhaps in a binary tree of stream merges.

MPI supports loosely coupled systems, and can be used to manage memory and process dispatching, where the dispatcher provides CPUs to threads emptying fuller queues and filling emptier queues. One idea I had was a high-speed sorting container, where the input was heavily parallel-buffered for quick writing and the sorting was dynamically divided into parallel streams followed by a merge tree. MPI can also be used to flow data to a deep buffer at a point where file I/O can support data aggregation, to flow data into or out of an RDBMS, and to flow data to and from files. Parallel flat-file reading is not that difficult, and for fixed arrays of data in flat files, parallel writing is also possible.

MPI supports heterogeneous processing, so you could code in Java and run on a mix of platforms: PCs, Macs, SPARCs, etc. The mind boggles.
 

MPIL_Request_set_name(3)					      LAM/MPI						  MPIL_Request_set_name(3)

NAME
       MPIL_Request_set_name - LAM/MPI-specific function to set a string name on an MPI_Request

SYNOPSIS
       #include <mpi.h>
       int MPIL_Request_set_name(MPI_Request req, char *name)

INPUT PARAMETERS
       req  - MPI_Request (handle)
       name - Name

NOTES
       The name must be a null-terminated string. It is copied into internal storage during this call.

ERRORS
       If an error occurs in an MPI function, the current MPI error handler is called to handle it. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Errhandler_set; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned (in C and Fortran; this error handler is less useful with the C++ MPI bindings. The predefined error handler MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error value needs to be recovered). Note that MPI does not guarantee that an MPI program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines as the value of the function and Fortran routines in the last argument. The C++ bindings for MPI do not return error values; instead, error values are communicated by throwing exceptions of type MPI::Exception (but not by default). Exceptions are only thrown if the error value is not MPI::SUCCESS. Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI functions will return upon an error, there will be no way to recover what the actual error value was.

       MPI_SUCCESS - No error; MPI routine completed successfully.
       MPI_ERR_ARG - Invalid argument. Some argument is invalid and is not identified by a specific error class. This is typically a NULL pointer or other such error.

SEE ALSO
       MPIL_Request_get_name

LOCATION
       mpil_rsetname.c

LAM/MPI 7.1.4							     6/24/2006						  MPIL_Request_set_name(3)