A serious application using MPI????
Post 302785483 by DGPickett on Monday 25th of March 2013 05:14:51 PM
Well, to exploit parallelism, you can pipeline or multi-serve. Pipelining means doing parts of the problem in multiple modules, each pulling work from one MPI channel and pushing product to another. The multi-serve concept means dividing the problem in rotation or by internal information, so when problem parts arrive at a dispatcher, it sends them to different parallel, identical instances. The division can be in rotation, on a queue-depth basis when service duration is variable, or based on a message value: a stock symbol to drive a book-and-match engine, say, or two account digits to divide the flow into 100 streams.

From what I saw of MPI, it lacks the transactional nature of IBM's MQ, which allows restarts and supports RDBMS interfaces. And when you have scatter, you need gather, so MPI processes that collect data from many sources can merge them, perhaps in a binary tree of stream merges. (A rough sketch of the scatter/gather idea is below.)

MPI supports loosely coupled systems, and it can be used to manage memory and process dispatching, where the dispatcher provides CPUs to threads that empty the fuller queues and fill the emptier ones. One idea I had was a high-speed sorting container, where the input is heavily parallel-buffered for quick writing and the sorting is dynamically divided into parallel streams followed by a merge tree. MPI can also be used to flow data into a deep buffer at a point where file I/O can support data aggregation, to flow data into or out of an RDBMS, and to flow data to and from files. Parallel flat-file reading is not that difficult, and for fixed arrays of data in flat files, parallel writing is also possible.

MPI supports heterogeneous processing, so you could code in Java and run on a mix of platforms: PCs, Macs, SPARCs, etc. The mind boggles.
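To make the multi-serve idea concrete, here is a rough C sketch of a key-routed dispatcher with a gather step. It is only my illustration of the pattern, not code from any real system: NUM_RECORDS, the synthetic account numbers, and the modulo "work" are all placeholders invented for the example.

    #include <mpi.h>
    #include <stdio.h>

    #define NUM_RECORDS 1000   /* invented size, just for the example */
    #define WORK_TAG    0
    #define DONE_TAG    1

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int nworkers = size - 1;   /* rank 0 dispatches; ranks 1..size-1 serve */
        if (nworkers < 1) {
            if (rank == 0) fprintf(stderr, "need at least 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            /* Scatter: route each record by a message value (here the last
             * two account digits) so a given account always lands on the
             * same identical worker instance. */
            for (int i = 0; i < NUM_RECORDS; i++) {
                int account = 1000000 + i * 7;               /* stand-in input */
                int dest = 1 + (account % 100) % nworkers;   /* key-based routing */
                MPI_Send(&account, 1, MPI_INT, dest, WORK_TAG, MPI_COMM_WORLD);
            }
            int stop = 0;
            for (int w = 1; w <= nworkers; w++)
                MPI_Send(&stop, 1, MPI_INT, w, DONE_TAG, MPI_COMM_WORLD);

            /* Gather: merge one partial result per worker, in whatever
             * order the workers finish. */
            long total = 0;
            for (int w = 1; w <= nworkers; w++) {
                long part;
                MPI_Status st;
                MPI_Recv(&part, 1, MPI_LONG, MPI_ANY_SOURCE, WORK_TAG,
                         MPI_COMM_WORLD, &st);
                total += part;
            }
            printf("dispatched %d records, merged total %ld\n",
                   NUM_RECORDS, total);
        } else {
            /* Worker: one of several identical instances serving its slice
             * of the key space; accumulates a partial result locally. */
            long part = 0;
            for (;;) {
                int account;
                MPI_Status st;
                MPI_Recv(&account, 1, MPI_INT, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == DONE_TAG)
                    break;
                part += account % 100;                       /* stand-in "work" */
            }
            MPI_Send(&part, 1, MPI_LONG, 0, WORK_TAG, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Build with mpicc and launch with something like mpirun -np 4 ./a.out. Routing on the key instead of pure rotation is what keeps all traffic for one account on one worker, so per-key ordering survives the fan-out.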
 

MPI_Init_thread(3)                         MPI                        MPI_Init_thread(3)

NAME
       MPI_Init_thread - Initialize the MPI execution environment

SYNOPSIS
       int MPI_Init_thread( int *argc, char ***argv, int required, int *provided )

INPUT PARAMETERS
       argc     - Pointer to the number of arguments
       argv     - Pointer to the argument vector
       required - Level of desired thread support

OUTPUT PARAMETER
       provided - Level of provided thread support

COMMAND LINE ARGUMENTS
       MPI specifies no command-line arguments but does allow an MPI
       implementation to make use of them. See MPI_INIT for a description of
       the command line arguments supported by MPI_INIT and MPI_INIT_THREAD.

NOTES
       The valid values for the level of thread support are:

       MPI_THREAD_SINGLE     - Only one thread will execute.
       MPI_THREAD_FUNNELED   - The process may be multi-threaded, but only
                               the main thread will make MPI calls (all MPI
                               calls are funneled to the main thread).
       MPI_THREAD_SERIALIZED - The process may be multi-threaded, and
                               multiple threads may make MPI calls, but only
                               one at a time: MPI calls are not made
                               concurrently from two distinct threads (all
                               MPI calls are serialized).
       MPI_THREAD_MULTIPLE   - Multiple threads may call MPI, with no
                               restrictions.

NOTES FOR FORTRAN
       Note that the Fortran binding for this routine does not have the argc
       and argv arguments. ( MPI_INIT_THREAD(required, provided, ierror) )

ERRORS
       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
       value; C routines as the value of the function and Fortran routines
       in the last argument. Before the value is returned, the current MPI
       error handler is called. By default, this error handler aborts the
       MPI job. The error handler may be changed with MPI_Comm_set_errhandler
       (for communicators), MPI_File_set_errhandler (for files), and
       MPI_Win_set_errhandler (for RMA windows). The MPI-1 routine
       MPI_Errhandler_set may be used, but its use is deprecated. The
       predefined error handler MPI_ERRORS_RETURN may be used to cause error
       values to be returned. Note that MPI does not guarantee that an MPI
       program can continue past an error; however, MPI implementations will
       attempt to continue whenever possible.

       MPI_SUCCESS   - No error; MPI routine completed successfully.
       MPI_ERR_OTHER - Other error; use MPI_Error_string to get more
                       information about this error code.

SEE ALSO
       MPI_Init, MPI_Finalize

LOCATION
       initthread.c

6/18/2011                                                         MPI_Init_thread(3)
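A minimal usage sketch for the call documented above, assuming a standard MPI installation with mpicc: request MPI_THREAD_FUNNELED and check the provided level before relying on threads. The thread-support constants are ordered (MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED < MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE), so a plain comparison is enough.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for FUNNELED: the process may spawn threads, but only the
         * main thread will ever call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* The implementation may give less than requested; check before
         * assuming any thread support. */
        if (provided < MPI_THREAD_FUNNELED)
            fprintf(stderr, "warning: MPI library only provides level %d\n",
                    provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d initialized with thread level %d\n", rank, provided);

        MPI_Finalize();
        return 0;
    }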