MPI_Comm_join(3)			     LAM/MPI				 MPI_Comm_join(3)

NAME
       MPI_Comm_join - Connect two MPI processes joined by a socket

SYNOPSIS
       #include <mpi.h>
       int MPI_Comm_join(int fd, MPI_Comm *newcomm)

INPUT PARAMETER
       fd     - socket file descriptor

OUTPUT PARAMETER
       newcomm
	      - intercommunicator with client as remote group

DESCRIPTION
       This function only works between two LAM/MPI processes that are connected by a socket.
       They must either have the same endian orientation, or must not have been started with the
       homogeneous flag (-O) to mpirun(1).  Both processes must be in a single LAM universe --
       they must share LAM daemons that are already connected to each other.  That is, they were
       either initially lamboot(1)ed together, or a lamgrow(1) command was given to grow an
       initial LAM universe such that the resulting set includes the two hosts in question.

       Once this call completes successfully, fd may not be used by the caller for any reason.
       LAM will eventually close the socket (possibly as late as during MPI_Finalize(3)).
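
       The following is a minimal sketch, not part of the original LAM documentation, of how
       two processes might establish the socket and then call MPI_Comm_join.  The port number
       is a placeholder, both processes are assumed to already be running under a common LAM
       universe, and error checking on the socket calls is omitted for brevity.

	      #include <string.h>
	      #include <unistd.h>
	      #include <arpa/inet.h>
	      #include <netinet/in.h>
	      #include <sys/socket.h>
	      #include <mpi.h>

	      #define JOIN_PORT 9999    /* placeholder; both sides must agree */

	      /* "Server" process: accept one connection, then join over it. */
	      int join_as_server(MPI_Comm *newcomm)
	      {
		  struct sockaddr_in addr;
		  int listenfd = socket(AF_INET, SOCK_STREAM, 0);

		  memset(&addr, 0, sizeof(addr));
		  addr.sin_family = AF_INET;
		  addr.sin_addr.s_addr = htonl(INADDR_ANY);
		  addr.sin_port = htons(JOIN_PORT);
		  bind(listenfd, (struct sockaddr *) &addr, sizeof(addr));
		  listen(listenfd, 1);

		  int fd = accept(listenfd, NULL, NULL);
		  close(listenfd);

		  /* After a successful join, fd belongs to LAM; the caller
		     must not read, write, or close it. */
		  return MPI_Comm_join(fd, newcomm);
	      }

	      /* "Client" process: connect to the server's address, then join. */
	      int join_as_client(const char *server_ip, MPI_Comm *newcomm)
	      {
		  struct sockaddr_in addr;
		  int fd = socket(AF_INET, SOCK_STREAM, 0);

		  memset(&addr, 0, sizeof(addr));
		  addr.sin_family = AF_INET;
		  addr.sin_port = htons(JOIN_PORT);
		  inet_pton(AF_INET, server_ip, &addr.sin_addr);
		  connect(fd, (struct sockaddr *) &addr, sizeof(addr));

		  return MPI_Comm_join(fd, newcomm);
	      }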

NOTES
       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional
       argument ierr at the end of the argument list.  ierr is an integer and has the same
       meaning as the return value of the routine in C.  In Fortran, MPI routines are
       subroutines, and are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.

       The IMPI standard only supports MPI-1 functions.  Hence, this function is currently not
       designed to operate within an IMPI job.

ERRORS
       If an error occurs in an MPI function, the current MPI error handler is called to handle
       it.  By default, this error handler aborts the MPI job.  The error handler may be changed
       with MPI_Errhandler_set; the predefined error handler MPI_ERRORS_RETURN may be used to
       cause error values to be returned (in C and Fortran; this error handler is less useful
       with the C++ MPI bindings.  The predefined error handler MPI::ERRORS_THROW_EXCEPTIONS
       should be used in C++ if the error value needs to be recovered).  Note that MPI does not
       guarantee that an MPI program can continue past an error.
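
       As an illustrative sketch (not from the original page), installing MPI_ERRORS_RETURN on
       MPI_COMM_WORLD so that subsequent MPI calls report failures through their return values
       instead of aborting the job:

	      #include <mpi.h>

	      int main(int argc, char *argv[])
	      {
		  MPI_Init(&argc, &argv);

		  /* Return error codes to the caller instead of aborting. */
		  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

		  /* ... application code; check the return value of each
		     MPI call from here on ... */

		  MPI_Finalize();
		  return 0;
	      }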

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines
       return it as the value of the function and Fortran routines in the last argument.  The
       C++ bindings for MPI do not return error values; instead, error values are communicated
       by throwing exceptions of type MPI::Exception (but not by default).  Exceptions are only
       thrown if the error value is not MPI::SUCCESS.

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI functions will
       return upon an error, there will be no way to recover what the actual error value was.
       MPI_SUCCESS
	      - No error; MPI routine completed successfully.

       MPI_ERR_ARG
	      - Invalid argument.  Some argument is invalid and is not identified by a specific
	      error class.  This is typically a NULL pointer or other such error.

       MPI_ERR_INTERN
	      - An internal error has been detected.  This is fatal.  Please send a bug report
	      to the LAM mailing list (see http://www.lam-mpi.org/contact.php ).

       MPI_ERR_OTHER
	      - Other error; use MPI_Error_string to get more information about this error code.
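
       A sketch of decoding a nonzero return value with MPI_Error_string.  The checked_join
       helper is hypothetical, fd is assumed to be an already-connected socket descriptor as
       described above, and MPI_ERRORS_RETURN is assumed to have been set as in the earlier
       sketch (otherwise the default handler aborts before the value can be examined):

	      #include <stdio.h>
	      #include <mpi.h>

	      /* Hypothetical helper: join over fd and report any failure. */
	      int checked_join(int fd, MPI_Comm *newcomm)
	      {
		  char msg[MPI_MAX_ERROR_STRING];
		  int msglen;
		  int err = MPI_Comm_join(fd, newcomm);

		  if (err != MPI_SUCCESS) {
		      /* Translate the error code into a readable message. */
		      MPI_Error_string(err, msg, &msglen);
		      fprintf(stderr, "MPI_Comm_join failed: %s\n", msg);
		  }
		  return err;
	      }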

SEE ALSO
       lamboot(1), lamgrow(1), mpirun(1), MPI_Finalize(3)

MORE INFORMATION
       For more information, please see the official MPI Forum web site, which contains the
       text of both the MPI-1 and MPI-2 standards.  These documents contain detailed
       information about each MPI function (most of which is not duplicated in these man
       pages).



LAM/MPI 6.5.8				    11/10/2002				 MPI_Comm_join(3)