MPI_Allreduce(3) LAM/MPI MPI_Allreduce(3)
NAME
MPI_Allreduce - Combines values from all processes and distributes the result back to all processes
SYNOPSIS
#include <mpi.h>

int MPI_Allreduce(void *sbuf, void *rbuf, int count,
                  MPI_Datatype dtype, MPI_Op op, MPI_Comm comm)

INPUT PARAMETERS
sbuf - starting address of send buffer (choice)
count - number of elements in send buffer (integer)
dtype - data type of elements of send buffer (handle)
op - operation (handle)
comm - communicator (handle)
OUTPUT PARAMETER
rbuf - starting address of receive buffer (choice)
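EXAMPLE
The following is a minimal sketch of a typical use: every process contributes one
integer and all processes receive the sum.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process contributes its rank; afterwards, every process
           holds the sum of all ranks in the communicator. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("Rank %d: sum of all ranks is %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }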
USAGE WITH IMPI EXTENSIONS
The IMPI extensions have been implemented for this function; it is legal to call it
on IMPI communicators.
NOTES FOR FORTRAN
All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional
argument ierr at the end of the argument list. ierr is an integer and has the same
meaning as the return value of the routine in C. In Fortran, MPI routines are
subroutines and are invoked with the call statement.

All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
NOTES ON COLLECTIVE OPERATIONS
The reduction functions (MPI_Op) do not return an error value. As a result, if the
functions detect an error, all they can do is either call MPI_Abort or silently skip
the problem. Thus, if you change the error handler from MPI_ERRORS_ARE_FATAL to
something else (e.g., MPI_ERRORS_RETURN), then no error may be indicated.
The reason for this is the performance problems that arise in ensuring that all collective
routines return the same error value.
ERRORS
If an error occurs in an MPI function, the current MPI error handler is called to
handle it. By default, this error handler aborts the MPI job. The error handler may
be changed with MPI_Errhandler_set ; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned (in C and Fortran; this error
handler is less useful with the C++ MPI bindings. The predefined error handler
MPI::ERRORS_THROW_EXCEPTIONS should be used in C++ if the error value needs to be
recovered). Note that MPI does not guarantee that an MPI program can continue past
an error.
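For example, a minimal sketch that switches MPI_COMM_WORLD to MPI_ERRORS_RETURN and
inspects the value returned by MPI_Allreduce (MPI_Error_string converts an error
value into a readable message):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, sum, err;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Report errors through return values instead of aborting. */
        MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        err = MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (err != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;

            MPI_Error_string(err, msg, &len);
            fprintf(stderr, "MPI_Allreduce failed: %s\n", msg);
        }

        MPI_Finalize();
        return 0;
    }

Recall from the notes above that an error detected inside the reduction function
itself may not be reported this way.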
All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines
return it as the value of the function and Fortran routines in the last argument.
The C++ bindings for MPI do not return error values; instead, error values are
communicated by throwing exceptions of type MPI::Exception (but not by default).
Exceptions are only thrown if the error value is not MPI::SUCCESS.

Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI functions will
return upon an error, there will be no way to recover what the actual error value
was.
- Invalid communicator. A common error is to use a null communicator in a call
  (not even allowed in MPI_Comm_rank).
- Invalid communicator. LAM/MPI does not yet support invoking collectives on
  intercommunicators.
- A communicator that contains some non-local IMPI procs was used for some function
  which has not yet had the IMPI extensions implemented. For example, most
  collectives on IMPI communicators have not been implemented yet.
- Invalid buffer pointer. Usually a null buffer where one is not valid.
- Invalid count argument. Count arguments must be non-negative; a count of zero is
  often valid.
- Invalid datatype argument. May be an uncommitted MPI_Datatype (see
  MPI_Type_commit).
- Invalid operation. MPI operations (objects of type MPI_Op) must either be one of
  the predefined operations (e.g., MPI_SUM) or created with MPI_Op_create (a sketch
  of this follows the list). Additionally, only certain datatypes are allowed with
  given predefined operations. See MPI-1, section 4.9.2.
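As a sketch of the last point, the program below creates a commutative user-defined
operation with MPI_Op_create and uses it in MPI_Allreduce. The element-wise integer
maximum computed here duplicates the predefined MPI_MAX and is chosen only for
illustration.

    #include <stdio.h>
    #include <mpi.h>

    /* User-defined reduction: element-wise maximum of two int vectors.
       This signature is required by MPI_Op_create (MPI_User_function). */
    void my_max(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
    {
        int i;
        int *in = (int *) invec;
        int *inout = (int *) inoutvec;

        for (i = 0; i < *len; ++i)
            if (in[i] > inout[i])
                inout[i] = in[i];
    }

    int main(int argc, char *argv[])
    {
        int rank, result;
        MPI_Op op;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Create a commutative user-defined operation (second arg = 1). */
        MPI_Op_create(my_max, 1, &op);
        MPI_Allreduce(&rank, &result, 1, MPI_INT, op, MPI_COMM_WORLD);

        printf("Rank %d: max rank is %d\n", rank, result);

        MPI_Op_free(&op);
        MPI_Finalize();
        return 0;
    }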
MORE INFORMATION
For more information, please see the official MPI Forum web site, which contains the
text of both the MPI-1 and MPI-2 standards. These documents contain detailed
information about each MPI function (most of which is not duplicated in these man
pages).

    http://www.mpi-forum.org/
ACKNOWLEDGEMENTS
The LAM Team would like to thank the MPICH Team for the handy program to generate man
pages ("doctext" from ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz), the initial
formatting, and some initial text for most of the MPI-1 man pages.
LAM/MPI 6.5.8 11/10/2002 MPI_Allreduce(3)