HL7 Comm 0.8.5 (Default branch)


 
Posted 07-18-2008

HL7 Comm is a stand-alone integration tool written in Java that lets you send and receive HL7 messages over a TCP/IP MLLP connection. It offers a simple mode for testing and a configured mode for running as a full-fledged integration client, either with or without a GUI.

License: GNU General Public License (GPL)

Changes: A bug in the logging mechanism in the 0.8 series has been corrected. A new InboundTrigger mechanism has been added to support applications that may wish to poll other services without a normal triggering event.
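For readers new to MLLP: the lower layer protocol simply wraps each HL7 message in a leading 0x0B byte and a trailing 0x1C 0x0D pair before it goes onto the TCP connection, and the receiver replies with an acknowledgement framed the same way. A minimal C sketch of that framing (socket setup and error recovery omitted; send_mllp is an illustrative helper, not part of HL7 Comm):

    /* Send one HL7 message over an already-connected TCP socket using
     * MLLP framing: <VT> message <FS><CR>. */
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int send_mllp(int sockfd, const char *hl7_msg)
    {
        const char start  = 0x0B;               /* <VT>  start-of-block  */
        const char end[2] = { 0x1C, 0x0D };     /* <FS><CR> end-of-block */
        size_t len = strlen(hl7_msg);

        if (write(sockfd, &start, 1) != 1)
            return -1;
        if (write(sockfd, hl7_msg, len) != (ssize_t)len)
            return -1;
        if (write(sockfd, end, 2) != 2)
            return -1;
        return 0;   /* the caller then reads the framed ACK */
    }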



4 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

HL7 MLLP Sender in C

Hi everyone, this is a pretty big request, but I was wondering if anyone out there has a program written in C, Perl, Tcl, or whatever that can be executed from the command line and will read HL7 messages from a newline-delimited file and send them to the specified host/port using the MLLP HL7 TCP... (1 Reply)
Discussion started by: troym72

2. UNIX for Dummies Questions & Answers

help in comm command

Hi all, I need help with the comm command. I have 2 files. I have to display the lines common to the two files only once, and I also have to display the non-common lines. tmpcut1 -- First file cat tmpcut1 smstr_303000_O_432830_... f_c2_queue_sys30.sys30 RUNNING 10 1000... (1 Reply)
Discussion started by: arunkumar_mca

3. Shell Programming and Scripting

comm with a variable

Hello all, I have two flat files that are colon-delimited, and I am trying to run a compare (Solaris v8 ksh) of $1 within a script to access a MySQL database based on the results. Unix is telling me that it has to have physical file names. Is there a way to run a compare using variables? This is... (3 Replies)
Discussion started by: gozer13

4. Shell Programming and Scripting

comm ?!

Hi, I have two large files with uid's: - 581004 File1.txt - 292675 File2.txt. I want to know which uid's are in File1.txt and not in File2.txt. I have used comm -23 File1.txt File2.txt. This should do the trick, I thought. But in the output I keep having uid's in File1.txt that are also in... (8 Replies)
Discussion started by: tine
MPI_Recv(3OpenMPI)														MPI_Recv(3OpenMPI)

NAME
       MPI_Recv - Performs a standard-mode blocking receive.

SYNTAX
       C Syntax
           #include <mpi.h>
           int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                int source, int tag, MPI_Comm comm, MPI_Status *status)

       Fortran Syntax
           INCLUDE 'mpif.h'
           MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
                <type>   BUF(*)
                INTEGER  COUNT, DATATYPE, SOURCE, TAG, COMM
                INTEGER  STATUS(MPI_STATUS_SIZE), IERROR

       C++ Syntax
           #include <mpi.h>
           void Comm::Recv(void* buf, int count, const Datatype& datatype,
                int source, int tag, Status& status) const

           void Comm::Recv(void* buf, int count, const Datatype& datatype,
                int source, int tag) const
INPUT PARAMETERS
       count     Maximum number of elements to receive (integer).

       datatype  Datatype of each receive buffer entry (handle).

       source    Rank of source (integer).

       tag       Message tag (integer).

       comm      Communicator (handle).
OUTPUT PARAMETERS
       buf       Initial address of receive buffer (choice).

       status    Status object (status).

       IERROR    Fortran only: Error status (integer).
DESCRIPTION
       This basic receive operation, MPI_Recv, is blocking: it returns only after the receive buffer contains the newly received message. A
       receive can complete before the matching send has completed (of course, it can complete only after the matching send has started).
       The blocking semantics of this call are described in Section 3.4 of the MPI-1 Standard, "Communication Modes."

       The receive buffer contains a number (defined by the value of count) of consecutive elements. The first element in the set of elements
       is located at address buf. The type of each of these elements is specified by datatype.

       The length of the received message must be less than or equal to the length of the receive buffer. An MPI_ERR_TRUNCATE error is
       returned upon the overflow condition. If a message that is shorter than the receive buffer arrives, then only those locations
       corresponding to the (shorter) received message are modified.
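       As an illustration of the blocking receive described above, a minimal C sketch (the tag value 17 and the 100-element buffer are
       arbitrary choices for the example):

           /* Rank 0 sends 100 integers to rank 1, which posts the matching
            * blocking receive.  Run with at least two processes, e.g.
            * mpirun -np 2 ./a.out */
           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
               int rank, i, buf[100];
               MPI_Status status;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               if (rank == 0) {
                   for (i = 0; i < 100; i++)
                       buf[i] = i;
                   MPI_Send(buf, 100, MPI_INT, 1, 17, MPI_COMM_WORLD);
               } else if (rank == 1) {
                   /* Returns only after all 100 entries are in buf. */
                   MPI_Recv(buf, 100, MPI_INT, 0, 17, MPI_COMM_WORLD, &status);
                   printf("rank 1 got %d..%d from rank %d\n",
                          buf[0], buf[99], status.MPI_SOURCE);
               }

               MPI_Finalize();
               return 0;
           }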
NOTES
       The count argument indicates the maximum number of entries of type datatype that can be received in a message. Once a message is
       received, use the MPI_Get_count function to determine the actual number of entries within that message. To receive messages of unknown
       length, use the MPI_Probe function. (For more information about MPI_Probe and MPI_Cancel, see their respective man pages; also, see
       Section 3.8 of the MPI-1 Standard, "Probe and Cancel.")

       A message can be received by a receive operation only if it is addressed to the receiving process, and if its source, tag, and
       communicator (comm) values match the source, tag, and comm values specified by the receive operation. The receive operation may
       specify a wildcard value for source and/or tag, indicating that any source and/or tag are acceptable. The wildcard value for source is
       source = MPI_ANY_SOURCE. The wildcard value for tag is tag = MPI_ANY_TAG. There is no wildcard value for comm. The scope of these
       wildcards is limited to the processes in the group of the specified communicator.

       The message tag is specified by the tag argument of the receive operation. The argument source, if different from MPI_ANY_SOURCE, is
       specified as a rank within the process group associated with that same communicator (remote process group, for intercommunicators).
       Thus, the range of valid values for the source argument is {0,...,n-1} ∪ {MPI_ANY_SOURCE}, where n is the number of processes in this
       group.

       Note the asymmetry between send and receive operations: A receive operation may accept messages from an arbitrary sender; a send
       operation, on the other hand, must specify a unique receiver. This matches a "push" communication mechanism, where data transfer is
       effected by the sender (rather than a "pull" mechanism, where data transfer is effected by the receiver).

       Source = destination is allowed; that is, a process can send a message to itself. However, it is not recommended for a process to send
       messages to itself using the blocking send and receive operations described above, since this may lead to deadlock. See Section 3.5 of
       the MPI-1 Standard, "Semantics of Point-to-Point Communication."

       If your application does not need to examine the status field, you can save resources by using the predefined constant
       MPI_STATUS_IGNORE as a special value for the status argument.
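       A sketch of the unknown-length pattern mentioned in the first paragraph above, assuming the sender uses tag 42 and MPI_DOUBLE entries
       (both arbitrary choices for illustration):

           /* Probe for a pending message, size the buffer with MPI_Get_count,
            * then receive it. */
           #include <mpi.h>
           #include <stdlib.h>

           double *recv_unknown_length(int source, MPI_Comm comm, int *count)
           {
               MPI_Status status;
               double *buf;

               /* Block until a matching message is available, without receiving it. */
               MPI_Probe(source, 42, comm, &status);

               /* How many MPI_DOUBLE entries does the pending message carry? */
               MPI_Get_count(&status, MPI_DOUBLE, count);

               buf = malloc(*count * sizeof(double));
               MPI_Recv(buf, *count, MPI_DOUBLE, status.MPI_SOURCE, status.MPI_TAG,
                        comm, MPI_STATUS_IGNORE);
               return buf;   /* caller frees */
           }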
ERRORS
       Almost all MPI routines return an error value: C routines as the value of the function and Fortran routines in the last argument. C++
       functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job, except
       for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
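       A sketch of the MPI_ERRORS_RETURN approach described above; checked_recv is an illustrative wrapper, not part of the MPI API:

           /* Make MPI_COMM_WORLD return error codes instead of aborting, then
            * check the result of a receive. */
           #include <mpi.h>
           #include <stdio.h>

           int checked_recv(void *buf, int count, MPI_Datatype type, int src, int tag)
           {
               char msg[MPI_MAX_ERROR_STRING];
               int rc, len;

               MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

               rc = MPI_Recv(buf, count, type, src, tag, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
               if (rc != MPI_SUCCESS) {
                   MPI_Error_string(rc, msg, &len);
                   fprintf(stderr, "MPI_Recv failed: %s\n", msg);
               }
               return rc;
           }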
SEE ALSO
       MPI_Irecv
       MPI_Probe

Open MPI 1.2                                 March 2007                                MPI_Recv(3OpenMPI)