UNIX for Advanced & Expert Users: Asynchronous resource sharing between processes?
Post 302894829 by linuxpenguin on Thursday 27th of March 2014 01:25:23 PM
For asynchronous sharing, shared memory is what I would think of.

For synchronized access you would have to use a combination of shared memory and semaphores. I believe you need the synchronization if you are both reading from and writing to the shared pages.
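Below is a minimal sketch of that shared memory + semaphore combination (my own illustration, not from the thread): a counter lives in a POSIX shared memory object, and a named semaphore serializes the read-modify-write. The object names /demo_shm and /demo_sem are made up for the example. On Linux, build with cc demo.c -o demo -lrt -lpthread and run several instances to watch the serialized updates.

/* Sketch: cross-process counter in POSIX shared memory, guarded by
 * a named semaphore. Names /demo_shm and /demo_sem are illustrative. */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create (or open) the shared memory object and size it; a new
     * object is zero-filled, so the counter starts at 0. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(int)) == -1) { perror("ftruncate"); return 1; }

    int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED) { perror("mmap"); return 1; }

    /* Named semaphore with initial value 1: a mutex across processes. */
    sem_t *lock = sem_open("/demo_sem", O_CREAT, 0600, 1);
    if (lock == SEM_FAILED) { perror("sem_open"); return 1; }

    sem_wait(lock);              /* enter critical section */
    (*counter)++;                /* read-modify-write on the shared page */
    printf("counter is now %d\n", *counter);
    sem_post(lock);              /* leave critical section */

    /* Close local handles; the objects persist until some process
     * calls shm_unlink("/demo_shm") and sem_unlink("/demo_sem"). */
    sem_close(lock);
    munmap(counter, sizeof(int));
    close(fd);
    return 0;
}

For purely asynchronous sharing (a single writer, or readers that tolerate stale data), you can drop the semaphore and just map the object; the locking only becomes necessary once multiple processes both read and write the same pages.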
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

resource management

Hi all, I would like to know which other tools I can use besides top & sar to track system resources. I heard of something that sounds like acamdmin or acsadm... Thanks for your help. (1 Reply)
Discussion started by: yelalouf

2. Solaris

resource worries

When I run the prstat -a command I get the following output for user oracle:

 NPROC USERNAME  SIZE   RSS MEMORY      TIME  CPU
   118 oracle     70G   30G   100%   4:38:03  52%

The reading under the "MEMORY" heading is 100%. What does this mean? I hope it doesn't mean user oracle is using... (2 Replies)
Discussion started by: soliberus

3. IP Networking

sharing of IP address for load sharing avoiding virtual server & redirection machine

I have RedHat 9.0 installed on three of my servers (PIII - 233MHz) and want them to share a common IP address so that any request made reaches each of the servers. Can anyone suggest how I should set up my LAN? I'm new to networking in Linux, so please elaborate; I would be thankful for a timely... (2 Replies)
Discussion started by: Rakesh Ranjan

4. UNIX for Advanced & Expert Users

Monitoring Processes - Killing hung processes

Is there a way to monitor certain processes and kill them if they hang too long, while letting certain scripts that are expected to take a long time keep running? Thank you, Richard (4 Replies)
Discussion started by: ukndoit

5. Solaris

Identifying and grouping OS processes and APP processes

Hi, is there an easy way to identify and group currently running processes into OS processes and APP processes? Not all applications are installed as packages. Any free tools or scripts to do this? Many thanks. (2 Replies)
Discussion started by: wilsonee

6. Filesystems, Disks and Memory

Processes sharing.......

What are the differences between processes sharing variables, memory pages, or files? Is one safer than another? (1 Reply)
Discussion started by: MS_CC

7. Shell Programming and Scripting

Finding the age of a unix process, killing old processes, killing zombie processes

I had issues with processes locking up. This script checks for processes and kills them if they are older than a certain time. It uses some functions you'll need to define or remove, like slog(), which I use for logging, and is_running(), which checks if this script is already running so you can... (0 Replies)
Discussion started by: sukerman

8. Programming

Sharing a serial port among multiple processes

I am creating a daemon in Unix that will have exclusive access to a serial port "/dev/tty01". I am planning to create a master-slave process paradigm where there is one master (the daemon) and multiple slaves. I was thinking of having a structure in shared memory where the slaves can... (2 Replies)
Discussion started by: zacharoni16

9. Solaris

Global and non-global zone resource sharing - tricky

Hi all, just a simple question but I can't find the answer in the book. In my global zone, assuming I have 4 CPUs (psrinfo -pv = 0-3), if I set dedicated-cpu (ncpus=2) for my local zone, is my global zone left with 2 CPUs or still 4 CPUs? Does localzone "resource reservation, e.g. cpu in... (6 Replies)
Discussion started by: javanoob
MPI_Finalize(3OpenMPI)

NAME
       MPI_Finalize - Terminates MPI execution environment.

SYNTAX
       C Syntax
           #include <mpi.h>
           int MPI_Finalize()

       Fortran Syntax
           INCLUDE 'mpif.h'
           MPI_FINALIZE(IERROR)
               INTEGER IERROR

       C++ Syntax
           #include <mpi.h>
           void Finalize()

OUTPUT PARAMETER
       IERROR    Fortran only: Error status (integer).

DESCRIPTION
       This routine cleans up all MPI states. Once this routine is called, no MPI routine (not even MPI_Init) may be called, except for MPI_Get_version, MPI_Initialized, and MPI_Finalized. Unless there has been a call to MPI_Abort, you must ensure that all pending communications involving a process are complete before the process calls MPI_Finalize. If the call returns, each process may either continue local computations or exit without participating in further communication with other processes. At the moment when the last process calls MPI_Finalize, all pending sends must be matched by a receive, and all pending receives must be matched by a send.

       MPI_Finalize is collective over all connected processes. If no processes were spawned, accepted, or connected, then this means it is collective over MPI_COMM_WORLD. Otherwise, it is collective over the union of all processes that have been and continue to be connected.

NOTES
       All processes must call this routine before exiting. All processes will still exist but may not make any further MPI calls. MPI_Finalize guarantees that all local actions required by communications the user has completed will, in fact, occur before it returns. However, MPI_Finalize guarantees nothing about pending communications that have not been completed; completion is ensured only by MPI_Wait, MPI_Test, or MPI_Request_free combined with some other verification of completion.

       For example, a successful return from a blocking communication operation or from MPI_Wait or MPI_Test means that the communication is completed by the user and the buffer can be reused, but does not guarantee that the local process has no more work to do. Similarly, a successful return from MPI_Request_free with a request handle generated by an MPI_Isend nullifies the handle but does not guarantee that the operation has completed. The MPI_Isend is complete only when a matching receive has completed.

       If you would like to cause actions to happen when a process finishes, attach an attribute to MPI_COMM_SELF with a callback function. Then, when MPI_Finalize is called, it will first execute the equivalent of an MPI_Comm_free on MPI_COMM_SELF. This will cause the delete callback function to be executed on all keys associated with MPI_COMM_SELF in an arbitrary order. If no key has been attached to MPI_COMM_SELF, then no callback is invoked. This freeing of MPI_COMM_SELF happens before any other parts of MPI are affected. Calling MPI_Finalized will thus return "false" in any of these callback functions. Once you have done this with MPI_COMM_SELF, the results of MPI_Finalize are not specified.

ERRORS
       Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.

Open MPI 1.2                September 2006                MPI_Finalize(3OpenMPI)
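As a worked illustration of the MPI_COMM_SELF technique described in NOTES (a sketch of mine, not part of the man page; it assumes an MPI-2 implementation such as Open MPI), the delete callback below runs as the first step of MPI_Finalize:

#include <mpi.h>
#include <stdio.h>

/* Delete callback: invoked when MPI_COMM_SELF is freed, i.e. at the
 * start of MPI_Finalize, while MPI calls are still permitted. */
static int on_finalize(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    printf("finalize callback: MPI is still usable here\n");
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int keyval;
    /* Create a keyval whose delete function is our callback, then
     * attach it (with a dummy NULL value) to MPI_COMM_SELF. */
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, on_finalize,
                           &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);

    printf("doing work\n");
    MPI_Finalize();   /* fires on_finalize before tearing MPI down */
    return 0;
}

Build with mpicc and run under mpirun; each rank prints the callback message once during MPI_Finalize.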