dispatch_apply_f(3) [mojave man page]

dispatch_apply(3)					   BSD Library Functions Manual 					 dispatch_apply(3)

NAME
     dispatch_apply, dispatch_apply_f -- schedule blocks for iterative
     execution

SYNOPSIS
     #include <dispatch/dispatch.h>

     void
     dispatch_apply(size_t iterations, dispatch_queue_t queue,
         void (^block)(size_t));

     void
     dispatch_apply_f(size_t iterations, dispatch_queue_t queue, void *context,
         void (*function)(void *, size_t));

DESCRIPTION
     The dispatch_apply() function provides data-level concurrency through a
     "for (;;)" loop like primitive:

           size_t iterations = 10;

           // 'idx' is zero indexed, just like:
           // for (idx = 0; idx < iterations; idx++)

           dispatch_apply(iterations, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                   printf("%zu ", idx);
           });

     Although any queue can be used, it is strongly recommended to use
     DISPATCH_APPLY_AUTO as the queue argument to both dispatch_apply() and
     dispatch_apply_f(), as shown in the example above, since this allows the
     system to automatically use worker threads that match the configuration
     of the current thread as closely as possible. No assumptions should be
     made about which global concurrent queue will be used.

     Like a "for (;;)" loop, the dispatch_apply() function is synchronous.
     If asynchronous behavior is desired, wrap the call to dispatch_apply()
     with a call to dispatch_async() against another queue.

     Sometimes, when the block passed to dispatch_apply() is simple, the use
     of striding can tune performance. Calculating the optimal stride is
     best left to experimentation. Start with a stride of one and work
     upwards until the desired performance is achieved (perhaps using a
     power of two search):

           #define STRIDE 3

           dispatch_apply(count / STRIDE, DISPATCH_APPLY_AUTO, ^(size_t idx) {
                   size_t j = idx * STRIDE;
                   size_t j_stop = j + STRIDE;
                   do {
                           printf("%zu ", j++);
                   } while (j < j_stop);
           });

           size_t i;
           for (i = count - (count % STRIDE); i < count; i++) {
                   printf("%zu ", i);
           }

IMPLIED REFERENCES
     Synchronous functions within the dispatch framework hold an implied
     reference on the target queue. In other words, the synchronous function
     borrows the reference of the calling function (this is valid because
     the calling function is blocked waiting for the result of the
     synchronous function, and therefore cannot modify the reference count
     of the target queue until after the synchronous function has returned).

     This is in contrast to asynchronous functions which must retain both
     the block and target queue for the duration of the asynchronous
     operation (as the calling function may immediately release its interest
     in these objects).

FUNDAMENTALS
     dispatch_apply() and dispatch_apply_f() attempt to quickly create
     enough worker threads to efficiently iterate work in parallel. By
     contrast, a loop that passes work items individually to
     dispatch_async() or dispatch_async_f() will incur more overhead and
     does not express the desired parallel execution semantics to the
     system, so may not create an optimal number of worker threads for a
     parallel workload. For this reason, prefer to use dispatch_apply() or
     dispatch_apply_f() when parallel execution is important.

     The dispatch_apply() function is a wrapper around dispatch_apply_f().

CAVEATS
     Unlike dispatch_async(), a block submitted to dispatch_apply() is
     expected to be either independent or dependent only on work already
     performed in lower-indexed invocations of the block. If the block's
     index dependency is non-linear, it is recommended to use a for-loop
     around invocations of dispatch_async().

SEE ALSO
     dispatch(3), dispatch_async(3), dispatch_queue_create(3)

Darwin				  May 1, 2009				  Darwin


dispatch_async(3)					   BSD Library Functions Manual 					 dispatch_async(3)

NAME
     dispatch_async, dispatch_sync -- schedule blocks for execution

SYNOPSIS
     #include <dispatch/dispatch.h>

     void
     dispatch_async(dispatch_queue_t queue, void (^block)(void));

     void
     dispatch_sync(dispatch_queue_t queue, void (^block)(void));

     void
     dispatch_async_f(dispatch_queue_t queue, void *context,
         void (*function)(void *));

     void
     dispatch_sync_f(dispatch_queue_t queue, void *context,
         void (*function)(void *));

DESCRIPTION
     The dispatch_async() and dispatch_sync() functions schedule blocks for
     concurrent execution within the dispatch(3) framework. Blocks are
     submitted to a queue which dictates the policy for their execution.
     See dispatch_queue_create(3) for more information about creating
     dispatch queues.

     These functions support efficient temporal synchronization, background
     concurrency and data-level concurrency. These same functions can also
     be used for efficient notification of the completion of asynchronous
     blocks (a.k.a. callbacks).

TEMPORAL SYNCHRONIZATION
     Synchronization is often required when multiple threads of execution
     access shared data concurrently. The simplest form of synchronization
     is mutual-exclusion (a lock), whereby different subsystems execute
     concurrently until a shared critical section is entered. In the
     pthread(3) family of procedures, temporal synchronization is
     accomplished like so:

           int r = pthread_mutex_lock(&my_lock);
           assert(r == 0);

           // critical section

           r = pthread_mutex_unlock(&my_lock);
           assert(r == 0);

     The dispatch_sync() function may be used with a serial queue to
     accomplish the same style of synchronization. For example:

           dispatch_sync(my_queue, ^{
                   // critical section
           });

     In addition to providing a more concise expression of synchronization,
     this approach is less error prone as the critical section cannot be
     accidentally left without restoring the queue to a reentrant state.

     The dispatch_async() function may be used to implement deferred
     critical sections when the result of the block is not needed locally.
     Deferred critical sections have the same synchronization properties as
     the above code, but are non-blocking and therefore more efficient to
     perform. For example:

           dispatch_async(my_queue, ^{
                   // critical section
           });

BACKGROUND CONCURRENCY
     The dispatch_async() function may be used to execute trivial background
     tasks on a global concurrent queue. For example:

           dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                   // background operation
           });

     This approach is an efficient replacement for pthread_create(3).

COMPLETION CALLBACKS
     Completion callbacks can be accomplished via nested calls to the
     dispatch_async() function. It is important to remember to retain the
     destination queue before the first call to dispatch_async(), and to
     release that queue at the end of the completion callback to ensure the
     destination queue is not deallocated while the completion callback is
     pending. For example:

           void
           async_read(object_t obj, void *where, size_t bytes,
                   dispatch_queue_t destination_queue,
                   void (^reply_block)(ssize_t r, int err))
           {
                   // There are better ways of doing async I/O.
                   // This is just an example of nested blocks.
                   dispatch_retain(destination_queue);

                   dispatch_async(obj->queue, ^{
                           ssize_t r = read(obj->fd, where, bytes);
                           int err = errno;

                           dispatch_async(destination_queue, ^{
                                   reply_block(r, err);
                           });
                           dispatch_release(destination_queue);
                   });
           }

RECURSIVE LOCKS
     While dispatch_sync() can replace a lock, it cannot replace a recursive
     lock. Unlike locks, queues support both asynchronous and synchronous
     operations, and those operations are ordered by definition. A
     recursive call to dispatch_sync() causes a simple deadlock as the
     currently executing block waits for the next block to complete, but
     the next block will not start until the currently running block
     completes.

     As the dispatch framework was designed, we studied recursive locks. We
     found that the vast majority of recursive locks are deployed
     retroactively when ill-defined lock hierarchies are discovered. As a
     consequence, the adoption of recursive locks often mutates obvious
     bugs into obscure ones. This study also revealed an insight: if
     reentrancy is unavoidable, then reader/writer locks are preferable to
     recursive locks. Disciplined use of reader/writer locks enables
     reentrancy only when reentrancy is safe (the "read" side of the lock).

     Nevertheless, if it is absolutely necessary, what follows is an
     imperfect way of implementing recursive locks using the dispatch
     framework:

           void
           sloppy_lock(object_t object, void (^block)(void))
           {
                   if (object->owner == pthread_self()) {
                           return block();
                   }
                   dispatch_sync(object->queue, ^{
                           object->owner = pthread_self();
                           block();
                           object->owner = NULL;
                   });
           }

     The above example does not solve the case where queue A runs on thread
     X which calls dispatch_sync() against queue B which runs on thread Y
     which recursively calls dispatch_sync() against queue A, which
     deadlocks both examples. This is bug-for-bug compatible with
     nontrivial pthread usage. In fact, nontrivial reentrancy is impossible
     to support in recursive locks once the ultimate level of reentrancy is
     deployed (IPC or RPC).

IMPLIED REFERENCES
     Synchronous functions within the dispatch framework hold an implied
     reference on the target queue. In other words, the synchronous function
     borrows the reference of the calling function (this is valid because
     the calling function is blocked waiting for the result of the
     synchronous function, and therefore cannot modify the reference count
     of the target queue until after the synchronous function has
     returned). For example:

           queue = dispatch_queue_create("com.example.queue", NULL);
           assert(queue);
           dispatch_sync(queue, ^{
                   do_something();
                   //dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
           });
           dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()

     This is in contrast to asynchronous functions which must retain both
     the block and target queue for the duration of the asynchronous
     operation (as the calling function may immediately release its
     interest in these objects).

FUNDAMENTALS
     Conceptually, dispatch_sync() is a convenient wrapper around
     dispatch_async() with the addition of a semaphore to wait for
     completion of the block, and a wrapper around the block to signal its
     completion. See dispatch_semaphore_create(3) for more information
     about dispatch semaphores. The actual implementation of the
     dispatch_sync() function may be optimized and differ from the above
     description.

     The dispatch_async() function is a wrapper around dispatch_async_f().
     The application-defined context parameter is passed to the function
     when it is invoked on the target queue.

     The dispatch_sync() function is a wrapper around dispatch_sync_f().
     The application-defined context parameter is passed to the function
     when it is invoked on the target queue.

SEE ALSO
     dispatch(3), dispatch_apply(3), dispatch_once(3),
     dispatch_queue_create(3), dispatch_semaphore_create(3)

Darwin				  May 1, 2009				  Darwin