Quote:
Originally Posted by
DreamWarrior
Interesting, but it seems to rely on the sem_post being fair.
True enough. It should be a straightforward queue, like a mutex, as far as I know. The code behaved the same way on BSD, Solaris, and Linux (and was almost alone in that), but I've never built anything on AIX. sem_post is the only IPC primitive that's actually defined as
async-signal-safe, so POSIX semaphores are perhaps more strictly specified than other IPC.
Quote:
Also, while the writer is waiting, the longer he waits, the less reader throughput there is because as he gets "deeper" into the wait he hogs more of the reader locks.
True, up to a point. No matter how you code it (unless you have "stop-the-world" style IPC, as in Java), a write lock must block all new readers and then wait for all the old readers to finish. But when the writer needs to wait 10 times, there's no guarantee it gets the next 10 wakeups
in a row, even if the semaphore is fair.
It fulfills your requirements, though: no blocking at all occurs when a write lock isn't in progress, which keeps reader efficiency very high. If POSIX semaphores work the way I think they do, it won't starve writers either.
There are also System V IPC semaphores, which can do arbitrary subtraction atomically. I have no idea how fair their queuing is, however.
You could also roll your own semaphore capable of arbitrary subtraction, though that would mean either adding one frequently-blocking call per read lock, or building something like a Linux futex out of raw assembly. A futex is a half-userspace semaphore that's
extremely efficient in some circumstances, since the uncontended case involves no system calls at all, just 2-3 instructions.
Quote:
I also assume that for this to work "well" I need to know ahead of time the maximum number of readers I'll have. Otherwise, if I have more readers than the starting count, all extraneous readers are pretty much pointless.
You could do one extra post per new reader thread, and one extra wait per terminating thread, if you're careful about it. Or you could just set the count to a fixed number higher than you'll ever need.