I've written a daemon in bash that waits for a HUP signal, does some processing, and then waits for the next HUP. It goes something like this:
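A minimal sketch of the pattern being described; the flag name, the 30-second timer, and the self-signaling "client" used to demo one iteration are assumptions for illustration, not the poster's exact code:

```shell
#!/bin/bash
# Sketch: wait for a HUP by backgrounding a timer and waiting on it.
# (Flag name, timeout, and the self-sent HUP are assumptions.)
got_hup=0
trap 'got_hup=1' HUP

# Simulated client: send ourselves a HUP shortly after we start waiting.
( sleep 1; kill -HUP $$ ) &

sleep 30 &            # background the timer so a trapped signal...
timer=$!
wait "$timer"         # ...interrupts this wait immediately
kill "$timer" 2>/dev/null

if [ "$got_hup" -eq 1 ]; then
    echo "got HUP"    # real processing would go here
fi
```

A real daemon would wrap the `sleep`/`wait` pair in a `while` loop and reset the flag each time round; the question below is about the window between testing the flag and re-entering the wait.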
I couldn't figure out a way to get bash to store up signals during a critical piece of code. The above was the best I could get, but my concern is: what if I receive a HUP *after* this test:
but before the next instruction:
In that scenario, the wait will block for 30 seconds or until another HUP is received, i.e. it would have missed one HUP.
How can I improve this so there's no chance of missing a HUP?
I don't think it's ever going to miss a HUP; the trap will still set the variable. The loop may just detect it up to 30 seconds late.
There's no point backgrounding sleep 30 when all you're going to do is wait for it. You only catch the case where the HUP arrives in the instant between running wait and actually waiting, and you open three other windows where the HUP could creep in before you're ready.
A C application would use sigprocmask() to temporarily block a signal, but the shell doesn't have this... hmm...
How about a pipe?
The shell will create a FIFO and try to read from it. Since no other process ever writes to it, the read will block. When a HUP arrives, the trap deletes the FIFO and the interrupted read fails, releasing the shell. Once it's done processing, the shell will create a new FIFO and start over.
Redirecting stderr is necessary since the failing read causes a bit of error spam. You can redirect into >&5 if you still need to write to stderr.
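A sketch of the FIFO trick as described; the FIFO path, the use of fd 5 for the saved stderr, and the self-sent HUP that simulates a client are assumptions, and a real daemon would loop forever rather than once:

```shell
#!/bin/bash
# Sketch: block on reading a FIFO nobody writes to; a trapped HUP
# deletes the FIFO and the failed read releases the shell.
# (Path, fd number, and the simulated client are assumptions.)
fifo=/tmp/hupwait.$$

exec 5>&2 2>/dev/null        # keep real stderr on fd 5, silence the error spam
trap 'rm -f "$fifo"' HUP

# Simulated client: one HUP arrives while we are blocked on the FIFO.
( sleep 1; kill -HUP $$ ) &

handled=0
while [ "$handled" -lt 1 ]; do     # a real daemon would loop forever
    mkfifo "$fifo"
    read -r _ < "$fifo"            # blocks; the HUP trap deletes the FIFO
    handled=$((handled + 1))       # and the interrupted read falls through
    echo "handled a HUP" >&5       # processing goes here
done
rm -f "$fifo"
```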
Tested this in lots of shells and two OSes so it looks decently portable.
Last edited by Corona688; 08-18-2011 at 02:24 PM..
The trouble with using a FIFO is that it's not going to support multiple clients attempting to contact the daemon simultaneously ...
Maybe I've just reached the limit of what is achievable in bash.
Well, you can't do that with POSIX signals in any environment. If multiple clients signal the daemon simultaneously using the same signal number, the daemon may receive that signal only once. Whereas if the daemon reads a FIFO instead, multiple clients can write to it without their messages being lost. And as long as each message is no larger than PIPE_BUF bytes, multiple clients writing to the same FIFO will each have their message written atomically, without interleaving.
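A sketch of that multi-client FIFO approach; the path, message text, and the read-write descriptor the "daemon" holds open (so writers neither block nor race against a missing reader) are assumptions:

```shell
#!/bin/bash
# Sketch: several clients write short messages to one FIFO; each line
# of <= PIPE_BUF bytes is written atomically, so lines never interleave.
# (Path, message format, and the fd-3 trick are assumptions.)
fifo=/tmp/requests.$$
mkfifo "$fifo"

# The daemon holds the FIFO open read-write so client opens and writes
# succeed immediately, even before the daemon starts reading.
exec 3<> "$fifo"

# Three "clients" each send one message.
for i in 1 2 3; do
    ( echo "request from client $i" > "$fifo" ) &
done
wait

# The daemon reads one message per client.
count=0
for i in 1 2 3; do
    read -r line <&3
    count=$((count + 1))
done

exec 3<&-
rm -f "$fifo"
echo "received $count messages"
```

The arrival order of the three lines is not guaranteed, but each line arrives whole, which is the atomicity point made above.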