OK, for a real-time app, whatever you do has to be FAST, and if the resources aren't available for your processing thread to do the logging, I'm assuming you'll just have to skip logging that event.
Just as importantly, whatever logging you add has to have deterministic performance - the logging call should cost the processing thread the same overhead every single time. That number should be as small as possible, but it can't be small most of the time with an occasional long-running hang waiting for resources.
So something like Boost is probably the last thing you should be looking into using. Do you know what's going on inside the Boost libraries? And how using them will impact your real-time processing in every single case? No, you don't. If you could get the entire Boost development team together and ask them, "Tell me exactly what happens when I use this Boost functionality in every possible use case," even the Boost developers themselves almost certainly wouldn't be able to tell you.
Using syslog() might meet your needs, but I'd measure the overhead of doing so.
If I were to roll my own, I'd probably try something like this:
1. Define a fixed-size binary structure to hold the data for one event - a timestamp, an event id, and whatever values you need to capture.
2. Define event ids for all the events you want to collect data from.
3. Create a pipe of some kind - named pipe, SYS V message queue, etc.
4. For each event you time, fill in one of those structures.
5. After filling in the structure, write the binary structure to the pipe using a non-blocking, low-level call such as a plain write(). Do NOT use C++ stream I/O here - iostreams can buffer, allocate memory, and lock internally, none of which is deterministic. Again, this needs to be non-blocking so that if whatever pipe/queue you're writing to is full, you don't wait - you drop the event. It also needs to be atomic, which is why YOU need to make the write() call (or whatever) directly in YOUR code.
6. Write another process that reads the binary data from the pipe (should be easy since the reader knows the size of the binary structure - read() the proper number of bytes and get a filled-in struct), translates it into a readable log message, and logs it. Or maybe you just log the raw binary data and translate it offline later depending on your space needs.
You might want to add more data to the structure.
One thing to watch out for, especially if you implement something like a simple write() to a Unix pipe: partial write()s. For example, if your structure comes out to be 32 bytes, maybe the pipe is almost full and write() only puts 24 bytes in the pipe. You might want to do something like start each structure with a magic number int field and put a value in there that hopefully can't be duplicated in any other field. Then the reading/logging process can use that magic number field to resync with the binary stream of data.
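A minimal resync helper for the reader side might look like this. The record layout and magic value are illustrative and would have to match whatever the writer actually uses:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

#define TRACE_MAGIC 0xFEEDFACEu

struct trace_event {            /* must match the writer exactly */
    uint32_t magic;
    uint32_t event_id;
    uint64_t tstamp_ns;
    uint64_t arg1;
    uint64_t arg2;
};

/* Scan a byte buffer for the next record boundary.  Returns the
 * offset of the first complete record that starts with the magic
 * value, or -1 if the buffer doesn't hold a full record yet.  Any
 * bytes before the returned offset are debris from a partial write
 * and can be discarded. */
ssize_t find_record(const unsigned char *buf, size_t len)
{
    for (size_t off = 0; off + sizeof(struct trace_event) <= len; off++) {
        uint32_t m;
        memcpy(&m, buf + off, sizeof m);   /* avoid unaligned access */
        if (m == TRACE_MAGIC)
            return (ssize_t)off;
    }
    return -1;
}
```

The reader keeps appending whatever read() returns to a buffer, calls this to find the next boundary, consumes one record, and repeats - so a single torn record costs you one event, not the whole stream.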
Another issue: make sure your processing code and your logging code use the same size for the binary structure. Compiler optimization flags can change data alignments, which can change structure sizes. So if you aggressively optimize the processing code but not the logging code, the processing code could be using a binary structure that's 48 bytes because of alignment padding while the logging process could be using a non-padded structure that's 32 bytes.
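One defensive sketch for that: put the structure in a single header shared by both programs, fix the packing explicitly, and assert the expected size at compile time. This uses C11 _Static_assert, and the packed attribute shown is GCC/Clang-specific (MSVC uses #pragma pack); the names are illustrative:

```c
#include <stdint.h>

/* Shared header used by both the real-time process and the log
 * reader.  Fixing the layout explicitly and asserting the size at
 * compile time means an optimization or alignment flag can't
 * silently change it in one program but not the other. */
struct trace_event {
    uint32_t magic;
    uint32_t event_id;
    uint64_t tstamp_ns;
    uint64_t arg1;
    uint64_t arg2;
} __attribute__((packed));      /* GCC/Clang; MSVC: #pragma pack(1) */

/* Fails the build in any translation unit that sees a different
 * size, instead of silently corrupting the stream at runtime. */
_Static_assert(sizeof(struct trace_event) == 32,
               "trace_event layout changed - fix reader and writer");
```

With the assert in the shared header, a mismatch becomes a build error in whichever program drifted, which is far cheaper to debug than a desynchronized binary stream.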