Quote:
How would you propose to achieve that?
Locking the log file: if that is the approach for such an I/O-intensive operation, it is not one to be appreciated.
I once deliberately switched log-file locking in a threaded application from shared-memory resource locking to a simple file-locking method, and performance came down drastically just because every writer had to wait for the log file to be unlocked (the process stalled until the file lock on the log file was released).
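To make that concrete, here is a minimal sketch of the kind of file locking I mean, assuming POSIX flock() and a logger that opens, locks, writes, and closes on every call; the blocking flock() call is exactly where the writers stall:

Code:
/* Minimal sketch of the slow file-locking approach (POSIX flock() assumed).
 * Every writer blocks in flock() until the current holder releases the lock,
 * which serializes -- and stalls -- I/O-intensive logging. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/file.h>

static void locked_log(const char *path, const char *msg)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return;
    flock(fd, LOCK_EX);            /* stalls until the log file is free */
    write(fd, msg, strlen(msg));
    write(fd, "\n", 1);
    flock(fd, LOCK_UN);            /* next waiting writer may proceed */
    close(fd);
}

int main(void)
{
    locked_log("lf", "every call pays the open/lock/wait cost");
    return 0;
}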
Since no other constraints were mentioned about having one more process or additional resource utilization, I am suggesting this idea. (Not sure whether it's creative or trustworthy.)
To get the logs from 'n' different processes into a single log file 'lf':
a) stamp the log messages (the precision to be decided based on how frequently messages are dumped consecutively) so that they are redirected to a temp log file 'temp_lf' (a writer sketch is at the end of this post)
b) have either another iterative process or a standard utility parse the temp file 'temp_lf' and feed the actual log file 'lf'. That is the reason I asked for the log messages to be stamped: it makes the parsing easier. (A merge sketch also follows at the end of this post.)
c) once that is done, periodically clean the 'temp_lf' file.
d) apart from erratic/strange behaviour, I really don't see any situation/condition where you would have to worry about losing even a single byte.
(Oops! Jim had already mentioned time-stamping.)
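Here is a minimal sketch of the writer side from steps a) and c): each process stamps its messages with a nanosecond-precision timestamp plus its pid and appends them to its own private temp file, so there is no lock to contend for. The temp_lf.<pid> naming and the stamp format are my assumptions, just to make the parsing step concrete:

Code:
/* Writer sketch: each process appends time-stamped lines to its own
 * temp_lf.<pid> file -- a private file, so appends never contend. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void temp_log(const char *msg)
{
    char path[64];
    struct timespec ts;
    FILE *fp;

    snprintf(path, sizeof path, "temp_lf.%d", (int)getpid());
    fp = fopen(path, "a");
    if (!fp)
        return;
    clock_gettime(CLOCK_REALTIME, &ts);
    /* fixed-width stamp: lexical order == time order, easy to parse */
    fprintf(fp, "%010ld.%09ld %d %s\n",
            (long)ts.tv_sec, (long)ts.tv_nsec, (int)getpid(), msg);
    fclose(fp);
}

int main(void)
{
    temp_log("worker started");
    temp_log("worker finished");
    return 0;
}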
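And a sketch of the iterative process from step b): since each temp_lf.<pid> file is already in time order, a plain k-way merge on the stamp prefix rebuilds the global order in 'lf'. Because the stamps sort lexically, the standard-utility route would simply be sort -m temp_lf.* >> lf; the C version below spells the idea out:

Code:
/* Merge sketch: k-way merge of already-sorted temp files into 'lf',
 * ordered by the leading fixed-width timestamp (file names assumed). */
#include <stdio.h>
#include <string.h>

#define MAXFILES 16
#define MAXLINE  512

int main(int argc, char **argv)
{
    FILE *in[MAXFILES], *out;
    char  line[MAXFILES][MAXLINE];
    int   live[MAXFILES];
    int   n = argc - 1, i;

    if (n < 1 || n > MAXFILES)
        return 1;
    out = fopen("lf", "a");
    if (!out)
        return 1;
    for (i = 0; i < n; i++) {
        in[i] = fopen(argv[i + 1], "r");
        live[i] = in[i] && fgets(line[i], MAXLINE, in[i]) != NULL;
    }
    for (;;) {
        int min = -1;
        for (i = 0; i < n; i++)        /* pick the earliest stamp */
            if (live[i] && (min < 0 || strcmp(line[i], line[min]) < 0))
                min = i;
        if (min < 0)
            break;                     /* all inputs drained */
        fputs(line[min], out);
        live[min] = fgets(line[min], MAXLINE, in[min]) != NULL;
    }
    for (i = 0; i < n; i++)
        if (in[i])
            fclose(in[i]);
    fclose(out);
    return 0;
}

Run it as ./merge temp_lf.*; step c) then amounts to truncating each temp file once its lines are safely in 'lf'.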