Quote:
Hi, bakunin.
I don't know how fast your machine is, but you are dealing with several processes here, so those will definitely take up some time. If I understand this, we want to minimize real time to avoid loss of data. You didn't mention how large the file was. If it's really large, this might not work.
The machine is an LPAR on an IBM p570 with 2 physical CPUs (4 logical CPUs). The data volume is as follows:
~10k lines per day
~4MB per day
The file is the garbage collector log of a JVM (the machine is running some WebSphere 6.1 application servers) and the logfile is in XML format. That means the lines are not written at constant intervals, but always a bunch of lines at a time (one "paragraph", so to speak). The information units I want to separate each start with an "<af>" tag and end with an "</af>" tag.
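For illustration, a minimal sed sketch of how such complete units could be pulled out (the file name gc.log is just a placeholder, and on a real IBM JVM the opening tag usually carries attributes, hence the unanchored pattern):

Code:
  # print the <af> ... </af> units and drop everything between them;
  # note: if the last unit is still open, sed prints it up to EOF anyway,
  # so the presence of the closing tag still has to be verified separately
  sed -n '/<af/,/<\/af>/p' gc.log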
Quote:
Memory is obviously faster than disk, so I suggest creating a perl script to slurp in the file, perhaps have several subroutines (if you like modularity) to take the place of the seds, etc., and write out the results. Even if you copy the perl lists a few times internally, that's still a real-time savings over disk access.
Not at all! perl is definitely way slower than sed, by about a factor of 10. I came to this conclusion in my last project, where I had to change database dumps (frighteningly huge files): replacing the perl programs doing it with sed sped up the process greatly.
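For what it's worth, that kind of comparison is easy to redo; this sketch assumes a dump file and a substitution pattern (both placeholders) and simply times the two equivalent filters:

Code:
  # same global substitution, once in sed and once in perl
  time sed 's/OLDOWNER/NEWOWNER/g' dump.sql > /dev/null
  time perl -pe 's/OLDOWNER/NEWOWNER/g' dump.sql > /dev/null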
As I see it, the critical part is only between lines 11 and 12 of the code snippet. All the previous operations work from line 1 up to some predetermined line x of the file, and it won't hurt if additional lines come in during that time.
As an additional requirement I have to preserve the inode of the file, because the process which writes to it (the garbage collector of the JVM) holds it open and will continue to write into it. This is why I used "cat > ..." instead of "mv ...".
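A minimal sketch of the difference, with placeholder file names: redirecting into the existing file truncates and rewrites it in place, so the inode survives, whereas mv would give the name a new inode and the GC would keep writing to the old, now-unlinked file:

Code:
  ls -i gc.log               # note the inode number
  cat gc.log.rest > gc.log   # truncate and rewrite in place
  ls -i gc.log               # same inode as before

  # mv gc.log.rest gc.log    # would replace the inode - the JVM would
                             # keep writing into the old, invisible file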
bakunin