I am trying to parse a file that looks like the below:
Quote:
2012/09/10 12:18:18: username@192.168.1.1: OPERATION user (<event_n>blah</event_n><column>username</column><old_val>blabla</old_val><new_val>newblablah</new_value><time>1347270053954</time><new_val></new_val>[XML file continues..]</event_data>) - succeeded
There are thousands of lines like the above and the file is expected to run into hundreds of thousands.
The issue I have is the mixed format of the file. If it were pure XML, I would use xmllint to parse it. At the moment I am using nawk to parse the XML and convert it into a flat file, piping that into sed to remove unwanted characters, and then piping the result into nawk again to parse the flat file. Sample code is below:
The awk file present.awk is as follows:
The output file looks like this:
Quote:
Date & time : 2012/09/10 08:43:01
Login : username@192.168.1.1
Ops: UPDATE
Mod : users
Det:
username old_val blah new_val newblablah time old_val 1386782 new_value 1347270053954 RESULT succeeded
---------------------------------------------------------------------
I have the following queries/concerns regarding my work:
1. In the file present.awk, if you look at the final print statement (print $11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24) you will see that the output is not neat, because the fields are separated by tabs. There are multiple tabs between the fields and sometimes nothing is printed at all. So my question is this: when using a tab as the field separator, can I treat a run of multiple tabs as a single tab? How do I express this in the FS statement?
2. Going forward, my plan is to extract the previous day's data from the file each day. The plan is to compute yesterday's date, grep the log file for that date, and pipe the result into the code above. Is there a better way that avoids the grep and the pipe?
3. I have avoided pipes as much as possible to keep the run time down, but I could not avoid the ones above. Is there any way to do this parsing without pipes? I also have a nagging feeling that there should be a better way than the painstaking work of finding which fields correspond to the data I need and then printing those particular fields from awk.
4. Once I have the output file above, is there anything I can do to convert it into a format that is easily readable from Windows? I would like to add some logos and page breaks to the file. Is this possible?
I would be grateful if you could take some time to help me with this.
Quote:
Originally Posted by goddevil
When using a tab as a field separator, can I ignore multiple tabs and consider them as one tab? How do I do this in the FS statement?
Yes. Set FS to a regular expression, e.g. FS = "[ :@\t]+". Note that inside a bracket expression | and + are ordinary literal characters, so something like FS="[ |:|@||\t+]" would match literal | and + signs; the + has to sit outside the brackets to mean "one or more", which is what collapses a run of tabs into a single separator.
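A minimal sketch of the difference, runnable in any POSIX awk (the sample data is made up):

```shell
# With a regex FS, a run of tabs/spaces counts as one separator.
printf 'a\t\tb\t\t\tc\n' | awk 'BEGIN { FS = "[ \t]+" } { print NF, $1, $2, $3 }'
# -> 3 a b c

# With a plain single-tab FS, the empty fields between tabs survive.
printf 'a\t\tb\t\t\tc\n' | awk 'BEGIN { FS = "\t" } { print NF }'
# -> 6
```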
Quote:
Originally Posted by goddevil
2. Going forward, my plan is to extract data daily from the file for the previous day. My plan is to get the previous day's date, grep the log file for this date and then pipe the result into the above code. Is there a better way to do this and avoid the grep and pipe?
Usually you can avoid piping a grep result into awk by using awk's own pattern filtering, as in /string to capture/ { ... awk processing ... }
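For question 2, the grep can be folded into awk like this. A sketch only: "logfile" is a placeholder name, and `date -d yesterday` is GNU date syntax, so on Solaris you would have to compute the previous day some other way.

```shell
#!/bin/sh
# Yesterday's date in the log's format (GNU date syntax, an assumption).
yday=$(date -d yesterday +%Y/%m/%d)

# Let awk do the filtering that grep did: only lines that start with
# yesterday's date are processed. index() avoids escaping the slashes
# that a regex match would need.
awk -v d="$yday" 'index($0, d) == 1 { print }' logfile
```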
Quote:
Originally Posted by goddevil
3. I have refrained from using pipes as much as possible to reduce the time complexity but i couldnt avoid the above pipes. Is there anyway i can do the parsing above without using any pipes?
Most probably.
Quote:
Originally Posted by goddevil
4. Once i get the above output file, is there anything i can do to convert the file into a format that would be easily readable from windows? I would like to add some logos and page breaks to the file. Is this possible?
I am not a Windows guy, but I would imagine this could be done.
First I should note that treating a multi-character FS as a regular expression is a POSIX feature; the historic awk still shipped on some systems honours only a single-character FS, so check which version you have.
For a really complicated line like this, you can change FS on the fly and re-split a line by assigning back to $0. You could do the same with arrays and split(), but it gets ugly when you nest it too deeply.
First I split on ( and ) to extract the XML data, then I split on < to separate the tags from each other. I extract the string data in a loop and cram it into an array.
Then I split on whitespace, dashes, and commas, cramming all the data that wasn't processed before into $0.
Lastly I set FS back to [()] to get ready for the next line.
Not a complete solution since it's not clear where all your data is coming from, but should be enough for you to fill in the blanks:
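The steps above can be sketched like this against the sample line from the first post. The field positions are guesses from that one line, and "logfile" is a placeholder, so treat both as assumptions:

```shell
awk 'BEGIN { FS = "[()]" }
{
    pre = $1           # "date time: login: OPERATION user "
    xml = $2           # the XML fragment between the parentheses

    # Re-split the XML part on "<" by assigning back to $0.
    FS = "<"; $0 = xml
    det = ""
    for (i = 1; i <= NF; i++)
        # keep "name>value" pairs, skip closing tags like "/old_val>"
        if (split($i, kv, ">") == 2 && kv[2] != "" && kv[1] !~ /^\//)
            det = det kv[1] " " kv[2] " "

    # Re-split the non-XML part on runs of blanks, colons and @.
    FS = "[ \t:@]+"; $0 = pre
    print "Date & time :", $1, $2 ":" $3 ":" $4
    print "Login :", $5 "@" $6
    print "Ops:", $7
    print "Mod :", $8
    print "Det:", det

    FS = "[()]"        # ready for the next input line
}' logfile
```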
I can't make something which works for your data if you don't post a representative sample. I can try, but it's a game of blind man's bluff.
It was more of a general question, not specific to my example. I was playing around with the script to improve the output and it occurred to me to replace the blanks with ~ and then use that as the FS. Then I got to wondering whether I can use a regexp for the substitution range in the sed command.
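On the sed side: yes, both endpoints of an address range can be regular expressions, and the command then applies only to the lines inside the range. A small sketch with made-up markers:

```shell
# The s/// fires only on lines between the /START/ and /END/ addresses.
printf 'one\nSTART\nfoo bar\nEND\nfoo baz\n' |
    sed '/START/,/END/ s/foo/FOO/'
# -> one
#    START
#    FOO bar
#    END
#    foo baz
```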