How to grep on unique id which has request and response on different lines?


# 1  
Old 06-17-2013
How to grep on unique id which has request and response on different lines?

Hi, I want to find out those unique UIDs from the log file which have both a request and a response.

The log file format is as follows. The log has other irrelevant lines too, but each UID should have a request and a response; I need only those UIDs.
Code:
2013-04-03 10:51:01,808 INFO [Logger] <?xml version="1.0" encoding="UTF-8" standalone="yes"?><&lt;UID&gt;073104c-4e-4ce-bda-694344ee62&lt;/UID&gt;&lt><begin request>
2013-04-03 10:51:02,898 INFO [Logger] <?xml version="1.0" encoding="UTF-8" standalone="yes"?>&lt;UID&gt;073104c-4e-4ce-bda-694344ee62&lt;/UID&gt;&lt>complete reponse>

The output I want is in the below format:
UID responsetime(milliseconds) (end time - start time)
073104c-4e-4ce-bda-694344ee62 1090

Last edited by vbe; 06-17-2013 at 09:27 AM.. Reason: next time use code tags!!
# 2  
Old 06-17-2013
Here is how to get the start and stop time for every UID into arrays and print them.
Code:
awk -F "[;& ]" '
	/begin request/ {st[$13]=$1 " " $2} 
	/complete reponse/ {en[$13]=$1 " " $2} 
	END {
	for (i in st) 
		print i,st[i],en[i]
	}' logfile

Here is how to calculate the difference in milliseconds.
Code:
echo $(( ($(date -d "2013-04-03 08:54:40,889" +%s%N) - $(date -d "2013-04-03 08:54:39,689" +%s%N))/1000/1000  ))

I have some trouble getting it all into awk, but I think it should be possible.
# 3  
Old 06-17-2013
Thanks Jotne ,

Can you please put comments on the first two lines so that I can understand them, as I am new to awk? Thanks a lot for your time.

---------- Post updated at 09:53 AM ---------- Previous update was at 09:14 AM ----------

I think you are trying to get the 13th column of the line; it doesn't work in my case, as the UID is not actually in the 13th column.

---------- Post updated at 10:52 AM ---------- Previous update was at 09:53 AM ----------

Thanks, I got it working. I used gensub inside the st array assignment to filter out the UID.

Last edited by random_thoughts; 06-17-2013 at 10:36 AM..
# 4  
Old 06-18-2013
This should do it; it can surely be shortened some.
Code:
awk -F "[;& ]" '
	/begin request/ {st[$13]=$1 " " $2} 
	/complete reponse/ {en[$13]=$1 " " $2} 
	END {
	for (i in st) {
		split(st[i],s1,"[- :,]")
		s2=s1[1] " " s1[2] " " s1[3] " " s1[4] " " s1[5] " " s1[6] " "
		s3=mktime(s2)
		split(en[i],e1,"[- :,]")
		e2=e1[1] " " e1[2] " " e1[3] " " e1[4] " " e1[5] " " e1[6] " "
		e3=mktime(e2)
		d=(e3*1000+e1[7])-(s3*1000-s1[7])
		print i,d }
	}' file

Code:
073104c-4e-4ce-bda-694344ee62 2706

Can you post your solution?

-F "[;& ]" sets how to split the line (the field separators).
/begin request/ {st[$13]=$1 " " $2} creates an array indexed by field 13 (the UID) and stores the timestamp in it.
e.g. st["073104c-4e-4ce-bda-694344ee62"]="2013-04-03 10:51:01,808"

---------- Post updated 18-06-13 at 07:53 ---------- Previous update was 17-06-13 at 17:31 ----------

Cleaned up the code some.
Code:
awk -F "[;& ]" '
	/begin request/ {st[$13]=$1 " " $2} 
	/complete reponse/ {en[$13]=$1 " " $2} 
	END {
	for (i in st) {
		split(st[i],s,"[- :,]")
		start=mktime(s[1] " " s[2] " " s[3] " " s[4] " " s[5] " " s[6])

		split(en[i],e,"[- :,]")
		end=mktime(e[1] " " e[2] " " e[3] " " e[4] " " e[5] " " e[6])

		diff=(end*1000+e[7])-(start*1000-s[7])
		print i,diff }
	}' file


Last edited by Jotne; 06-18-2013 at 02:55 AM.. Reason: Added missing {}
# 5  
Old 06-18-2013
I used gensub inside the st and en array assignments to get the UIDs instead of $13, as follows.
Code:
 
st[gensub(/.*&gt;([^&]+).*/,"\\1","")]=$2

And I want to apply the whole awk script only to lines that contain INFO. How would I do that?
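Maybe something like this would work (untested sketch with made-up sample lines; I used match()/substr() instead of gensub here so it is not gawk-specific):

```shell
cat > sample.log <<'EOF'
2013-04-03 10:51:01,808 INFO [Logger] &lt;UID&gt;abc-123&lt;/UID&gt; begin request
2013-04-03 10:51:01,900 DEBUG [Logger] &lt;UID&gt;abc-123&lt;/UID&gt; begin request
2013-04-03 10:51:02,898 INFO [Logger] &lt;UID&gt;abc-123&lt;/UID&gt; complete reponse
EOF

awk '
	# uid(): pull the text between "UID&gt;" and the next "&" out of $0
	function uid() { match($0, /UID&gt;[^&]+/); return substr($0, RSTART+7, RLENGTH-7) }
	# the extra /INFO/ test skips every non-INFO line (the DEBUG one above)
	/INFO/ && /begin request/    { st[uid()] = $1 " " $2 }
	/INFO/ && /complete reponse/ { en[uid()] = $1 " " $2 }
	END { for (i in st) if (i in en) print i, st[i], en[i] }
' sample.log
# -> abc-123 2013-04-03 10:51:01,808 2013-04-03 10:51:02,898
```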

---------- Post updated at 08:47 AM ---------- Previous update was at 08:26 AM ----------

Also, in the last calculation the start milliseconds should be added, not subtracted, like this.

Code:
 
diff=(end*1000+e[7])-(start*1000+s[7])
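Sanity check against the sample log in the first post: the two timestamps are exactly one second apart on the epoch scale, so the corrected formula gives 1*1000 + (898 - 808) = 1090 ms, matching the expected output up top. A quick check with relative epoch values, just to exercise the formula:

```shell
awk 'BEGIN {
	start = 1000; s7 = 808    # start epoch seconds (relative) and millisecond field s[7]
	end   = 1001; e7 = 898    # end epoch seconds (relative) and millisecond field e[7]
	print (end*1000 + e7) - (start*1000 + s7)    # -> 1090
}'
```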

---------- Post updated at 11:20 AM ---------- Previous update was at 08:47 AM ----------

Another blocker: while running this response-time calculation inside another function, I get the time in this format.
Code:
 
1.36501e+12
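That looks like awk's default number formatting rather than a wrong value: print and number-to-string conversion go through OFMT/CONVFMT, which default to "%.6g", so large non-integral millisecond values come out in scientific notation. Forcing the format with printf (or setting OFMT) prints plain digits; a minimal sketch:

```shell
awk 'BEGIN {
	ms = 1365012662898.7      # a big, non-integral millisecond value
	print ms                  # OFMT "%.6g" -> 1.36501e+12
	printf "%.0f\n", ms       # plain notation -> 1365012662899
}'
```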
