Removing duplicates in a sorted file by field.

# 1  
Old 04-29-2008
Removing duplicates in a sorted file by field.

I have data like this, sorted by the 2nd field (the TID):
envoy,90000000000000634600010001,04/11/2008,23:19:27,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/12/2008,04:23:45,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/12/2008,23:14:25,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,04:23:39,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,22:41:58,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,22:42:44,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,22:49:43,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,22:50:45,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/13/2008,22:53:23,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/14/2008,12:38:40,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000634600010001,04/14/2008,12:52:22,RB00266,0015,DETAIL,ERROR,
envoy,90000000000000693200010001,04/17/2008,09:07:09,RB00060,0009,ENVOY,ERROR,26
envoy,90000000000000693200010001,04/18/2008,10:27:13,RB00083,0009,ENVOY,ERROR,26
envoy,90000000000000693200010001,04/18/2008,11:36:27,RB00084,0009,ENVOY,ERROR,26
envoy,90000000000001034800010001,04/01/2008,23:59:15,RB00294,0030,ENVOY,ERROR,57
envoy,90000000000001034800010001,04/02/2008,23:59:12,RB00295,0030,ENVOY,ERROR,57
envoy,90000000000001034800010001,04/03/2008,23:59:11,RB00296,0030,ENVOY,ERROR,57
envoy,90000000000001034800010001,04/04/2008,23:59:08,RB00297,0030,ENVOY,ERROR,57
envoy,90000000000001034800010001,04/05/2008,23:59:04,RB00297,0030,ENVOY,ERROR,57
envoy,90000000000001034800010001,04/06/2008,22:59:06,RB00297,0030,ENVOY,ERROR,57

I want to do the following:
Check the second field to see if the TID is the same as on the previous line. If it is, check the 7th field to see if that is also the same as on the previous line. If both match, I want to remove the line and increment a counter, so that each kept line comes out prefixed with the number of lines it stood for.

My ideal output would look something like this:
11,envoy,90000000000000634600010001,04/11/2008,23:19:27,RB00266,0015,DETAIL,ERROR,
3,envoy,90000000000000693200010001,04/17/2008,09:07:09,RB00060,0009,ENVOY,ERROR,26

etc.

I figure I actually need an awk script rather than a one-liner. The other option would be to treat the last 3 fields as a single field, compare on the TID field and the error field, and then split them back into 3 fields on output.

Any thoughts? I've looked at other posts on removing duplicates with awk, and they're mostly one-liners.

I'd also love some explanation of WHY it works, so that I can modify it if need be.
# 2  
Old 04-29-2008
For the first question you can use the 2nd and the 7th fields together as the key of an associative array:

Code:
awk -F, '
!arr[$2,$7]{arr[$2,$7]=$0;c[$2,$7]++;next}    # first line with this ($2,$7) key: keep it and start its counter
arr[$2,$7]{c[$2,$7]++}                        # key already seen: just increase its counter
END{for(i in arr){print c[i]","arr[i]}}       # print each counter with its saved first line
' file
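
Why it works: awk treats an array element that has never been assigned as empty, so !arr[$2,$7] is true only the first time a given ($2,$7) combination shows up. That first line is saved in arr, its counter is started, and next skips the remaining rules. Every later line with the same combination only increments the counter, and in the END block each saved first line is printed with its count.

One note: for (i in arr) visits the keys in no particular order, so the groups may not come out in input order. On your sample the output would contain, in some order:

Code:
11,envoy,90000000000000634600010001,04/11/2008,23:19:27,RB00266,0015,DETAIL,ERROR,
3,envoy,90000000000000693200010001,04/17/2008,09:07:09,RB00060,0009,ENVOY,ERROR,26
6,envoy,90000000000001034800010001,04/01/2008,23:59:15,RB00294,0030,ENVOY,ERROR,57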

Give an example of the expected output for your other option.
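
Since your file is already sorted on the TID, you could also do it in a single pass that compares each line with the previous one; that keeps the output in input order, which the array version above does not guarantee. A minimal sketch (it assumes the file really is sorted on the 2nd field, as your sample is):

Code:
awk -F, '
NR==1            {p2=$2; p7=$7; line=$0; n=1; next}              # remember the first line of the first group
$2==p2 && $7==p7 {n++; next}                                     # same TID and 7th field as previous line: drop it, count it
                 {print n "," line; p2=$2; p7=$7; line=$0; n=1}  # new group: print the finished one and start over
END              {if (NR) print n "," line}                      # print the last group
' file

On your sample this prints the 11, 3 and 6 counts in the order the groups appear in the file.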

Regards