Removing duplicate rows & selecting only latest date

# 1  


I need to remove duplicate rows from a file based on the data in the first column, but where duplicates exist I need to keep the row with the latest date in the 13th column.


Input File:

0015198310301016542R1|0015|001519831030101654||2|2||F|GBP|20050905||20151003|20091103|||0|1,000000|1,000000||5,340000|5,340000|||0,000000|||20090609|20090609|20090609
0015198310301016542R1|0015|001519831030101654||2|2||F|GBP|20050905||20151003|20151103|||0|1,000000|1,000000||5,340000|5,340000|||0,000000|||20090609|20090609|20090609
0015198310301016542R1|0015|001519831030101654||2|2||F|GBP|20050905||20151003||||0|1,000000|1,000000||5,340000|5,340000|||0,000000|||20090609|20090609|20090609
0015198310301016543E1|0015|001519831030101654||2|2||V|GBP|20040923||20170903||||0|1,000000|1,000000||1,500000|1,500000|||0,000000|||20090609|20090609|20090609
0015198310301016543E1|0015|001519831030101654||2|2||V|GBP|20040923||20170903||||0|1,000000|1,000000||1,500000|1,500000|||0,000000|||20090609|20090609|20090609
Output File:

0015198310301016542R1|0015|001519831030101654||2|2||F|GBP|20050905||20151003|20151103|||0|1,000000|1,000000||5,340000|5,340000|||0,000000|||20090609|20090609|20090609
0015198310301016543E1|0015|001519831030101654||2|2||V|GBP|20040923||20170903||||0|1,000000|1,000000||1,500000|1,500000|||0,000000|||20090609|20090609|20090609
I know how to take out the duplicates, but I couldn't figure out how to select the latest date based on the 13th column.

Can you please help me?

# 2  
Some of the rows don't have a value in the 13th column. What should happen with those?
# 3  
Try this:

perl -F'\|' -lane 'if(!exists $d{$F[0]} || $F[12] gt $d{$F[0]}){$d{$F[0]}=$F[12];$r{$F[0]}=$_}}{print $r{$_} for sort keys %r' input

The 13th column is $F[12], since autosplit is zero-indexed. The string comparison gt works for YYYYMMDD dates and treats an empty field as older than any real date.

# 4  
If the column position is consistent throughout the file, use awk to extract that field first, e.g. awk -F'|' '{print $13}', sort it, and then pick the latest value with tail -1 or head -1, depending on the sort order.
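The extract-and-pick-latest idea can also be combined into one pipeline for the sample above. This is a sketch, with input.txt as a placeholder filename; note that it reorders the output by key rather than keeping the original file order:

```shell
# Sort by the key in column 1, then by the 13th column in reverse order,
# so the newest date for each key comes first (an empty 13th field sorts
# after any real date); then keep only the first row seen for each key.
sort -t'|' -k1,1 -k13,13r input.txt |
awk -F'|' '!seen[$1]++'
```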
# 5  
Let's say a file has rows with 2 columns and the file is sorted. The command below looks for duplicates based on the data in column 1. If a key has no duplicates, its row passes through unchanged; if duplicates are found, it compares the column-2 values and keeps the row with the largest value in column 2.




awk -F"|" '{if(!($1 in a)){a[$1]=$2;b[++i]=$0;c[$1]=i} else if($2 > a[$1]){a[$1]=$2;b[c[$1]]=$0}} END {for(j=1;j<=i;j++){print b[j]}}' inputfile.txt

Testing with ($1 in a) instead of a[$1] keeps keys whose first value is empty or 0 from being stored twice, and remembering each key's slot in c[] makes this work even when duplicates are not adjacent.


Try this on your file after changing the column references (here $2 would become $13).
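Adapted to the pipe-delimited sample above, a sketch of the same idea might look like this (input.txt is a placeholder filename; an empty 13th field loses to any real date, and the original key order is preserved):

```shell
# Keep, for each value of column 1, the row with the largest value in
# column 13, printing survivors in the order their keys first appeared.
awk -F'|' '
  !($1 in a) { a[$1]=$13; b[++i]=$0; idx[$1]=i; next }  # first time this key is seen
  $13 > a[$1] { a[$1]=$13; b[idx[$1]]=$0 }              # a later date replaces the stored row
  END { for (j=1; j<=i; j++) print b[j] }               # emit rows in first-seen key order
' input.txt
```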
# 6  
Thanks Sheel!

If I use your code as-is, I can get rid of the duplicates, but it is not selecting the latest date. Can you please let me know what change is required to select the latest date in the 13th column?


---------- Post updated at 10:34 AM ---------- Previous update was at 10:11 AM ----------

Just figured it. Thanks a lot for the help.

Last edited by shash; 06-08-2011 at 01:20 PM.. Reason: Got the code to work a bit

