Sponsored Content
Operating Systems Linux Need awk script for removing duplicate records Post 302305620 by nmumbarkar on Thursday 9th of April 2009 11:03:58 AM
Old 04-09-2009
Need to filter duplicate records.

Hi All,

I have huge trade file with milions of trades.I need to remove duplicate records (e.g I have following records)
30/10/2009,trdeId1,..,..
26/10/2009.tradeId1,..,..,,
30/10/2009,tradeId2,..

In the above case i need to filter duplicate recods and I should get following output.
30/10/2009,trdeId1,..,..
30/10/2009,tradeId2,..
(trade record with latest COB date)
COB -closed of business day

I need to handle following three conditions.
1.Trade file will be sorted in ascending order on first two columns(COB date and trade id)
2.Trade file will be sorted in descending order on first two columns(COB date and trade id)
3.Trade file may not have duplicate records.
In all the above condition my code should work.

I have written following code.but it doen't seems to be working.As i m new to awk can anybody help me in getting this.

#!/usr/bin/gawk
BEGIN {
FS = ","
}
END {
print prevLine;
}
{
if( FNR == 1)
{
prevDate=$1;
prevSourceTradeId=$2;
prevLine=$0;
}
else
{
if(prevSourceTradeId==$2)
{
if((compareDate(prevDate,$1) == 1))
{
print prevLine;
flag=true
}
else
{
prevDate=$1;
prevLine=$0;
prevSourceTradeId=$2;
print prevLine;
flag=true
}
}
else
{
if(flag)
{
prevDate=$1;
prevSourceTradeId=$2;
prevLine=$0;
}
else print prevLine;
prevDate=$1;
prevSourceTradeId=$2;
prevLine=$0;
flag=false;
}
}
}
}
function compareDate(lhsDate, rhsDate)
{
lhsSize = split(lhsDate, lhsFields, "/");
rhsSize = split(rhsDate, rhsFields, "/");
if(lhsSize != rhsSize)
{
print "Invalid prevDate " lhsDate " "rhsDate;
return 0;
}
for(i = rhsSize; i > 0; i--)
{
if(lhsFields[i] > rhsFields[i]) return 1;
}
return 0;
}
{
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Issues with filtering duplicate records using gawk script

Hi All, I have huge trade file with milions of trades.I need to remove duplicate records (e.g I have following records) 30/10/2009,trdeId1,..,.. 26/10/2009.tradeId1,..,..,, 30/10/2009,tradeId2,.. In the above case i need to filter duplicate recods and I should get following output.... (2 Replies)
Discussion started by: nmumbarkar
2 Replies

2. Shell Programming and Scripting

Removing duplicate records from 2 files

Can anyone help me to removing duplicate records from 2 separate files in UNIX? Please find the sample records for both the files cat Monday.dat 3FAHP0JA1AR319226MOHMED ATEK 966504453742 SAU2010DE 3LNHL2GC6AR636361HEA DEUK CHOI 821057314531 KOR2010LE 3MEHM0JG7AR652083MUTLAB NAL-NAFISAH... (4 Replies)
Discussion started by: zooby
4 Replies

3. Linux

Need awk script for removing duplicate records

I have log file having Traffic line 2011-05-21 15:11:50.356599 TCP (6), length: 52) 10.10.10.1.3020 > 10.10.10.254.50404: 2011-05-21 15:11:50.652739 TCP (6), length: 52) 10.10.10.254.50404 > 10.10.10.1.3020: 2011-05-21 15:11:50.652558 TCP (6), length: 89) 10.10.10.1.3020 >... (1 Reply)
Discussion started by: Rastamed
1 Replies

4. Shell Programming and Scripting

Removing duplicate records in a file based on single column

Hi, I want to remove duplicate records including the first line based on column1. For example inputfile(filer.txt): ------------- 1,3000,5000 1,4000,6000 2,4000,600 2,5000,700 3,60000,4000 4,7000,7777 5,999,8888 expected output: ---------------- 3,60000,4000 4,7000,7777... (5 Replies)
Discussion started by: G.K.K
5 Replies

5. Shell Programming and Scripting

removing duplicate records comparing 2 csv files

Hi All, I want to remove the rows from File1.csv by comparing a column/field in the File2.csv. If both columns matches then I want that row to be deleted from File1 using shell script(awk). Here is an example on what I need. File1.csv: RAJAK,ACTIVE,1 VIJAY,ACTIVE,2 TAHA,ACTIVE,3... (6 Replies)
Discussion started by: rajak.net
6 Replies

6. Shell Programming and Scripting

Removing duplicate records in a file based on single column explanation

I was reading this thread. It looks like a simpler way to say this is to only keep uniq lines based on field or column 1. https://www.unix.com/shell-programming-scripting/165717-removing-duplicate-records-file-based-single-column.html Can someone explain this command please? How are there no... (5 Replies)
Discussion started by: cokedude
5 Replies

7. Shell Programming and Scripting

Help with removing duplicate entries with awk or Perl

Hi, I have a file which looks like:ke this : chr1 11127067 11132181 89 chr1 11128023 11128311 chr1 11130990 11131025 chr1 11127067 11132181 89 chr1 11128023 11128311 chr1 11131583... (22 Replies)
Discussion started by: Amit Pande
22 Replies

8. Homework & Coursework Questions

Script: Removing HTML tags and duplicate lines

Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted! 1. The problem statement, all variables and given/known data: You will write a script that will remove all HTML tags from an HTML document and remove any consecutive... (3 Replies)
Discussion started by: tburns517
3 Replies

9. Shell Programming and Scripting

To select non-duplicate records using awk

Friends, I have data sorted on id like this id addressl 1 abc 2 abc 2 abc 2 abc 3 aabc 4 abc 4 abc I want to pick all ids with addressesses leaving out duplicate records. Desired output would be id address 1 abc 2 abc 3 abc 4 abc (5 Replies)
Discussion started by: paresh n doshi
5 Replies

10. Shell Programming and Scripting

Removing specific records from files when duplicate key

Hello I have been trying to remove a row from a file which has the same first three columns as another row - I have tried lots of different combinations of suggestion on this forum but can't get it exactly right. what I have is 900 - 1000 = 0 900 - 1000 = 2562 1000 - 1100 = 0 1000 - 1100... (7 Replies)
Discussion started by: tinytimmay
7 Replies
All times are GMT -4. The time now is 04:50 AM.
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.
Privacy Policy