Removing duplicate records in a file based on single column


# 1  

Hi,

I want to remove all records whose column 1 value appears more than once, including the first occurrence. For example:

Input file (filer.txt):
-------------
Code:
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888

Expected output:
----------------
Code:
3,60000,4000
4,7000,7777
5,999,8888

Is it possible to achieve this with an awk command?

I tried the awk command below. It works, but it passes the file name (filer.txt) twice, and I am allowed to give the file name only once.
Code:
awk -F"," 'NR == FNR {  cnt[$1] ++} NR != FNR {  if (cnt[$1] == 1) print $0 }' filer.txt filer.txt

Please suggest how to achieve this.

Thanks in advance

# 2  
Use the unique option of the sort command: sort the file with the unique option, then diff the original file against the sorted output, and use the diff to remove those records from the sorted file.
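One concrete sketch of that idea (my reading of it, not necessarily the exact pipeline meant above): instead of diffing, `uniq -u` can print only the keys that occur exactly once, and those keys can then filter the original file.

```shell
# Sketch of the sort/uniq idea: list the keys that appear exactly once,
# then keep only the records carrying those keys.
cat > filer.txt <<'EOF'
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888
EOF

# Keys appearing exactly once, turned into anchored grep patterns (^3, etc.)
cut -d, -f1 filer.txt | sort | uniq -u | sed 's/^/^/; s/$/,/' > keys.re

grep -f keys.re filer.txt
```

Note that this still reads filer.txt more than once, which runs into the single-read constraint from the first post.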
# 3  
Thanks for the reply, jgt. I am only allowed to use awk or sed. Can someone suggest how exactly I can code it as a single command line?



Quote:
Originally Posted by jgt
Use the unique option of the sort command: sort the file with the unique option, then diff the original file against the sorted output, and use the diff to remove those records from the sorted file.
# 4  
Quote:
Originally Posted by G.K.K
I am only allowed to use awk or sed. Can someone suggest how exactly I can code it as a single command line?
Who makes up these rules, and why?
# 5  
Got a solution using a single-line command. Thanks, problem resolved.

Quote:
Originally Posted by jgt
Who makes up these rules, and why?
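The single-line command itself isn't shown here. One plausible way to read the file only once is to buffer it in awk and filter in the END block (a sketch, assuming the file fits in memory):

```shell
# Single-pass awk sketch: count column-1 values while buffering every
# line, then print only the lines whose key occurred exactly once.
cat > filer.txt <<'EOF'
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888
EOF

awk -F, '{cnt[$1]++; key[NR] = $1; line[NR] = $0}
         END {for (i = 1; i <= NR; i++) if (cnt[key[i]] == 1) print line[i]}' filer.txt
```

Unlike the two-file version in the first post, this names filer.txt once, at the cost of holding the whole file in memory.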
# 6  
Hi,

One solution using 'sed':
Code:
$ cat infile
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888
$ sed -ne '$! { /\n/! N; } ; :a ; $! { /^\([0-9]*\),.*\n\1[^\n]\+$/ { N; ba; }; } ; s/^\([0-9]*\),.*\n\1// ; tb ; P ; D ; :b ; D' infile
3,60000,4000
4,7000,7777
5,999,8888

Regards,
Birei
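A commented restatement of that sed one-liner may help (this is my reading of it; it uses GNU sed syntax and assumes lines sharing a key sit on adjacent lines, as in the sample input):

```shell
# How the sed one-liner above works, step by step:
#   $! { /\n/! N; }          keep two lines in the pattern space
#   :a ... { N; ba; }        while the next line starts with the same
#                            key, keep appending (collects the whole run)
#   s/^\([0-9]*\),.*\n\1//   succeeds only when the buffer is a run of
#                            duplicates; greedily deletes through the
#                            last repeat of the key
#   tb ... :b ; D            on success, discard what is left and restart
#   P ; D                    otherwise print the first line, drop it,
#                            and continue with the rest
cat > infile <<'EOF'
1,3000,5000
1,4000,6000
2,4000,600
2,5000,700
3,60000,4000
4,7000,7777
5,999,8888
EOF

sed -ne '$! { /\n/! N; } ; :a ; $! { /^\([0-9]*\),.*\n\1[^\n]\+$/ { N; ba; }; } ; s/^\([0-9]*\),.*\n\1// ; tb ; P ; D ; :b ; D' infile
```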