Search based on 1,2,4,5 columns and remove duplicates in the same file.


# 1  
Search based on 1,2,4,5 columns and remove duplicates in the same file.

Hi,

I need to find duplicate rows in a file based on the 1st, 2nd, 4th and 5th columns, and then remove those duplicates in the same file.


Source filename: Filename.csv

Code:
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","abc","information","7000","temp","concept","new"



Output filename: Filename.csv


Output:

Code:
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","abc","information","7000","temp","concept","new"


Any help greatly appreciated.

thanks
Suri

Last edited by radoulov; 10-25-2010 at 05:12 AM.. Reason: Added code tags!
# 2  
Code:
awk -F, '!_[$1, $2, $4, $5]++' infile

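The awk one-liner above keeps the first row seen for each combination of columns 1, 2, 4 and 5. To update the same file as the original post asks, one common idiom is to write to a temporary file and move it back (a sketch using the sample data from the thread; the temp-file name is illustrative):

```shell
# Sample data from the thread (the real file is Filename.csv in the post)
cat > Filename.csv <<'EOF'
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","abc","information","7000","temp","concept","new"
EOF

# Keep only the first row seen for each (col1, col2, col4, col5) combination,
# then replace the original file with the deduplicated copy. The quoted
# fields compare correctly because the quoting is consistent on every row.
awk -F, '!seen[$1, $2, $4, $5]++' Filename.csv > Filename.csv.tmp &&
  mv Filename.csv.tmp Filename.csv
```

This preserves the original row order, which matches the expected output shown above.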
# 3  
Code:
for i in `sed = file1 | sed -e N -e 's/\(.*\)\n\(.*\)/\1  \2/' \
  | sed -e 's/\(.*\),\(.*\),.*,\(.*\),\(.*\),.*,.*/\1\2\3\4/' \
  | sed 'N;s/\([0-9]*\)  \(.*\)\n[0-9]*  \2/\1 \2/' \
  | sed 's/\([0-9]*\).*/\1/'`
do
  sed -n "$i p" file1
done

Output:

Code:
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","abc","information","7000","temp","concept","new"
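If the original row order does not need to be preserved, `sort -u` restricted to the key fields is another option (a sketch; unlike the awk approach, this re-orders the output by the key columns):

```shell
# Sample data from the thread
cat > Filename.csv <<'EOF'
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","aaa","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","bbb","information","4000","temp","concept","new"
"1","abc","information","7000","temp","concept","new"
EOF

# With -u, sort compares only the listed keys, so rows that are duplicates
# in columns 1, 2, 4 and 5 collapse to a single row. Output is sorted by
# those columns rather than keeping input order.
sort -t, -u -k1,1 -k2,2 -k4,4 -k5,5 Filename.csv > dedup.csv
```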
