Extract and exclude rows based on duplicate values

# 1  
Old 07-05-2017

Hello

I have a file like this:

Code:
> cat examplefile
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss
>

I can sort the file by the second field like this:

Code:
> sort -t\| +1 examplefile
abc|NN190486|qqq
def|NN190486|www
ghi|NN603762|eee
jkl|NN607265|rrr
mno|NN607265|ttt
pqr|NN613879|yyy
vwx|NN615002|iii
stu|NN615002|uuu
yzA|NN618555|ooo
EFG|NN628717|aaa
BCD|NN628717|ppp
HIJ|NN628717|sss
>
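
(I gather the +1 field syntax is obsolescent; I believe the POSIX equivalent would be:)

Code:
# sort on the 2nd '|'-separated field, POSIX -k syntax
sort -t'|' -k2,2 examplefile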

But how can I also extract *only* the records where field 2 is duplicated, and also *only* those where field 2 isn't duplicated?

So I want to get these 2 separate outputs:

duplicated rows:

Code:
abc|NN190486|qqq
def|NN190486|www
jkl|NN607265|rrr
mno|NN607265|ttt
vwx|NN615002|iii
stu|NN615002|uuu
EFG|NN628717|aaa
BCD|NN628717|ppp
HIJ|NN628717|sss

non-duplicated rows:

Code:
ghi|NN603762|eee
pqr|NN613879|yyy
yzA|NN618555|ooo

I've seen some discussions where uniq -D is suggested, but that flag isn't available in my version of uniq (I'm working on Solaris). I suspect something clever can be done with awk to solve this, but I'm afraid that's beyond me.

Any advice appreciated.

Chris
# 2  
Old 07-05-2017
Here you go -

Code:
# cat examplefile.txt
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss

## Duplicate
# awk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] > 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }'  examplefile.txt
stu|NN615002|uuu
vwx|NN615002|iii
mno|NN607265|ttt
jkl|NN607265|rrr
BCD|NN628717|ppp
EFG|NN628717|aaa
HIJ|NN628717|sss
def|NN190486|www
abc|NN190486|qqq

## Non-Duplicate
# awk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] == 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }'  examplefile.txt
yzA|NN618555|ooo
ghi|NN603762|eee
pqr|NN613879|yyy
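
Note that awk's for (i in c) visits the field-2 values in an unspecified order, so the groups may come out in any order. If you want them sorted as in your example, piping through sort should do it, e.g. for the duplicates (untested sketch):

Code:
awk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] > 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }' examplefile.txt | sort -t'|' -k2,2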

# 3  
Old 07-05-2017
Hello CHoggarth,

Could you please try the following and let me know if it helps you.
Code:
awk -F"|" '{A[$2]=A[$2]?A[$2] ORS $0:$0;B[$2]++} END{for(i in B){if(B[i]>1){print A[i] > "more_than_one_file"} else {print A[i] > "one_time_occurence_file"}};close("more_than_one_file");close("one_time_occurence_file");system("echo more_than_one_file;sort -t\"|\" -k2,2 more_than_one_file;echo;echo one_time_occurence_file;sort -t\"|\" -k2,2 one_time_occurence_file")}' Input_file

A non-one-liner solution for the same, too.
Code:
awk -F"|" '{
                A[$2]=A[$2]?A[$2] ORS $0:$0;
                B[$2]++
           }
                END{
                        for(i in B){
                                        if(B[i]>1){
                                                        print A[i] > "more_than_one_file"
                                                  }
                                        else      {
                                                        print A[i] > "one_time_occurence_file"
                                                  }
                                   };
                   }
                END{
                        system("echo more_than_one_file;\
                                sort -k1 more_than_one_file;\
                                echo;\
                                echo one_time_occurence_file;\
                                sort -k1 one_time_occurence_file")
                   }
           '   Input_file

Thanks,
R. Singh
# 4  
Old 07-05-2017
If you'd rather create both files in one step and you don't need the output files to be sorted, the following will read your sample input file twice and produce two output files: duplicated will contain the lines whose field 2 value appears on more than one line, and non-duplicated will contain the lines whose field 2 value appears only once:
Code:
awk -F '|' '
NR == FNR {	# 1st pass: count occurrences of each field 2 value
	c[$2]++
	next
}
c[$2] == 1 {	# 2nd pass: field 2 appeared exactly once
	print > "non-duplicated"
	next
}
{	# otherwise field 2 appeared more than once
	print > "duplicated"
}' examplefile examplefile

If you need the output files to be in sorted order, sort the input file before running the above.
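
For example (a sketch; the examplefile.sorted name is just illustrative):

Code:
# pre-sort on field 2 so both output files come out in sorted order
sort -t'|' -k2,2 examplefile > examplefile.sorted
awk -F '|' '
NR == FNR { c[$2]++; next }
c[$2] == 1 { print > "non-duplicated"; next }
{ print > "duplicated" }' examplefile.sorted examplefile.sorted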

If you are running this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
# 5  
Old 07-05-2017
Try also (-s4 skips past field 1 and its delimiter, and -w8 compares only the 8-character NN value, so this relies on field 1 being fixed width)
Code:
sort -t"|" -k2,2 file | uniq -s4 -w8 -D
abc|NN190486|qqq
def|NN190486|www
jkl|NN607265|rrr
mno|NN607265|ttt
stu|NN615002|uuu
vwx|NN615002|iii
BCD|NN628717|ppp
EFG|NN628717|aaa
HIJ|NN628717|sss
sort -t"|" -k2,2 file | uniq -s4 -w8 -u
ghi|NN603762|eee
pqr|NN613879|yyy
yzA|NN618555|ooo

# 6  
Old 07-06-2017
Many thanks for all the replies. I'm likely going with the solution from Mannu2525, which worked fine on my Solaris server once I changed awk to nawk.
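
(For the record, the only change needed was the command name, e.g. for the duplicates:)

Code:
nawk 'BEGIN { FS="|" } { c[$2]++; l[$2,c[$2]]=$0 } END { for (i in c) { if (c[i] > 1) for (j = 1; j <= c[i]; j++) print l[i,j] } }' examplefile.txt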

I'll try to find time to test the others and let you all know how it goes.

I consider myself a reasonable shell scripter, but these awk constructs are well beyond my abilities. Thanks again.

Chris