How to extract duplicate rows


 
# 1  
Old 01-17-2008

I have searched the internet for ways to extract duplicate rows.
All I have found is how to extract unique rows or how to eliminate duplicate rows.

How do I extract the duplicate rows from a flat file in UNIX?
I'm using the Korn shell on HP-UX.

For example:
FlatFile.txt
========
123:456:678
123:456:678

123:456:876
345:457:987
345:457:987

345:123:745

The output should be
OutPutFile.txt
============
123:456:678
345:457:987

I appreciate your help in advance. Thanks
# 2  
Old 01-17-2008
Code:
awk '
{ s[$0]++ }                  # count how many times each full line occurs
END {
  for(i in s) {
    if(s[i]>1) {             # lines seen more than once are duplicates
      print i
    }
  }
}' file

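Assuming the blank lines shown in the sample FlatFile.txt are just post formatting and not part of the file, this prints each duplicated line exactly once. The order of the output may differ from the file order, since awk's array traversal is unordered. A quick sketch:

Code:
$ awk '{s[$0]++} END{for(i in s) if(s[i]>1) print i}' FlatFile.txt > OutPutFile.txt
$ cat OutPutFile.txt
123:456:678
345:457:987

If the file really does contain empty lines, they count as duplicates too; adding a pattern such as NF so only non-empty lines are counted avoids that.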
Regards
# 3  
Old 01-17-2008
Or, of course, if sorting is not a problem:


Code:
sort filename | uniq -d


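Here sort groups identical lines together and uniq -d then prints one copy of each line that occurs more than once. For the same sample file this should give (output is in sorted order rather than file order):

Code:
$ sort FlatFile.txt | uniq -d
123:456:678
345:457:987

The same caveat about empty lines applies: if the file contains more than one blank line, a blank line will appear in the output as well.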
# 4  
Old 01-18-2008
Great, both scripts worked.

Thanks, Franklin52 and radoulov.
# 5  
Old 11-20-2008
Does this work when there are spaces within the data? It does not seem to for me.

For example:

1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518
# 6  
Old 11-20-2008
What does not work when there are spaces? $0 in awk refers to the entire row, spaces and all.
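To illustrate, with the space-separated sample above saved as data.txt (a hypothetical file name), the awk script keys on the whole line, spaces included, and reports all four duplicated rows; the order of the output may vary because awk's array traversal is unordered:

Code:
$ awk '{s[$0]++} END{for(i in s) if(s[i]>1) print i}' data.txt
1231080 5000104891 21592002082811037
1231080 5000104892 27492002082821037
934262 5000021182 27502002040110518
934262 5000021181 21552002040120518

Note that rows must be byte-for-byte identical to count as duplicates; a stray trailing space or tab makes two otherwise equal records different.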