How to remove duplicate records without sort


 
# 15 (06-09-2008)
Hi radoulov,

I think this might be more than just a one-liner, eh?


10-a,1000010.0,500010.0,110.0
2-b,1000002.2,500002.2,102.3
9-b,1000009.9,500009.9,110.0
10-b,1000010.0,500010.0,110.1
9-a,1000009.9,500009.9,109.9
8-a,1000008.8,500008.8,108.8
3-b,1000003.3,500003.3,103.4
8-b,1000008.8,500008.8,108.9
7-a,1000007.7,500007.7,107.7
7-b,1000007.7,500007.7,107.8
6-b,1000006.6,500006.6,106.7
3-a,1000003.3,500003.3,103.3
6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6
4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5
2-a,1000002.2,500002.2,102.2
1-a,1000001.1,500001.1,101.1
1-b,1000001.1,500001.1,101.2


Not sure what the output would be exactly.

But the objective would be to create two output files.

OutputFile1 would hold the remaining records (i.e., with the duplicates removed).
OutputFile2 would be the duplicates.

I apologize for the long-winded description, but I am not a programmer.

#####

The objective would be achieved by taking fields 2 and 3 from the first record

10-a,1000010.0,500010.0,110.0

and comparing them to fields 2 and 3 from the remainder of the records,
within the range $2-1.1 to $2+1.1 and range $3-1.1 to $3+1.1,
to determine which of the remaining records are to be considered duplicates.

I think the duplicates would be:

9-b,1000009.9,500009.9,110.0
10-b,1000010.0,500010.0,110.1
9-a,1000009.9,500009.9,109.9

because their $2 and $3 fields satisfy the ranges.
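That window test can be tried on its own; here is a minimal sketch (the file name sample.csv and the keeper/duplicate labels are mine, not from any posted solution):

```shell
# A few of the records above, with 10-a first (hypothetical file: sample.csv)
printf '%s\n' \
  '10-a,1000010.0,500010.0,110.0' \
  '9-b,1000009.9,500009.9,110.0' \
  '10-b,1000010.0,500010.0,110.1' \
  '2-b,1000002.2,500002.2,102.3' > sample.csv

# Fields 2 and 3 of the first record define the window; a later record is a
# duplicate only when BOTH its fields 2 and 3 fall within +/- v of them.
awk -F, -v v=1.1 '
  NR == 1 { r2 = $2; r3 = $3; print "keeper:    " $0; next }
  (r2 - v <= $2 && $2 <= r2 + v) && (r3 - v <= $3 && $3 <= r3 + v) {
    print "duplicate: " $0
  }' sample.csv
```

With v=1.1, 9-b and 10-b are flagged as duplicates of 10-a, while 2-b falls outside the window.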


So with record

10-a,1000010.0,500010.0,110.0

marked as a keeper and written to OutputFile1,

and with records

9-b,1000009.9,500009.9,110.0
10-b,1000010.0,500010.0,110.1
9-a,1000009.9,500009.9,109.9

identified as duplicates and written to OutputFile2.

These 4 records would now be removed from processing, possibly by creating a new temporary file/list with these 4 records removed.

This would be the new temporary file/list with the 4 records removed:

2-b,1000002.2,500002.2,102.3
8-a,1000008.8,500008.8,108.8
3-b,1000003.3,500003.3,103.4
8-b,1000008.8,500008.8,108.9
7-a,1000007.7,500007.7,107.7
7-b,1000007.7,500007.7,107.8
6-b,1000006.6,500006.6,106.7
3-a,1000003.3,500003.3,103.3
6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6
4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5
2-a,1000002.2,500002.2,102.2
1-a,1000001.1,500001.1,101.1
1-b,1000001.1,500001.1,101.2


Now the "new first record" would be:

2-b,1000002.2,500002.2,102.3

and gets written to OutputFile1.

Compare its fields 2 and 3
to fields 2 and 3 of the remaining records
in the temporary list,
within the range $2-1.1 to $2+1.1 and range $3-1.1 to $3+1.1,
to determine which of the remaining records are to be considered duplicates.


I think the duplicates would be:

3-b,1000003.3,500003.3,103.4
3-a,1000003.3,500003.3,103.3
2-a,1000002.2,500002.2,102.2
1-a,1000001.1,500001.1,101.1
1-b,1000001.1,500001.1,101.2

and get written to OutputFile2


The new temporary file/list with these 6 records removed would be:

8-a,1000008.8,500008.8,108.8
8-b,1000008.8,500008.8,108.9
7-a,1000007.7,500007.7,107.7
7-b,1000007.7,500007.7,107.8
6-b,1000006.6,500006.6,106.7
6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6
4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5


8-a,1000008.8,500008.8,108.8
becomes the "new first record"
and gets written to OutputFile1

8-b,1000008.8,500008.8,108.9
7-a,1000007.7,500007.7,107.7
7-b,1000007.7,500007.7,107.8

are the duplicates and get written to OutputFile2


The new temporary file with these 4 records removed would be:

6-b,1000006.6,500006.6,106.7
6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6
4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5

6-b,1000006.6,500006.6,106.7
becomes the "new first record"
and gets written to OutputFile1

6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6

are the duplicates and get written to OutputFile2


The new temporary file with these 4 records removed would be:

4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5


4-a,1000004.4,500004.4,104.4
becomes the "new first record"
and gets written to OutputFile1

4-b,1000004.4,500004.4,104.5

is the duplicate and gets written to OutputFile2

Done
Whew!

Best Regards,
Kenny.
# 16 (06-11-2008)
Ok,
not a one-liner!

It's all about casting to numbers.

Code:
awk -F, '{ _[NR] = $0 }                  # slurp the whole file into memory
END {
  m = "%.2f"                             # format used to normalize the numbers
  for (i = 1; i <= NR; i++) {
    if (_[i]) {
      print _[i] > "out1"                # first surviving record is a keeper
      split(_[i], tt)
      delete _[i]
      for (j = 1; j <= NR; j++) {
        if (_[j]) {
          split(_[j], t)
          if ((sprintf(m, t[2] - v) <= sprintf(m, tt[2]) && sprintf(m, tt[2]) <= sprintf(m, t[2] + v)) && \
              (sprintf(m, t[3] - v) <= sprintf(m, tt[3]) && sprintf(m, tt[3]) <= sprintf(m, t[3] + v))) {
            print _[j] > "out2"          # inside the window: a duplicate
            delete _[j]
          }
        }
      }
    }
  }
}' v=1.1 file
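To check the script against the walkthrough in post #15, the twenty sample records can be fed through it: out1 should end up with the 5 keepers (10-a, 2-b, 8-a, 6-b, 4-a) and out2 with the other 15 duplicates. A reproduction sketch (sample.csv is a placeholder name; out1/out2 are the names hard-coded in the script):

```shell
# The twenty sample records from post #15, in the original order
cat > sample.csv <<'EOF'
10-a,1000010.0,500010.0,110.0
2-b,1000002.2,500002.2,102.3
9-b,1000009.9,500009.9,110.0
10-b,1000010.0,500010.0,110.1
9-a,1000009.9,500009.9,109.9
8-a,1000008.8,500008.8,108.8
3-b,1000003.3,500003.3,103.4
8-b,1000008.8,500008.8,108.9
7-a,1000007.7,500007.7,107.7
7-b,1000007.7,500007.7,107.8
6-b,1000006.6,500006.6,106.7
3-a,1000003.3,500003.3,103.3
6-a,1000006.6,500006.6,106.6
5-a,1000005.5,500005.5,105.5
5-b,1000005.5,500005.5,105.6
4-a,1000004.4,500004.4,104.4
4-b,1000004.4,500004.4,104.5
2-a,1000002.2,500002.2,102.2
1-a,1000001.1,500001.1,101.1
1-b,1000001.1,500001.1,101.2
EOF

rm -f out1 out2          # start from a clean slate
awk -F, '{ _[NR] = $0 }
END {
  m = "%.2f"
  for (i = 1; i <= NR; i++)
    if (_[i]) {
      print _[i] > "out1"; split(_[i], tt); delete _[i]
      for (j = 1; j <= NR; j++)
        if (_[j]) {
          split(_[j], t)
          if (sprintf(m, t[2] - v) <= sprintf(m, tt[2]) && sprintf(m, tt[2]) <= sprintf(m, t[2] + v) && \
              sprintf(m, t[3] - v) <= sprintf(m, tt[3]) && sprintf(m, tt[3]) <= sprintf(m, t[3] + v)) {
            print _[j] > "out2"; delete _[j]
          }
        }
    }
}' v=1.1 sample.csv

wc -l out1 out2          # expect 5 keepers and 15 duplicates
```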

# 17 (06-11-2008)
Can you explain this?


sed -n '/'${i}'/{p;q;}' filename
# 18 (06-11-2008)
Quote:
Originally Posted by vijay_0209
Can you explain this?


sed -n '/'${i}'/{p;q;}' filename
Please don't hijack other people's threads; start a new thread instead.
# 19 (06-11-2008)
Hi radoulov,

I am not having any success (yet).

But I know that with your expertise (and patience) I will eventually find the solution. And I appreciate the time that you are taking on this puzzle.

First, I probably should come clean and confess that I am not on Unix.
My operating system is Microsoft Windows XP, so I am hoping that I won't be cast out of the forum!

I have some Unix-for-Windows utilities, mainly awk, awk95, mawk, and gawk.

Which of these flavours would your code work with?

I have found that things like ' and " are handled differently between Unix and Microsoft Windows. So maybe that is why I get error messages when using your code.

One other question, though: if you put the awk commands in a file (to be used with awk -f), does the v=1.1 go inside the awk command file or outside on the command line?

Examples of the error messages are below.
These came after putting your awk commands in a file, with the v=1.1 inside the file. Note that I did not edit a single character of the awk command string.
The reason I put the awk commands in a file is this: I had edited them into one long command line and used it in my DOS command window, but the operating system said it couldn't find my input file. There were no errors from mawk.

=====

awk -f AwkCommandFile -F, InputFile

'{ _[NR] = $0 }
^
awk ERROR temp2.awk line 1: syntax error

=====

awk95 -f AwkCommandFile -F, InputFile

awk95: syntax error at source line 1 source file AwkCommandFile
context is
>>> ' <<<
awk95: bailing out at source line 21

=====

gawk -f AwkCommandFile -F, InputFile

gawk: AwkCommandFile:1: '{ _[NR] = $0 }
gawk: AwkCommandFile:1: ^ invalid char ''' in expression

=====

mawk -f AwkCommandFile -F, InputFile

mawk: 1: unexpected character '''
mawk: 21: unexpected character '''

=====

The contents of the AwkCommandFile were:

'{ _[NR] = $0 }
END {
m = "%.2f"
for (i=1; i<=NR; i++) {
if (_[i]) {
print _[i] > "out1"
split(_[i], tt)
delete _[i]
for (j=1; j<=NR; j++) {
if (_[j]) {
split(_[j], t)
if ((sprintf(m, t[2] - v) <= sprintf(m, tt[2]) && sprintf(m, tt[2]) <= sprintf(m, t[2] + v)) && \
(sprintf(m, t[3] - v) <= sprintf(m, tt[3]) && sprintf(m, tt[3]) <= sprintf(m, t[3] + v))) {
print _[j] > "out2"
delete _[j]
}
}
}
}
}
}' v=1.1


Again,
Many Thanks,
Kenny.
# 20 (06-11-2008)
Put this into AwkCommandFile:

Code:
BEGIN { FS = "," }
{ _[NR] = $0 }
END {
  m = "%.2f"
  for (i = 1; i <= NR; i++) {
    if (_[i]) {
      print _[i] > "out1"
      split(_[i], tt)
      delete _[i]
      for (j = 1; j <= NR; j++) {
        if (_[j]) {
          split(_[j], t)
          if ((sprintf(m, t[2] - v) <= sprintf(m, tt[2]) && \
               sprintf(m, tt[2]) <= sprintf(m, t[2] + v)) && \
              (sprintf(m, t[3] - v) <= sprintf(m, tt[3]) && \
               sprintf(m, tt[3]) <= sprintf(m, t[3] + v))) {
            print _[j] > "out2"
            delete _[j]
          }
        }
      }
    }
  }
}

Run it like this:

Code:
awk -f AwkCommandFile v=1.1 InputFile

I put the v variable outside the script for convenience (you can run the script with different values without modifying the code).
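To see why keeping v on the command line is convenient, here is a tiny stand-in program run twice with different values (countwide.awk and its threshold rule are made up for illustration; the real script is the AwkCommandFile above):

```shell
# A stand-in one-rule program, written to a file once
cat > countwide.awk <<'EOF'
BEGIN { FS = "," }
$4 > v { n++ }          # count records whose 4th field exceeds v
END { print n + 0 }
EOF

# Same program file, different v per run; no edits to the code
printf '%s\n' '1-a,1000001.1,500001.1,101.1' \
              '5-b,1000005.5,500005.5,105.6' \
              '8-a,1000008.8,500008.8,108.8' > in.csv
awk -f countwide.awk v=105 in.csv    # prints 2
awk -f countwide.awk v=108 in.csv    # prints 1
```

A command-line assignment like v=105 takes effect before the input file that follows it is read, so it is visible to the rules and the END block alike.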