How to remove duplicate records without sort - Post 302204422 by radoulov, 06-11-2008 02:10 PM
Put this into AwkCommandFile:

Code:
BEGIN { FS = "," }

# Store every input record, keyed by its input line number.
{ _[NR] = $0 }

END {
  m = "%.2f"
  for (i = 1; i <= NR; i++) {
    if (_[i]) {
      # First remaining record of a group: keep it.
      print _[i] > "out1"
      split(_[i], tt)
      delete _[i]
      # Any remaining record whose 2nd and 3rd fields lie within
      # +/- v of the kept record is treated as a duplicate.
      for (j = 1; j <= NR; j++) {
        if (_[j]) {
          split(_[j], t)
          # Round to two decimals, then compare numerically
          # (the +0 forces a numeric, not string, comparison).
          if (sprintf(m, t[2] - v) + 0 <= sprintf(m, tt[2]) + 0 && \
              sprintf(m, tt[2]) + 0 <= sprintf(m, t[2] + v) + 0 && \
              sprintf(m, t[3] - v) + 0 <= sprintf(m, tt[3]) + 0 && \
              sprintf(m, tt[3]) + 0 <= sprintf(m, t[3] + v) + 0) {
            print _[j] > "out2"
            delete _[j]
          }
        }
      }
    }
  }
}

Run it like this:

Code:
awk -f AwkCommandFile v=1.1 InputFile

I pass the v variable on the command line for convenience, so you can run the script with different tolerance values without modifying the code.
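
For illustration, here is a sample run against a small made-up InputFile (fields: id, x, y) with a tolerance of v=1.1; the second record falls within the tolerance of the first on both key fields, so it is diverted to out2:

Code:
$ cat InputFile
345,10.00,20.00
123,10.50,20.40
234,50.00,60.00

$ awk -f AwkCommandFile v=1.1 InputFile

$ cat out1    # one kept representative per group
345,10.00,20.00
234,50.00,60.00

$ cat out2    # records within +/- v of a kept record
123,10.50,20.40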
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Remove all instances of duplicate records from the file

Hi experts, I am new to scripting. I have a requirement as below. File1: A|123|NAME1 A|123|NAME2 B|123|NAME3 File2: C|123|NAME4 C|123|NAME5 D|123|NAME6 1) I have to merge both the files. 2) Need to do a sort (key fields are the first and second fields). 3) Remove all the instances... (3 Replies)
Discussion started by: vukkusila
3 Replies

2. Solaris

How to remove duplicate records with out sort

Can anyone give me a command to delete duplicate records without sorting? Suppose the records are like below: 345,bcd,789 123,abc,456 234,abc,456 712,bcd,789 The output should be 345,bcd,789 123,abc,456 The key for the records is the 2nd and 3rd fields; fields are separated by a comma (,). (2 Replies)
Discussion started by: svenkatareddy
2 Replies
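
For the exact-key case previewed above, a common sketch (assuming comma-separated input and that fields 2 and 3 form the key) is the awk idiom that keeps only the first record seen for each key, preserving input order without any sort:

Code:
awk -F, '!seen[$2,$3]++' InputFile

Here seen is just an illustrative array name; the expression is true only the first time a given key pair is encountered.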

3. Shell Programming and Scripting

Remove duplicate records

I want to remove records based on duplicates: if two or more records exist with the same combination of key fields, those records should not appear even once. File abc.txt: ABC;123;XYB;HELLO; ABC;123;HKL;HELLO; CDE;123;LLKJ;HELLO; ABC;123;LSDK;HELLO; CDF;344;SLK;TEST key fields are... (7 Replies)
Discussion started by: svenkatareddy
7 Replies
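
The thread previewed above wants every record with a duplicated key removed, not just the extra copies. A two-pass awk sketch handles that; treating the first two ;-separated fields as the key is an assumption for illustration, since the preview is truncated:

Code:
awk -F';' 'NR == FNR { cnt[$1,$2]++; next } cnt[$1,$2] == 1' abc.txt abc.txt

The first pass counts each key combination; the second pass prints only the records whose key occurs exactly once.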

4. Shell Programming and Scripting

Remove Duplicate Records

Hi friends, need your help. item , color ,desc ==== ======= ==== 1,red ,abc 1,red , a b c 2,blue,x 3,black,y 4,brown,xv 4,brown,x v 4,brown, x v I have to eliminate the duplicate rows on the basis of item. The final output will be 1,red ,abc (6 Replies)
Discussion started by: imipsita.rath
6 Replies

5. Shell Programming and Scripting

Sort and Remove Duplicate on file

How do we sort and remove duplicates on columns 1,2, retaining the record with the maximum date (in field 3), for a file with the following format? aaa|1234|2010-12-31 aaa|1234|2010-11-10 bbb|345|2011-01-01 ccc|346|2011-02-01 bbb|345|2011-03-10 aaa|1234|2010-01-01 Required Output ... (5 Replies)
Discussion started by: mabarif16
5 Replies
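
One hedged sketch for the task previewed above (keep the latest date per key, assuming pipe-separated fields and ISO dates that sort correctly as strings): sort by the key ascending and the date descending, then keep the first line seen for each key:

Code:
sort -t'|' -k1,1 -k2,2 -k3,3r file | awk -F'|' '!seen[$1,$2]++'

The file name and the seen array are placeholders for illustration.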

6. Shell Programming and Scripting

Remove somewhat Duplicate records from a flat file

I have a flat file that contains records similar to the following two lines: 1984/11/08 7 700000 123456789 2 1984/11/08 1941/05/19 7 700000 123456789 2 The 123456789 2 represents an account number; this is how I identify the duplicate record. The ### signs represent... (4 Replies)
Discussion started by: jolney
4 Replies

7. Shell Programming and Scripting

Remove duplicate lines based on field and sort

I have a csv file that I would like to remove duplicate lines from based on field 1 and sort. I don't care about any of the other fields, but I still want to keep their data intact. I was thinking I could do something like this, but I have no idea how to print the full line with this. Please show any method... (8 Replies)
Discussion started by: cokedude
8 Replies

8. Shell Programming and Scripting

Remove duplicate chars and sort string [SED]

Hi, INPUT: DCBADD OUTPUT: ABCD The SED script should alphabetically sort the chars in the string and remove the duplicate chars. (5 Replies)
Discussion started by: jds93
5 Replies
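
A hedged sketch for the character-sorting request previewed above; it is not a pure sed solution, but a small pipeline that splits the string into one character per line, sorts and de-duplicates, then rejoins, printing ABCD for the sample input:

Code:
printf '%s\n' DCBADD | fold -w1 | sort -u | tr -d '\n'; echo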

9. Shell Programming and Scripting

Remove duplicate records

Hi, I am working on a script that would remove records or lines in a flat file. The only difference in the file is the "NOT NULL" word. Please see the example of the input file below. INPUT FILE:> CREATE a ( TRIAL_CLIENT NOT NULL VARCHAR2(60), TRIAL_FUND NOT NULL... (3 Replies)
Discussion started by: reignangel2003
3 Replies

10. Shell Programming and Scripting

Remove duplicate lines, sort it and save it as file itself

Hi all, I have a csv file that I would like to remove duplicate lines from based on the 1st field and sort by the 1st field. If there is more than one line that is the same in the 1st field, I want to keep the first of them and remove the rest. I think I have to use uniq or something, but I still... (8 Replies)
Discussion started by: refrain
8 Replies
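
For the task previewed above (de-duplicate on field 1, keep the first occurrence, sort, and overwrite the original file), one hedged sketch, assuming comma-separated data in a file named file.csv:

Code:
awk -F, '!seen[$1]++' file.csv | sort -t, -k1,1 > file.csv.tmp && mv file.csv.tmp file.csv

Writing to a temporary file and then renaming avoids truncating the input while it is still being read.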