Removing duplicate records in a file based on single column: explanation
Post 302608683 by Scrutinizer, Sunday 18 March 2012
$1 refers to field 1. awk reads the input line by line, and for every line it increments the counter C[$1], an array element keyed by the first field. For example, if we take your input file:

1,3000,5000     C[1]=1
1,4000,6000     C[1]=2
2,4000,600      C[2]=1
2,5000,700      C[2]=2
3,60000,4000    C[3]=1
4,7000,7777     C[4]=1
5,999,8888      C[5]=1

Then awk rereads the file (now NR!=FNR, so the first part is skipped and only the condition C[$1]==1 is tested):


1,3000,5000     C[1]==2    not printed
1,4000,6000     C[1]==2    not printed
2,4000,600      C[2]==2    not printed
2,5000,700      C[2]==2    not printed
3,60000,4000    C[3]==1    printed
4,7000,7777     C[4]==1    printed
5,999,8888      C[5]==1    printed

C[$1]==1 means "if C[$1] equals 1". In this case there is no {...} action part after this condition, so the default action is performed, which is {print $0}
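The one-liner itself is not quoted in this post; based on the description (one counting pass, then a reread that prints the rows whose first field occurred exactly once), it was presumably something like this, with the comma as field separator and the input file given twice:

  awk -F, 'NR==FNR{C[$1]++; next} C[$1]==1' filer.txt filer.txt

On the first pass NR==FNR holds, so the counting block runs and next skips the rest; on the second pass only the condition is evaluated, and the default action prints the surviving rows (3,60000,4000 / 4,7000,7777 / 5,999,8888).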
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Filtering records of a file based on a value of a column

Hi all, I would like to extract records of a file based on a condition. The file contains 47 fields, and I would like to extract only those records that match a certain value in one of the columns, e.g. COL1 COL2 COL3 ............... COL47 1 XX 45 ... (4 Replies)
Discussion started by: risk_sly
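A minimal sketch for this kind of filter, assuming whitespace-separated fields and that the condition is "third column equals 45" (both assumptions, since the post is truncated):

  awk '$3 == 45' file

With 47 fields the pattern is the same; only the field number and the comparison change.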

2. Linux

Need awk script for removing duplicate records

I have a huge txt file containing millions of trade records. For e.g. Trade.txt (the first 8 lines in the file are header info) COB_DATE,TRADE_ID,SOURCE_SYSTEM_TRADE_ID,TRADE_GROUP_ID, TRADE_TYPE,DEALER_NAME,EXTERNAL_COUNTERPARTY_ID, EXTERNAL_COUNTERPARTY_NAME,DB_COUNTERPARTY_ID,... (6 Replies)
Discussion started by: nmumbarkar
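A hedged sketch for this case, assuming comma-separated input, that the 8 header lines should be kept, and that TRADE_ID (field 2) is the deduplication key; the key column is an assumption, since the post is truncated:

  awk -F, 'FNR<=8 {print; next} !seen[$2]++' Trade.txt

The !seen[$2]++ idiom prints a record only the first time its key appears, so memory grows with the number of distinct keys rather than with the file size.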

3. Shell Programming and Scripting

Find Duplicate records in first Column in File

Hi, I need to find duplicate records based on the first column: ANU4501710430989 0000000W20389390 ANU4501710430989 0000000W67065483 ANU4501130050520 0000000W80838713 ANU4501210170685 0000000W69246611... (3 Replies)
Discussion started by: Murugesh
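One way to report the repeated keys, assuming whitespace-separated input:

  awk 'seen[$1]++ { print $1 }' file

This prints the first field each time it recurs after its first occurrence; to print every full record whose key is duplicated, the two-pass approach explained above works with C[$1]>1 as the condition.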

4. Shell Programming and Scripting

Removing duplicate records from 2 files

Can anyone help me remove duplicate records from 2 separate files in UNIX? Please find sample records for both files: cat Monday.dat 3FAHP0JA1AR319226MOHMED ATEK 966504453742 SAU2010DE 3LNHL2GC6AR636361HEA DEUK CHOI 821057314531 KOR2010LE 3MEHM0JG7AR652083MUTLAB NAL-NAFISAH... (4 Replies)
Discussion started by: zooby
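A minimal sketch, under the assumption that the goal is to print the lines of the second file that do not appear in the first (the post does not say which file should win, and Tuesday.dat is a hypothetical name for the second file):

  awk 'NR==FNR { a[$0]; next } !($0 in a)' Monday.dat Tuesday.dat

The first file is loaded into an array keyed by the whole line, and the second file is then filtered against it.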

5. Shell Programming and Scripting

duplicate row based on single column

I am a newbie to shell scripting. I have a .csv file with about 1000 rows and 7 columns, but before I insert this data into a table I have to parse and clean it based on the value of the first column, which is a string of phone-number type... example below.. column 1 ... (2 Replies)
Discussion started by: mitr

6. Shell Programming and Scripting

Removing duplicate records in a file based on single column

Hi, I want to remove duplicate records, including the first occurrence, based on column1. For example inputfile(filer.txt): ------------- 1,3000,5000 1,4000,6000 2,4000,600 2,5000,700 3,60000,4000 4,7000,7777 5,999,8888 expected output: ---------------- 3,60000,4000 4,7000,7777... (5 Replies)
Discussion started by: G.K.K

7. UNIX for Dummies Questions & Answers

Remove duplicate rows when >10 based on single column value

Hello, I'm trying to delete duplicates when there are more than 10 duplicates, based on the value of the first column. e.g. a 1 a 2 a 3 b 1 c 1 gives b 1 c 1 but requires 11 duplicates before it deletes. Thanks for the help (11 Replies)
Discussion started by: informaticist
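This is a natural variation of the two-pass counting approach explained above; a sketch, assuming whitespace-separated input:

  awk 'NR==FNR { c[$1]++; next } c[$1] <= 10' file file

Rows whose first-column key occurs at most 10 times are kept; once a key reaches 11 or more occurrences, every row carrying it is dropped.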

8. Shell Programming and Scripting

Removing duplicate lines on first column based with pipe delimiter

Hi, I have tried to remove duplicate lines based on the first column with a pipe delimiter, but I am not able to get unique lines. Command: sort -t'|' -nuk1 file.txt Input: 38376KZ|09/25/15|1.057 38376KZ|09/25/15|1.057 02006YB|09/25/15|0.859 12593PS|09/25/15|2.803... (2 Replies)
Discussion started by: parithi06
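A single-pass alternative that keeps the first line seen for each key, assuming the first pipe-delimited field is the key:

  awk -F'|' '!seen[$1]++' file.txt

Unlike sort -u, this preserves the original input order, and it does not depend on the key sorting numerically (the -n flag is a poor fit for alphanumeric keys like 38376KZ).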

9. Shell Programming and Scripting

Filter duplicate records from csv file with condition on one column

I have a csv file with 30-40 columns; pasting just three columns for problem description. I want to filter a record if column 1 matches CN or DN, then check the values in column 2: if column 2 contains 1235, 1235 then the column 3 values must be the sequence 2345, 2345, and if column 2 contains 6789, 6789... (5 Replies)
Discussion started by: as7951

10. Shell Programming and Scripting

CSV File:Filter duplicate records from column1 & another column having unique record

Hi Experts, I have a csv file with 30-40 columns; pasting just 2 columns for problem description. Need to print an error if the below combination is not present in the file: check column-1 (DocumentNumber) and filter rows where the value in the DocumentNumber field is the same. For all such rows, the field... (7 Replies)
Discussion started by: as7951