Honey, I broke awk! (duplicate line removal in 30M line 3.7GB csv file)

# 1  
Old 03-26-2014

I have a script that builds a database: ~30 million lines in a ~3.7 GB .csv file. After multiple optimizations it takes about 62 min to bring in and parse all the files, and it used to take 10 min to remove duplicates, until I was asked to add another column. I am using the highly optimized awk code:
awk '!($0 in a) { a[$0]; print }'

In this case a[$0] merely initializes the array element without writing to it, and ($0 in a) apparently checks the array's key table without touching the elements themselves.
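For illustration, here is that idiom on a tiny made-up sample (the real input is obviously millions of lines):

```shell
# Keep only the first occurrence of each line:
printf 'a,1\nb,2\na,1\nc,3\n' | awk '!($0 in a) { a[$0]; print }'
# -> a,1  b,2  c,3  (second "a,1" dropped)
```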

The traditional solution took many hours even with test chunks of the data.

I previously tried to debug some corrupted files by adding a filename@path column, and that exceeded the maximum pipe size. I was asked to add a relatively short column; it does not break any pipes, but the deduplication does not complete even after 12 hours.

I've removed several pipes in my script and replaced them with temp files to hold data between commands, but the addition of that short column has caused the duplicate line removal to go from ~10 minutes to God knows how long.

Are there other ways to do the duplicate line removal more efficiently (in segments, sorting first, etc.)?

My environment is:
GNU Awk 4.0.2
GNU bash, version 4.1.10(4)-release
CYGWIN_NT-6.1-WOW64 1.7.17(0.262/5/3) 2012-10-19 14:39 i686 Cygwin
Windows 7 Enterprise Ver 6.1 Build 7601 Service Pack 1

Last edited by Michael Stora; 03-26-2014 at 05:32 PM..
# 2  
Old 03-26-2014
If I recall correctly, your Cygwin (i686 under WOW64) is 32-bit, so 'a' may outgrow the address space of the awk process. Depending on RAM size, it may also start to thrash. And the awk solution, a hash map, does not support parallelism.

The classic, robust solution is 'sort -u <file_set>', but it tends to be slower. You can parallelize the sort with a command of the form:
sort -mu <( sort -u <file_set_1> )  <( sort -u <file_set_2> ) <( sort -u <file_set_3> ) . . . .

where the nicer ksh or bash turns each '<(...)' into a named pipe whose sort runs concurrently. I like running twice as many <(sort)'s as there are cores, assuming 50% I/O delay. The final pass of the <(sort)'s feeds the parent 'sort -m' merge.
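As a concrete sketch with three small, made-up chunk files (bash, since it needs process substitution):

```shell
# Three unsorted chunks with overlapping lines (illustrative data):
printf 'b\na\nb\n' > /tmp/part1.csv
printf 'c\na\n'    > /tmp/part2.csv
printf 'd\nc\n'    > /tmp/part3.csv
# Each <(sort -u ...) sorts and dedups its chunk concurrently;
# 'sort -mu' merges the already-sorted streams and drops cross-chunk duplicates:
sort -mu <(sort -u /tmp/part1.csv) <(sort -u /tmp/part2.csv) <(sort -u /tmp/part3.csv)
# -> a b c d (one per line)
rm -f /tmp/part1.csv /tmp/part2.csv /tmp/part3.csv
```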

ETL programs like Ab Initio know how to tell parallel processes to split up big files and process each part separately, even when the files are linefeed-delimited (the processes all agree to scan up or down for the dividing linefeed closest to the N-byte mark in the file). Does anyone know of a utility that can split a file this way (without reading it sequentially)? GNU parallel, perhaps?
# 3  
Old 03-26-2014
I highly recommend using Linux rather than Cygwin for better performance. You might try a virtual Linux image (e.g. VMware). I also sometimes create a RAM disk (on Linux) to speed things up when I am processing a lot of lines in a file.
# 4  
Old 03-26-2014
I agree that sort is an efficient way to do this. The sort utility in UnxUtils seems to work well if you control where it puts its temporary files.
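For anyone unfamiliar, that control is sort's -T option; on Linux the directory it names could even sit on a tmpfs RAM disk, as suggested above. A minimal sketch with illustrative paths (the tmpfs mount itself would need root):

```shell
# e.g. as root: mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk
# then point sort's temporary spill files there with -T.
# Demo with ordinary temp paths and sample data:
mkdir -p /tmp/sort_spill
printf 'b\na\nb\nc\n' > /tmp/demo.csv
sort -u -T /tmp/sort_spill /tmp/demo.csv
# -> a b c
rm -rf /tmp/sort_spill /tmp/demo.csv
```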
# 5  
Old 03-26-2014
Do you really need to compare the whole line for uniqueness?

You should get a big reduction in memory usage by checking only the first 100 characters of each line:

awk '!(substr($0,1,100) in a) { a[substr($0,1,100)]; print }'
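As a toy demonstration of the trade-off, with a 5-character prefix on made-up data: any two lines sharing the prefix now count as duplicates, so the prefix must cover the fields that actually distinguish records.

```shell
# Dedup on a 5-char prefix only (illustrative data):
printf 'aaaaa,1\nbbbbb,2\naaaaa,9\n' | awk '!(substr($0,1,5) in a) { a[substr($0,1,5)]; print }'
# -> aaaaa,1  bbbbb,2  ("aaaaa,9" dropped: same 5-char prefix)
```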

# 6  
Old 03-26-2014
Thanks for all the suggestions. My workaround so far has been to skip the duplicate line removal and add it to the queries I run against the database instead. This has allowed me to proceed with my analysis.

I will go back and fix the database when the current deadline passes, however.

I will have to check closely which fields alone can indicate a duplicate record. Since the addition of a column (and therefore the greater length of $0) is what broke it, taking something out may help. I'm not sure, though, as I am already taking only the important columns from two types of database files, and the "housekeeping columns" are not included.

Is there a way to do a checksum or a fairly robust hash in awk? That might be the best way to shorten the array keys, which appear to be what is killing awk.


PS. I would love to be using real Linux instead of Cygwin (as I do at home). Unfortunately that is a boundary condition.

Last edited by Michael Stora; 03-26-2014 at 08:03 PM..
# 7  
Old 03-26-2014
Mike, you could look at "Calculating CRC in awk" on Stack Overflow, where a nice POSIX-cksum-compatible crc32() function is posted.

But I'm not sure how much overhead this will add to your code.

---------- Post updated at 09:12 AM ---------- Previous update was at 09:01 AM ----------

Here is a quick adaptation for your task:

awk 'BEGIN {
    # Initialize the CRC32 lookup table.
    # (Only T[1]..T[100] of the 256 entries survived in this post; the full
    # table is in the Stack Overflow answer linked above.)
    T[1]=0x04c11db7;  T[2]=0x09823b6e;  T[3]=0x0d4326d9;  T[4]=0x130476dc;  T[5]=0x17c56b6b;
    T[6]=0x1a864db2;  T[7]=0x1e475005;  T[8]=0x2608edb8;  T[9]=0x22c9f00f; T[10]=0x2f8ad6d6;
   T[11]=0x2b4bcb61; T[12]=0x350c9b64; T[13]=0x31cd86d3; T[14]=0x3c8ea00a; T[15]=0x384fbdbd;
   T[16]=0x4c11db70; T[17]=0x48d0c6c7; T[18]=0x4593e01e; T[19]=0x4152fda9; T[20]=0x5f15adac;
   T[21]=0x5bd4b01b; T[22]=0x569796c2; T[23]=0x52568b75; T[24]=0x6a1936c8; T[25]=0x6ed82b7f;
   T[26]=0x639b0da6; T[27]=0x675a1011; T[28]=0x791d4014; T[29]=0x7ddc5da3; T[30]=0x709f7b7a;
   T[31]=0x745e66cd; T[32]=0x9823b6e0; T[33]=0x9ce2ab57; T[34]=0x91a18d8e; T[35]=0x95609039;
   T[36]=0x8b27c03c; T[37]=0x8fe6dd8b; T[38]=0x82a5fb52; T[39]=0x8664e6e5; T[40]=0xbe2b5b58;
   T[41]=0xbaea46ef; T[42]=0xb7a96036; T[43]=0xb3687d81; T[44]=0xad2f2d84; T[45]=0xa9ee3033;
   T[46]=0xa4ad16ea; T[47]=0xa06c0b5d; T[48]=0xd4326d90; T[49]=0xd0f37027; T[50]=0xddb056fe;
   T[51]=0xd9714b49; T[52]=0xc7361b4c; T[53]=0xc3f706fb; T[54]=0xceb42022; T[55]=0xca753d95;
   T[56]=0xf23a8028; T[57]=0xf6fb9d9f; T[58]=0xfbb8bb46; T[59]=0xff79a6f1; T[60]=0xe13ef6f4;
   T[61]=0xe5ffeb43; T[62]=0xe8bccd9a; T[63]=0xec7dd02d; T[64]=0x34867077; T[65]=0x30476dc0;
   T[66]=0x3d044b19; T[67]=0x39c556ae; T[68]=0x278206ab; T[69]=0x23431b1c; T[70]=0x2e003dc5;
   T[71]=0x2ac12072; T[72]=0x128e9dcf; T[73]=0x164f8078; T[74]=0x1b0ca6a1; T[75]=0x1fcdbb16;
   T[76]=0x018aeb13; T[77]=0x054bf6a4; T[78]=0x0808d07d; T[79]=0x0cc9cdca; T[80]=0x7897ab07;
   T[81]=0x7c56b6b0; T[82]=0x71159069; T[83]=0x75d48dde; T[84]=0x6b93dddb; T[85]=0x6f52c06c;
   T[86]=0x6211e6b5; T[87]=0x66d0fb02; T[88]=0x5e9f46bf; T[89]=0x5a5e5b08; T[90]=0x571d7dd1;
   T[91]=0x53dc6066; T[92]=0x4d9b3063; T[93]=0x495a2dd4; T[94]=0x44190b0d; T[95]=0x40d816ba;
   T[96]=0xaca5c697; T[97]=0xa864db20; T[98]=0xa527fdf9; T[99]=0xa1e6e04e;T[100]=0xbfa1b04b;
   # ... T[101] through T[256] elided ...

   # Init raw-data-to-int lookup table (elided in this copy)
}

# Mask a value down to an unsigned 32-bit integer (gawk bitwise built-ins)
function u32(v) {
   return and(v, 0xffffffff)
}

# CRC32 of str, POSIX-cksum style. The body was lost from this copy; see the
# linked Stack Overflow answer for the full function.
function crc32(str,  crc, i, A) {
   # ... per-byte table lookups over str, then fold in the total length read ...
   # End CRC32 calculation and finalize:
   return u32(compl(crc))
}

# Dedup on a pair of checksums instead of the whole line (note the SUBSEP
# key (v1,v2) rather than plain concatenation, to avoid ambiguous keys):
{ v1 = crc32($0); v2 = crc32("XXX" $0); if ((v1,v2) in a) next; a[v1,v2] } 1'

Edit: Changed to calculate two checksums (v1 and v2) to try to avoid collisions, since you have such a large number of records.

You could also try perl:

#!/usr/bin/perl -w
use Digest::MD5 qw(md5);
my %seen;
while (<>) {
  print $_ unless $seen{md5($_)}++;
}

Last edited by Chubler_XL; 03-26-2014 at 08:54 PM..