Honey, I broke awk! (duplicate line removal in 30M line 3.7GB csv file)


 
# 22  
Old 03-28-2014
I am aware of the collision issues. There is a fair amount of entropy in the data (timestamps), so patterned input probably isn't tilting the odds against me.

Does Perl support SHA-1 or SHA-2? That would take the odds from very, very unlikely to (even more) astronomically remote.
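(Rough back-of-envelope check, assuming ~30 million uniformly hashed lines: the birthday approximation p ≈ n^2 / 2^(b+1) works out to about 9x10^14 / 2^129 ≈ 10^-24 for 128-bit MD5, and on the order of 10^-63 for 256-bit SHA-256; negligible in practice either way.)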

Mike
# 23  
Old 03-28-2014
Perl supports everything and anything, but I'm not sure it'd come with it by default. Try it and see. Digest::SHA2 - search.cpan.org
# 24  
Old 03-28-2014
Quote:
Originally Posted by Corona688
Perl supports everything and anything, but I'm not sure it'd come with it by default. Try it and see. Digest::SHA2 - search.cpan.org
Looks like I should try the regular Digest::SHA instead:
Quote:
This module has numerous known bugs, is not compatible with the Digest interface and its functionality is a subset of the functionality of Digest::SHA (which is in perl core as of 5.9.3).
Please use Digest::SHA instead of this module in new and old code.
It looks like SHA even supports Base64 output, which will keep the associative array keys to a minimum size (an oversized array is what I suspect was breaking my initial awk routine).
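For what it's worth, a minimal sketch of the Digest::SHA version, assuming the whole line is the dedup key (the fields that actually define a duplicate may differ):

Code:
#!/usr/bin/perl
# Sketch: one-pass duplicate removal keyed on Base64 SHA-256 digests.
use strict;
use warnings;
use Digest::SHA qw(sha256_base64);    # in the Perl core since 5.9.3

my %seen;
while ( my $line = <STDIN> ) {
    # 43-character Base64 digest instead of the full CSV line as the array key
    print $line unless $seen{ sha256_base64($line) }++;
}

Run as something like perl dedup_sha.pl < input.csv > output.csv (script and file names are placeholders).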

Mike

# 25  
Old 04-01-2014
You did not say how much RAM you have, which is a definite factor.

The MD5 result is impressive. Is MD5 as cheap as a good hash? Perhaps CPUs have gotten so much faster than disk that it is not a factor!

In Perl/C/C++/Java you can mmap the input file, both for reading and so that the hash map can hold a 64-bit char* into the mapped data for exact verification, reducing copying and space-allocation overhead.

Exact verification sounds like it should expand the VM footprint a lot, but in the most likely case the MD5 or other hash is new and the exact compare never happens, so with a small minority of duplicates both the processing and the VM footprint stay modest. If there were a lot of duplicates, the extra exact verifications would hurt the VM footprint, but conversely there would be less final data in the map.
# 26  
Old 04-01-2014
Quote:
Originally Posted by DGPickett
You did not say how much RAM you have, which is a definite factor. [...]
I have 16 GB of RAM, but my "disks" are also solid state. It's a fast system. The MD5 solution is working well for me.

Mike
# 27  
Old 04-01-2014
Yes, I had considered a method of storing the file position against each MD5 sum; when a potential collision occurs, one could then fseek back and re-read the original data to verify a duplicate. This would only double the memory required, but it does need a random-access file, so no stream processing.

As long as the frequency of duplicates is low, I wouldn't expect a significant impact on speed.
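In Perl that idea might look roughly like this (file and script names are hypothetical; only one offset is remembered per digest, so two genuinely different lines that happen to collide are simply both kept):

Code:
#!/usr/bin/perl
# Sketch: MD5 keys plus seek-back exact verification on collisions.
# Requires a seekable input file, so no stream processing.
use strict;
use warnings;
use Digest::MD5 qw(md5);

my $file = shift or die "usage: $0 file.csv\n";
open my $in,  '<', $file or die "$file: $!";
open my $chk, '<', $file or die "$file: $!";   # second handle for re-reads

my %pos;    # raw 16-byte MD5 digest -> file offset of first occurrence
until ( eof $in ) {
    my $offset = tell $in;          # position of the line about to be read
    my $line   = <$in>;
    my $d      = md5($line);

    if ( defined $pos{$d} ) {
        # Same digest seen before: seek back and compare the original text exactly.
        seek $chk, $pos{$d}, 0 or die "seek: $!";
        my $first = <$chk>;
        next if $first eq $line;    # confirmed duplicate, drop it
    }
    else {
        $pos{$d} = $offset;         # remember where the first copy lives
    }
    print $line;
}

Invoked as something like perl dedup_verify.pl input.csv > deduped.csv.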
# 28  
Old 04-01-2014
Duplicates appear to be ~3%.

Mike