Honey, I broke awk! (duplicate line removal in 30M line 3.7GB csv file)
Post 302894671 by DGPickett, Wednesday 26 March 2014, 04:07 PM
If I recall correctly, Cygwin on non-64-bit Windows 7 is a 32-bit environment, so the array 'a' may outgrow the address space of the awk process. Depending on RAM size, it may also start to thrash. The awk solution, an in-memory hash map, does not lend itself to parallelism.
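For reference, the awk approach under discussion is presumably the classic hash-map one-liner below (a sketch, assuming an input file called file.csv); every distinct line becomes a key in the array 'a', which is what exhausts a 32-bit address space on a 3.7GB input:
Code:
# Hypothetical reconstruction of the awk hash-map dedup: 'a' keeps one
# entry per distinct line, so memory grows with the number of unique lines.
awk '!a[$0]++' file.csv > deduped.csv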

The classic, robust solution is 'sort -u <file_set>', but it tends to be slower. You can parallelize it with a command of the form:
Code:
sort -mu <( sort -u <file_set_1> ) <( sort -u <file_set_2> ) <( sort -u <file_set_3> ) ...

where ksh or bash turns each '<(...)' into a named pipe whose sort runs concurrently. I like to run twice as many <(sort)'s as there are cores, assuming roughly 50% I/O delay. The final pass of each inner sort feeds the parent 'sort -mu' merge.
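If the inner sorts are GNU sort, you can also give each one a bigger memory buffer, its own threads, and a scratch directory on a fast disk. A sketch, assuming GNU sort and made-up file names:
Code:
# Each inner sort gets a 1 GB buffer, 2 threads, and a fast temp dir;
# the parent 'sort -mu' only merges already-sorted, deduplicated streams.
sort -mu <( sort -u -S 1G --parallel=2 -T /fast/tmp set1.csv ) \
         <( sort -u -S 1G --parallel=2 -T /fast/tmp set2.csv )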

ETL programs like Ab Initio know how to tell parallel processes to split up a big file and process each part separately, even when the file is linefeed-delimited: the workers all agree to seek to N-byte offsets and then scan forward (or backward) for the nearest dividing linefeed. Does anyone know of a utility that can split a file this way, without reading it sequentially? GNU parallel, perhaps? A sketch of two candidates follows.
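Two candidates come to mind; the sketch below uses made-up file and chunk names. GNU coreutils 'split -n l/N' cuts a file into N pieces of roughly equal size without splitting lines, and GNU parallel's --pipepart seeks through a physical file in blocks, adjusts each cut to a record boundary, and hands the block to a command:
Code:
# GNU split: 8 chunks, numeric suffixes, never splitting a line
# (l/8 means "8 pieces, cut only at line boundaries").
split -n l/8 -d big.csv chunk_

# GNU parallel: work through big.csv in ~500 MB blocks cut at linefeeds,
# run 'sort -u' on each block, write each result to a temp file (--files),
# then let the parent 'sort -mu' merge them.  Remove the temp files afterwards.
sort -mu $(parallel -a big.csv --pipepart --block 500M --files "sort -u")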
 
