Shell Programming and Scripting: Honey, I broke awk! (duplicate line removal in 30M line 3.7GB csv file)
Post 302894852 by Chubler_XL on Thursday 27th of March 2014, 04:17:24 PM
Agreed, it would be a very tight squeeze to solve in-memory in a 32-bit environment: with 30M records we would only get about 50 bytes per record to play with. This is why large datasets are usually stored in databases.
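A minimal sketch of the two usual approaches, assuming the goal is simply one copy of each complete line (the file and directory names below are placeholders):

Code:
# order-preserving, but holds every unique line in memory - realistic only on
# a 64-bit system with enough RAM for the unique lines
awk '!seen[$0]++' big.csv > big_dedup.csv

# memory-light: sort(1) spills to temporary files on disk, so it copes with
# files far larger than RAM, at the cost of losing the original line order
sort -u -T /some/big/tmpdir big.csv > big_dedup.csv

If both the original order and low memory use are needed, the usual trick is to number the lines first, sort on the content, keep the first line number of each duplicate group, and re-sort on the numbers at the end.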
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Removal of Duplicate Entries from the file

I have a file which consists of 1000 entries. Out of those 1000 entries I have 500 duplicate entries. I want to remove the first duplicate entry (i.e. the entire line) in the file. An example of the file is shown below: 8244100010143276|MARISOL CARO||MORALES|HSD768|CARR 430 KM 1.7 ... (1 Reply)
Discussion started by: ravi_rn
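Assuming the goal is to keep a single copy of each duplicated line (or of each account number in the first pipe-delimited field), a short awk sketch, with file.txt standing in for the real file name:

Code:
# keep only the first occurrence of each complete line
awk '!seen[$0]++' file.txt > file.dedup

# or treat lines sharing the same first |-separated field (the account number) as duplicates
awk -F'|' '!seen[$1]++' file.txt > file.dedup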

2. Shell Programming and Scripting

Awk not working due to missing new line character at last line of file

Hi, my awk program is failing. Using the command od -c filename I figured out that the last line of the file doesn't end with a newline character. Mine is an automated process, and because of this data is missing. How do I handle this? I want to append a newline character at the end of the last... (2 Replies)
Discussion started by: pinnacle
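Two ways to guarantee the trailing newline before the awk step, assuming a writable directory for the temporary copy (datafile is a placeholder name):

Code:
# awk prints every record followed by a newline, so an incomplete last line gets one added
awk 1 datafile > datafile.tmp && mv datafile.tmp datafile

# or append a newline only when the last byte of the file is not already one
[ -n "$(tail -c1 datafile)" ] && echo >> datafile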

3. Shell Programming and Scripting

awk script to remove duplicate rows in line

I have a long file with more than one ns, www and mx record in each line. I need the first ns record, the first www and the first mx from each line; the records are separated with ";". I am trying this in awk scripting but not getting the solution. ... (4 Replies)
Discussion started by: kiranmosarla
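The description is terse, but if each line holds ;-separated records and only the first ns, first www and first mx entry per line are wanted, something along these lines may be close (the /ns/, /www/ and /mx/ patterns are only guesses at what the records look like, and input.txt is a made-up name):

Code:
awk -F';' '{
    ns = www = mx = ""
    for (i = 1; i <= NF; i++) {
        if (ns  == "" && $i ~ /ns/)  ns  = $i
        if (www == "" && $i ~ /www/) www = $i
        if (mx  == "" && $i ~ /mx/)  mx  = $i
    }
    print ns ";" www ";" mx
}' input.txt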

4. Shell Programming and Scripting

reading a file inside awk and processing line by line

Hi, sorry to multipost. I am opening a new thread because the earlier thread's title was misleading with respect to my current doubt, and I am stuck. list=`cat /u/Test/programs`; psg "ServTest" | awk -v listawk=$list '{ cmd_name=($5 ~ /^/)? $9:$8 for(pgmname in listawk) ... (6 Replies)
Discussion started by: Anteus
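Passing the list through -v hands awk a single string, not an array it can loop over with for (x in ...). One fix is to let awk read the list file itself; a sketch using the names from the post, where the $5 test is only a placeholder because the original pattern did not survive the forum formatting:

Code:
psg "ServTest" | awk -v listfile=/u/Test/programs '
    BEGIN {
        # load the program list into an array, one entry per line
        while ((getline line < listfile) > 0)
            want[line] = 1
        close(listfile)
    }
    {
        cmd_name = ($5 ~ /^\//) ? $9 : $8
        if (cmd_name in want)
            print
    }'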

5. Shell Programming and Scripting

Updating a line in a large csv file, with sed/awk?

I have an extremely large csv file in which I need to search the second field and, upon matches, update the last field... I can pull the line with awk, but apparently you can't use awk to directly update the file? So I'm curious if I can use sed to do this... The good news is the field I want to... (5 Replies)
Discussion started by: trey85stang
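awk cannot edit a file in place, but writing to a temporary file and renaming it amounts to the same thing. A sketch, assuming plain comma-separated fields with no quoted commas (the match and replacement values are made up):

Code:
# where field 2 matches, overwrite the last field; the trailing 1 prints every line
awk -F',' -v OFS=',' '$2 == "MATCHVALUE" { $NF = "NEWVALUE" } 1' big.csv > big.csv.tmp &&
    mv big.csv.tmp big.csv

GNU sed -i can handle a purely textual substitution, but a field-aware edit like this is easier to keep correct in awk.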

6. Shell Programming and Scripting

Read csv file line by line

Folks, I want to read a csv file line by line till the end of file, filter the text in each line and append everything into a variable. The csv file format is: trousers:shirts,price,50 jeans:tshirts,rate,60 pants:blazer,costprice,40 etc. I want to read the first line and get... (6 Replies)
Discussion started by: venu
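A minimal bash sketch of that loop, assuming the format shown above (the variable names are made up and the actual filtering is left as a comment):

Code:
result=""
while IFS= read -r line; do
    # split the line on commas into its parts
    IFS=',' read -r item label value <<< "$line"
    # filter or transform here as needed, then append to the variable
    result="$result $item=$value"
done < input.csv
echo "$result"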

7. Shell Programming and Scripting

awk concatenate every line of a file in a single line

I have several hundred tiny files, each of which needs to be concatenated into one single line, with all of them going into a single file. Some files have several blank lines. I tried to use this script but failed with it: awk 'END { print r } r && !/^/ { print FILENAME, r; r = "" }{ r = r ? r $0 : $0 }' *.txt... (8 Replies)
Discussion started by: sdf
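Assuming the intent is that each input file becomes exactly one space-joined line in a combined output file, with blank lines dropped, a sketch in awk (paste -s -d' ' file does much the same job one file at a time):

Code:
awk '
    FNR == 1 && NR > 1 { print "" }          # start a new output line for each new input file
    NF                 { printf "%s ", $0 }  # append non-blank lines, separated by spaces
    END                { print "" }          # terminate the last line
' *.txt > combined.txt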

8. Shell Programming and Scripting

Duplicate line removal matching some columns only

I'm looking to remove duplicate rows from a CSV file with a twist. The first row is a header. There are 31 columns. I want to remove duplicates when the first 29 columns are identical, ignoring columns 30 and 31, BUT the duplicate that is kept should have the shortest total character length in columns 30... (6 Replies)
Discussion started by: Michael Stora
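A sketch of one way to approach this in awk, assuming a plain comma-separated file with no quoted commas; it holds one row per unique key in memory and does not preserve the original row order (input.csv and output.csv are placeholder names):

Code:
awk -F',' '
    NR == 1 { print; next }                      # pass the header through untouched
    {
        key = $1
        for (i = 2; i <= 29; i++) key = key FS $i
        len = length($30) + length($31)
        if (!(key in best) || len < bestlen[key]) {
            best[key]    = $0                    # keep the row with the shortest columns 30+31
            bestlen[key] = len
        }
    }
    END { for (k in best) print best[k] }
' input.csv > output.csv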

9. UNIX for Dummies Questions & Answers

Using awk to remove duplicate line if field is empty

Hi all, I've got a file that has 12 fields. I've merged 2 files and there will be some duplicates in the following: FILE: 1. ABC, 12345, TEST1, BILLING, GV, 20/10/2012, C, 8, 100, AA, TT, 100 2. ABC, 12345, TEST1, BILLING, GV, 20/10/2012, C, 8, 100, AA, TT, (EMPTY) 3. CDC, 54321, TEST3,... (4 Replies)
Discussion started by: tugar
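A hedged sketch, assuming "duplicate" here means the first 11 fields match and the copy with a non-empty 12th field should be kept; the comma-plus-space separator follows the sample, merged.txt is a made-up name, and the output order is not preserved:

Code:
awk -F', ' '
    {
        key = $1
        for (i = 2; i <= 11; i++) key = key FS $i
        # keep the first row seen for each key, but let a row with a non-empty
        # 12th field replace one whose 12th field was empty
        if (!(key in row) || (last[key] == "" && $12 != "")) {
            row[key]  = $0
            last[key] = $12
        }
    }
    END { for (k in row) print row[k] }
' merged.txt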

10. Shell Programming and Scripting

Printing string from last field of the nth line of file to start (or end) of each line (awk I think)

My file (the output of an experiment) starts off looking like this, _____________________________________________________________ Subjects incorporated to date: 001 Data file started on machine PKSHS260-05CP ********************************************************************** Subject 1,... (9 Replies)
Discussion started by: samonl
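Reading the title as "take the last field of line n and prepend it to every line of the file", a two-pass awk sketch (n=3, the file name and the .tagged suffix are all made up):

Code:
# first pass remembers the last field of line n, second pass prepends it to every line
awk -v n=3 '
    FNR == NR { if (FNR == n) tag = $NF; next }
    { print tag, $0 }
' datafile datafile > datafile.tagged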