Performance problem with removing duplicates in a huge file (50+ GB)
Posted by Kannan K on 01-07-2013
Sample Records

Sample records from file:

Code:
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480000127,A00127,A000127,143245730649,A00127, 
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480000127,A00127,A000127,143245730649,A00127, 
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480042302,A42302,A000127,143245800913,A00127, 
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480041999,A41999,A000127,143245801337,A00127, 
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480000163,A00163,A000163,143245730774,A00163,4133403 
14480042302,A42302,A000127,143245800913,A00127,

Desired Output:
Code:
14480020180,A20180,A020180,143245765381,A00062,17284171796 
14480000127,A00127,A000127,143245730649,A00127, 
14480000163,A00163,A000163,143245730774,A00163,4133403 
14480041999,A41999,A000127,143245801337,A00127, 
14480042302,A42302,A000127,143245800913,A00127,

I should also add that this file contains 40-50% (20-25 GB) duplicate records.
And unfortunately, all columns need to be considered as part of the key to determine duplicates.

The order of the data (sorted/unsorted) in the resultant file doesn't matter. Only the removal of duplicates is essential.
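
Since the order doesn't matter and the whole line is the key, an external merge sort is usually the practical route at this size: the in-memory awk idiom (awk '!seen[$0]++') would have to hold every unique line, roughly 25-30 GB here, in memory. A minimal sketch with GNU sort; the buffer size, scratch directory, and file names are assumptions to adapt:

Code:
# LC_ALL=C: byte-wise comparison instead of locale collation (much faster)
# -u: keep only the first of each run of identical lines
# -S 4G: in-memory sort buffer; raise it if RAM allows
# -T: scratch directory for merge runs; allow roughly the input size in free space
# --parallel: GNU coreutils only; drop it on other sort implementations
LC_ALL=C sort -u -S 4G -T /path/to/scratch --parallel=4 bigfile.csv > bigfile_dedup.csv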
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

removing duplicates from a file

I have a file with some 1000 entries. It will contain entries like 1000,ram 2000,pankaj 1001,rahim 1000,ram 2532,govind 2000,pankaj 3000,venkat 2532,govind. What I want is to extract only the distinct rows from this file, so my output should contain only 1000,ram... (2 Replies)
Discussion started by: trichyselva
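Either of the usual idioms would work here (a sketch; the file name is assumed):

Code:
# keeps the first occurrence of each line, preserving input order
awk '!seen[$0]++' entries.txt

# or, if output order does not matter
sort -u entries.txt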

2. UNIX for Dummies Questions & Answers

removing duplicates of a pattern from a file

Hey all, I need some help. I have a text file with names in it. My target is that if a particular pattern exists in that file more than once, then I want to rename all the occurrences of that pattern with alternate patterns. For e.g., if I have PATTERN occurring 5 times then I want to... (3 Replies)
Discussion started by: ashisharora
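One way to number the occurrences is to walk each line with index()/substr(), so the replacement text can never be re-matched by the next iteration (a sketch; PATTERN and names.txt are placeholders):

Code:
awk '{
    out = ""
    while ((i = index($0, "PATTERN")) > 0) {
        out = out substr($0, 1, i - 1) "PATTERN" (++n)  # PATTERN -> PATTERN1, PATTERN2, ...
        $0 = substr($0, i + 7)                          # 7 = length("PATTERN"); skip past the match
    }
    print out $0
}' names.txt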

3. Shell Programming and Scripting

Removing duplicates from log file?

I have a log file with posts looking like this: -- Messages can be delivered by different systems at different times. The id number is used to sort out duplicate messages. What I need is to strip the arrival time from each post, sort posts by id number, and reattach arrival time to respective... (2 Replies)
Discussion started by: Ilja

4. Shell Programming and Scripting

Removing Duplicates from file

Hi Experts, please check the following new requirement. I got data like the following in a file: FILE_HEADER 01cbbfde7898410| 3477945| home| 1 01cbc275d2c122| 3478234| WORK| 1 01cbbe4362743da| 3496386| Rich Spare| 1 01cbc275d2c122| 3478234| WORK| 1 This is a pipe-separated file with... (3 Replies)
Discussion started by: tinufarid
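A sketch that keeps the header line and drops exact duplicates among the data lines (the file name is assumed):

Code:
# NR == 1 passes the header through; seen[] filters repeated lines
awk 'NR == 1 || !seen[$0]++' datafile.txt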

5. Shell Programming and Scripting

formatting a file and removing duplicates

Hi, I have a file whose format I want to change. It is a large file with one item per row, but I want it to be comma separated (comma then a space). The current file looks like this: HI, Joe, Bob, Jack, Jack Afterwards I would want to remove any duplicates so it would look like this: HI, Joe,... (2 Replies)
Discussion started by: kylle345
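A possible pipeline: de-duplicate while preserving first-occurrence order, then join the lines with a comma and a space (a sketch; the file name is assumed):

Code:
# one item per line in, "HI, Joe, Bob, Jack" out
awk '!seen[$0]++' items.txt | paste -sd ',' - | sed 's/,/, /g'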

6. HP-UX

Performance issue with 'grep' command for huge file size

I have 2 files; one file (say, details.txt) contains the details of employees and another file (say, emp.txt) has some selected employee names. I am extracting employee details from details.txt by using emp.txt and the corresponding code is: while read line do emp_name=`echo $line` grep -e... (7 Replies)
Discussion started by: arb_1984
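The while-read loop launches one grep per employee and re-reads details.txt every time; a single pass with a pattern file is usually far faster. A sketch (-F treats the names as fixed strings rather than regular expressions):

Code:
# one pass over details.txt, matching every name listed in emp.txt
grep -F -f emp.txt details.txt > extracted.txt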

7. UNIX for Dummies Questions & Answers

Removing duplicates from a file

Hi All, I am merging files coming from 2 different systems, and while doing that I am getting duplicate entries in the merged file: I,01,000131,764,2,4.00 I,01,000131,765,2,4.00 I,01,000131,772,2,4.00 I,01,000131,773,2,4.00 I,01,000168,762,2,2.00 I,01,000168,763,2,2.00... (5 Replies)
Discussion started by: Sri3001
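If ordering is not important, the merge and the de-duplication can be one step (a sketch; the input file names are assumed):

Code:
# sort both inputs together and keep each distinct line once
sort -u system1_file.txt system2_file.txt > merged.txt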

8. Shell Programming and Scripting

Removing duplicates from new file

I have two files. I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, unix3 (2 Replies)
Discussion started by: sagar_1986

9. Shell Programming and Scripting

Removing duplicates from new file

I have two files. I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, unix3. I have tried the previous post also, but there the complete line must be similar. In this case I have to verify the first column only, regardless of the content in the succeeding columns. (3 Replies)
Discussion started by: sagar_1986
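A sketch keyed on the first field only, assuming comma-separated input and that "duplicate" means the first field also appears in file1:

Code:
# pass 1 (NR == FNR): remember every first field of file1
# pass 2: print only file2 lines whose first field was not seen
awk -F, 'NR == FNR { seen[$1]; next } !($1 in seen)' file1 file2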

10. Shell Programming and Scripting

Removing White spaces from a huge file

I am trying to remove whitespace from a file containing sample data such as: 457 <EOFD> Mar 1 2007 12:00:00:000AM <EOFD> Mar 31 2007 12:00:00:000AM <EOFD> system <EORD> 458 <EOFD> Mar 1 2007 12:00:00:000AM<EOFD>agf <EOFD> Apr 20 2007 9:10:56:036PM <EOFD> prodiws<EORD> . Basically these... (11 Replies)
Discussion started by: amvip
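A sed sketch that trims spaces and tabs around the two field markers (input and output names are assumed):

Code:
sed 's/[[:blank:]]*<EOFD>[[:blank:]]*/<EOFD>/g; s/[[:blank:]]*<EORD>[[:blank:]]*/<EORD>/g' infile > outfile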