File comparison of huge files
Posted by ctsgnb on 01-06-2012 at 01:38 PM
Unless you have enough RAM to process that volume of data on the fly, it would be advisable to load the data into a DBMS (database management system) for further processing.
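
A minimal sketch of that route, assuming sqlite3 is available and your lines contain no '|' characters (the default .import separator); the database and table names here are purely illustrative:

Code:
# Hypothetical sketch: import each file as a one-column table, then
# let the database report the lines present on only one side.
# Note: EXCEPT also removes duplicates within a side.
sqlite3 huge.db <<'EOF'
CREATE TABLE f1(line TEXT);
CREATE TABLE f2(line TEXT);
.import file1 f1
.import file2 f2
SELECT line FROM f1 EXCEPT SELECT line FROM f2;   -- lines only in file1
SELECT line FROM f2 EXCEPT SELECT line FROM f1;   -- lines only in file2
EOF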

Otherwise: concatenate both files and keep only the lines that appear exactly once, saving them to file3:

Code:
cat file1 file2 | sort | uniq -u >file3    # uniq -u keeps only lines that occur exactly once

(also see sort's -T option, depending on your OS, to point its temporary files at a directory with enough free space)
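
For instance (the scratch directory below is illustrative; LC_ALL=C is optional but makes sort compare raw bytes, which is usually much faster):

Code:
# spill sort's temporary files onto a filesystem with enough free space
LC_ALL=C sort -T /var/tmp file1 file2 | uniq -u > file3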

Then process file3 (untested, but maybe something like this):

Code:
awk '{
    N[$2]++                                      # count occurrences of each signal (field 2)
    if (A[$2] == "")       A[$2] = $1            # first timestamp seen for this signal
    else if (A[$2] !~ $1)  A[$2] = A[$2] "," $1  # append timestamp unless already listed
} END {
    print "signal" OFS "num of mismatch" OFS "time of mismatch"
    for (i in A) print i OFS N[i]/2 OFS A[i]     # /2 : each mismatch yields one line per file
}' OFS="\t" file3
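
Assuming file3 holds lines of the form "<timestamp> <signal>" (the script keys on field 2), the report would look something like this; the values are made up for illustration, and the division by 2 assumes each mismatch contributes one line from each input file:

Code:
# hypothetical file3 content:
#   10:00:01 SIG_A
#   10:00:05 SIG_A
#   10:02:10 SIG_B
# resulting tab-separated report:
#   signal  num of mismatch  time of mismatch
#   SIG_A   1                10:00:01,10:00:05
#   SIG_B   0.5              10:02:10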


For generating file3, you could also try something like:

Code:
mknod /tmp/pipeline p                    # create a named pipe (mkfifo /tmp/pipeline also works)
sort </tmp/pipeline | uniq -u >file3 &   # background reader: keep only lines seen exactly once
cat file1 file2 >/tmp/pipeline           # writer: feed both files through the pipe
wait                                     # let the background sort finish before using file3
rm /tmp/pipeline                         # clean up the pipe
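
Another sketch worth considering, assuming you can spare the disk space for two sorted copies: comm(1) can list the lines unique to each side directly. Note that comm prefixes lines unique to the second file with a tab, and it treats a line repeated within a single file differently from uniq -u:

Code:
sort file1 > file1.sorted
sort file2 > file2.sorted
comm -3 file1.sorted file2.sorted > file3   # -3 suppresses the lines common to both files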


Last edited by ctsgnb; 01-07-2012 at 06:42 AM. Reason: code fixed: uniq -u