File comparison of huge files


 
# 1  
Old 01-06-2012
File comparison of huge files

Hi all,
I hope you are well. I am very happy to see your contribution. I am eager to become part of it.

I have the following question. I have two huge files to compare (almost 3GB each). The files are simulation outputs. The format of the files are as below

For clear picture, please see attached image

File 1 File 2
----------------- -------------------
Time sig_name sig_val Time sig_name sig_val

0ns    sig1       0         0ns    sig1       0
0ns    sig2       0         0ns    sig2       1
0ns    sig3       0         0ns    sig3       1
0ns    sig4       1         0ns    sig4       1
1ns    sig1       0         1ns    sig1       0
1ns    sig2       0         1ns    sig2       0
1ns    sig3       0         1ns    sig3       0
1ns    sig4       0         1ns    sig4       0
2ns    sig1       0         2ns    sig1       1
2ns    sig2       0         2ns    sig2       0
2ns    sig3       0         2ns    sig3       0
2ns    sig4       0         2ns    sig4       0
3ns    sig1       1         3ns    sig1       0
3ns    sig2       0         3ns    sig2       1
3ns    sig3       0         3ns    sig3       0
3ns    sig4       0         3ns    sig4       0

Given the two files in the above format, how can I print out the following table from an "efficient" file comparison? Efficiency is required as the file size is over 3 GB.

Code:
signal   number_of_mismatches   time_of_mismatch
------   --------------------   ----------------
sig1     2                      2ns, 3ns
sig2     2                      0ns, 3ns
sig3     1                      0ns
sig4     0


I shall really appreciate your response.
[Attached image: filecompare.jpg]
# 2  
Old 01-06-2012
Unless you have enough RAM to process such a volume of data on the fly, it would be advisable to load all that data into a DBMS (database management system) for further processing.
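Just to illustrate the idea, here is a rough, untested sketch using sqlite3 (the database file sim.db, the table and column names, and the intermediate .psv files are all made-up names; it assumes each input file starts with a one-line header):

Code:
# strip the header line and rewrite the three columns with a '|' separator for import
tail -n +2 file1 | awk '{print $1 "|" $2 "|" $3}' > f1.psv
tail -n +2 file2 | awk '{print $1 "|" $2 "|" $3}' > f2.psv

sqlite3 sim.db <<'EOF'
CREATE TABLE run1(t TEXT, sig TEXT, val TEXT);
CREATE TABLE run2(t TEXT, sig TEXT, val TEXT);
.separator |
.import f1.psv run1
.import f2.psv run2
CREATE INDEX idx_run2 ON run2(t, sig);      -- speeds up the join on large tables
SELECT r1.sig,
       COUNT(*)           AS mismatches,
       GROUP_CONCAT(r1.t) AS mismatch_times
FROM run1 r1
JOIN run2 r2 ON r1.t = r2.t AND r1.sig = r2.sig
WHERE r1.val <> r2.val
GROUP BY r1.sig;
EOF

(Note that, like the awk approach below, this query only lists signals that have at least one mismatch.)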

Otherwise: filter out the duplicate lines and save the lines that appear only once into file3:

Code:
cat file1 file2 | sort | uniq -u >file3

(Also see the corresponding -T option of the sort command, depending on your OS, for specifying the directory used for temporary files.)
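For example, assuming /bigdisk/tmp is a directory on a filesystem with plenty of free space (the path here is only an illustration):

Code:
mkdir -p /bigdisk/tmp
cat file1 file2 | sort -T /bigdisk/tmp | uniq -u > file3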

Then process file3 (I didn't test it, but maybe something like this):

Code:
awk '{N[$2]++;A[$2]=(A[$2]=="")?$1:((A[$2]~$1)?(A[$2]):(A[$2]","$1))}END{print "signal" OFS "num of mismatch" OFS "time of mismatch";for(i in A) {print i OFS N[i]/2 OFS A[i]}}' OFS="\t" file3

---------- Post updated at 07:38 PM ---------- Previous update was at 07:27 PM ----------

Regarding the generation of file3, you could also give something like this a try:

Code:
mknod /tmp/pipeline p                      # create a named pipe
sort </tmp/pipeline | uniq -u >file3 &     # reader: sort and keep only non-duplicated lines
cat file1 file2 >/tmp/pipeline             # writer: feed both files into the pipe


Last edited by ctsgnb; 01-07-2012 at 06:42 AM.. Reason: Code fixed : uniq -u !!!!
This User Gave Thanks to ctsgnb For This Post:
# 3  
Old 01-06-2012
Thanks ctsgnb.


Could you please explain why you need to sort?

Secondly, could you please explain the details of the awk code?

Thirdly, please explain why you use mknod?

I shall really appreciate it.

Kind Regards,

---------- Post updated at 03:35 PM ---------- Previous update was at 03:01 PM ----------

It produces the WRONG output, as follows. Note that sig3 has 2.5 as its number of mismatches, which makes no sense:

Code:
signal   num of mismatch   time of mismatch
sig1     3                 0ns,1ns,2ns,3ns
sig2     3                 0ns,1ns,2ns,3ns
sig3     2.5               0ns,1ns,2ns,3ns
sig4     2                 0ns,1ns,2ns,3ns

Instead, the following (correct) output should be produced:

Code:
signal   number_of_mismatches   time_of_mismatch
------   --------------------   ----------------
sig1     2                      2ns, 3ns
sig2     2                      0ns, 3ns
sig3     1                      0ns
sig4     0

---------- Post updated at 03:56 PM ---------- Previous update was at 03:35 PM ----------

/********************* IT WORKS AS FOLLOWS *************************/

Here is what works. ctsgnb, your awk script is awesome; it works perfectly. Here is what you need to modify:

Code:
cat file1 file2 | sort | uniq -u > file3.txt

Then run your awk script on file3.txt.


Thanks a lot.
# 4  
Old 01-06-2012
Quote:
Originally Posted by kaaliakahn
Thanks ctsgnb.


Could you please explain why you need to sort?
It's the simplest and most reliable way to tell whether a line's a duplicate or not. sort is actually pretty sophisticated, capable of handling extremely huge files without bogging down. Once it's done so, any duplicate lines will show up all in a row, letting you use uniq to get rid of them.
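A tiny demonstration of why the sort matters (uniq only compares adjacent lines):

Code:
printf '%s\n' a b a | uniq -u          # prints a, b, a : the two a's are not adjacent, so both survive
printf '%s\n' a b a | sort | uniq -u   # prints only b  : once sorted, the duplicate a's meet and are removed
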
Quote:
Thirdly, please explain why you use mknod?
He's actually making a named pipe. On many systems you can just use mkfifo for that.

A named pipe lets you do this:

Code:
mkfifo fifo
cat file > fifo &
cat fifo # the first cat writes to the fifo, the second cat reads from it.
# This is effectively cat file | cat.
# Named pipes can be a handy way to join together programs which
# read from filenames and can't easily handle stdin/stdout communication.

This User Gave Thanks to Corona688 For This Post:
# 5  
Old 01-07-2012
Quote:
Originally Posted by kaaliakahn
Thanks ctsgnb.


1) Could you please explain why you need to sort?

2) Secondly, could you please explain the details of the awk code?

3) Thirdly, please explain why you use mknod?

I shall really appreciate it.

Kind Regards,

[...]
-
/********************* IT WORKS AS FOLLOWS *************************/

Here is what works. ctsgnb, your awk script is awesome; it works perfectly. Here is what you need to modify:

cat file1 file2 | sort | uniq -u > file3.txt

Then run your awk script on file3.txt.


Thanks a lot.
Hi

Yes, you are right, I made a little mistake:

sort -u would sort the file and, if there are duplicate lines, display them only once, instead of displaying ONLY the lines that DO NOT have a duplicate (which is what uniq -u does).

Note that the uniq command performs its comparisons sequentially (it only looks at adjacent lines), so its input needs to be sorted first.

That is why the sort is necessary before extracting the lines that appear only once.

Code:
cat file1 file2 | sort | uniq -u >file3
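To illustrate the difference on a few sample lines (one mismatching signal and one matching signal at 0ns):

Code:
$ printf '0ns sig2 0\n0ns sig2 1\n0ns sig4 1\n0ns sig4 1\n' | sort -u
0ns sig2 0
0ns sig2 1
0ns sig4 1        <- the identical sig4 lines are merged into one but still kept
$ printf '0ns sig2 0\n0ns sig2 1\n0ns sig4 1\n0ns sig4 1\n' | sort | uniq -u
0ns sig2 0
0ns sig2 1        <- only the lines without a duplicate (the real mismatches) remain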

Then, as I already specified in my previous post, the awk code needs to be applied to file3. Here is the requested information about the awk code:

awk '{
    Beginning of the awk code.

N[$2]++
    Same as N[$2]=N[$2]+1: this builds an array N indexed by column 2 ($2, the signal name); the element for a given signal is incremented every time that signal is met.

x = (condition) ? value_if_true : value_if_false
    Once you have understood this ternary syntax, you can break the next statement into smaller pieces...

A[$2]=(A[$2]=="")?$1
    If A[$2] is empty, assign $1 (the time) to it...

:((A[$2]~$1)?(A[$2]):(A[$2]","$1))
    ...if A[$2] is not empty, concatenate $1 to A[$2], but only if $1 is not already present in A[$2]. In fact, here we are building the "0ns, 2ns, ..." part of the record for each $2 (for each sig).

}END{
    Once the scanning of file3 is finished...

print "signal" OFS "num of mismatch" OFS "time of mismatch"
    ...we print the header line.

for(i in A) {
    Then, for every signal found,

print i OFS N[i]/2 OFS A[i]
    we display the expected output. Note that N[i] is divided by 2, since every mismatch produces 2 entries in file3 (one coming from file1 and one coming from file2).

} }' OFS="\t" file3
    End of the awk code. Here the output field separator has been set to a tab, but you can adapt it to your needs.
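For readability, the same one-liner can also be laid out as a multi-line script (this is just the code above reformatted with comments; like the one-liner, it does not print signals that have zero mismatches, such as sig4):

Code:
awk '
{
    # $1 = time, $2 = signal name, $3 = value
    N[$2]++                            # count the mismatching lines seen for this signal
    if (A[$2] == "")
        A[$2] = $1                     # first mismatch time for this signal
    else if (A[$2] !~ $1)
        A[$2] = A[$2] "," $1           # append the time if it is not already listed
}
END {
    print "signal" OFS "num of mismatch" OFS "time of mismatch"
    for (i in A)
        print i OFS N[i] / 2 OFS A[i]  # /2 : each mismatch yields one line from file1 and one from file2
}' OFS="\t" file3
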
This User Gave Thanks to ctsgnb For This Post:
# 6  
Old 01-07-2012
Thanks a million ctsgnb and all others.

I am having problems with larger files. I am able to get this working on 3 GB files, but when the file size is 11 GB, I get this message:

Code:
sort: write failed: /tmp/sortZgF9MD: No space left on device

despite the fact that I have plenty of free hard disk space.

Is there any solution to this?

Please suggest.



Really appreciate your help

Kind Regards,
kaaliakahn

Last edited by kaaliakahn; 01-07-2012 at 05:50 PM..
# 7  
Old 01-07-2012
I think ctsgnb already mentioned this in his post, but check whether the sort on your OS supports the "-T" option.
Also post the output from:

Code:
df -k /tmp

It looks like you are running out of space in /tmp. So basically there are 2 options: see if your sort can use -T to specify an alternate location for temporary files, or split your file into smaller pieces and do the sort on the smaller split files.
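A rough sketch of the split-and-sort approach (the chunk size and the /bigdisk paths are only examples; sort -m then merges the already-sorted pieces):

Code:
mkdir -p /bigdisk/tmp
# cut the combined input into chunks, sort each chunk, then merge the sorted chunks
cat file1 file2 | split -l 10000000 - /bigdisk/chunk.
for c in /bigdisk/chunk.??; do
    sort -T /bigdisk/tmp "$c" > "$c.sorted" && rm "$c"
done
sort -m -T /bigdisk/tmp /bigdisk/chunk.*.sorted | uniq -u > file3
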
This User Gave Thanks to dude2cool For This Post:
 