Perl: Need help comparing huge files


 
# 1  
Old 07-12-2012
Perl: Need help comparing huge files

What do I need to do to have the Perl program below load two 205-million-record files into its hash? It currently works on smaller files, but not on huge ones. Any idea what I need to modify to make it work with huge files?

Code:
#!/usr/bin/perl
use strict;
use warnings;
#----------------
# Hash Definition
#----------------
my %HashArray;
my @file1Line;
my @file2Line;
#--------------------
# Subroutine
#--------------------
sub comp_file {
  my ($FILE1, $FILE2, $OT1, $OT2) = @_;

  # Load file 1 into the hash, one line at a time
  open(my $in1, '<', $FILE1) or die "Can't open file $FILE1: $!";
  while (my $FP1 = <$in1>) {
    chomp($FP1);
    my ($k, $l) = split(/\s+/, $FP1);
    $l = '' unless defined $l;    # a line may hold only an id
    push @{ $HashArray{file1}{$k} }, $l;
  }
  close($in1);

  # Load file 2 into the hash
  open(my $in2, '<', $FILE2) or die "Can't open file $FILE2: $!";
  while (my $FP2 = <$in2>) {
    chomp($FP2);
    my ($k, $l) = split(/\s+/, $FP2);
    $l = '' unless defined $l;
    push @{ $HashArray{file2}{$k} }, $l;
  }
  close($in2);

  # Keys in file 1 but not in file 2
  foreach my $key (keys %{ $HashArray{file1} }) {
    if (!exists $HashArray{file2}{$key}) {
      foreach my $last (@{ $HashArray{file1}{$key} }) {
        push(@file1Line, "$key$last");
      }
    }
  }
  open(my $out1, '>', $OT1) or die "Can't open file $OT1: $!";
  print $out1 "$_\n" for (sort @file1Line);
  close($out1);

  # Keys in file 2 but not in file 1
  foreach my $key (keys %{ $HashArray{file2} }) {
    if (!exists $HashArray{file1}{$key}) {
      foreach my $last (@{ $HashArray{file2}{$key} }) {
        push(@file2Line, "$key$last");
      }
    }
  }
  open(my $out2, '>', $OT2) or die "Can't open file $OT2: $!";
  print $out2 "$_\n" for (sort @file2Line);
  close($out2);
}
############MAIN#########################################
# Pre-check Condition
# if the input doesn't contain four(4) files, print usage
# USAGE: hash2files.pl FILE1 FILE2 FILE3 FILE4
#########################################################

if ($#ARGV != 3) {
  print "USAGE: $0 <FILE1> <FILE2> <FILE3> <FILE4>\n";
  exit;
}
else {
  my ($FILE1, $FILE2, $OT1, $OT2) = @ARGV;
  comp_file($FILE1, $FILE2, $OT1, $OT2);
}


# 2  
Old 07-12-2012
What exactly does your program do? Show a sample of input and output.
# 3  
Old 07-12-2012
Basically to run it: hash2files.pl inputfile1 inputfile2 outputfile1 outputfile2

Inputfile1 contains numeric IDs:
Code:
1233
2345
3456
4444
7777

It is compared against Inputfile2, which also contains IDs:
Code:
1244
2345
3456
9898
9999

Outputfile1 will contain all the IDs in Inputfile1 that are not found in Inputfile2.
In this case the result would be:
Code:
1233
4444
7777

Outputfile2 will have all the IDs in Inputfile2 that are not found in Inputfile1. In this case:
Code:
1244
9898
9999

It works really well with average-size files, but it cannot handle loading the two huge input files into the hash in memory: it stops after a while without any error messages and without producing the results. It basically just terminates.

How can I make this work for huge files? Inputfile1 is about 204 million records, and Inputfile2 has almost the same number. I know the script needs to be modified to load only one of the files (say Inputfile2) into the hash instead of both, then compare by reading Inputfile1 one line at a time: if the ID is found in the hash, delete it from the hash, since we do not care about the matched ones at this point. What should remain in the hash is all the unmatched IDs, which get written to a file. But I do not know how to do that!

I hope this helps explain my issue.

# 4  
Old 07-12-2012
Hi mrn6430,

Value 1233 isn't found in inputfile2, and similarly 1244 isn't found in inputfile1, yet they don't appear in your expected output. Did you forget them, or did I miss something?
# 5  
Old 07-13-2012
Quote:
Originally Posted by birei
Hi mrn6430,

Value 1233 isn't found in inputfile2, and similarly 1244 isn't found in inputfile1, yet they don't appear in your expected output. Did you forget them, or did I miss something?


Yes. I updated my reply to include it. But beyond that, I need a way to deal with such huge files; that is the main issue. Thanks
# 6  
Old 07-13-2012
Try:
Code:
$ cat inputfile1
1233
2345
3456
4444
7777
$ cat inputfile2
1244
2345
3456
9898
9999
$ cat script.pl
use warnings;
use strict;

my (%hash);

die qq|Usage: $0 <inputfile-1> <inputfile-2> <outputfile-1> <outputfile-2>\n| 
        unless @ARGV == 4;

open my $ifh1, q|<|, shift or die;
open my $ifh2, q|<|, shift or die;
open my $ofh1, q|>|, shift or die;
open my $ofh2, q|>|, shift or die;

while ( <$ifh1> ) {
        chomp;
        $hash{ $_ } = 1;
}

while ( <$ifh2> ) {
        chomp;
        if ( exists $hash{ $_ } ) {
                delete $hash{ $_ };
                next;
        }

        printf $ofh2 qq|%d\n|, $_;
}

for ( sort { $a <=> $b } keys %hash ) {
        printf $ofh1 qq|%d\n|, $_;
}
$ perl script.pl inputfile1 inputfile2 outputfile1 outputfile2
$ cat outputfile1
1233
4444
7777
$ cat outputfile2
1244
9898
9999
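
If even one file's worth of IDs turns out not to fit in memory, you can avoid the hash entirely with an external sort: sort(1) spills to temporary files on disk, and comm(1) then reports the lines unique to each sorted file. A rough sketch, not benchmarked at this scale (the file names are the sample ones above):

```shell
# Sort both ID lists; sort(1) uses temp files on disk, so RAM is not a limit.
# LC_ALL=C forces byte-wise ordering, which is what comm(1) expects.
LC_ALL=C sort inputfile1 > f1.sorted
LC_ALL=C sort inputfile2 > f2.sorted

# comm -23: lines only in f1.sorted; comm -13: lines only in f2.sorted
LC_ALL=C comm -23 f1.sorted f2.sorted > outputfile1
LC_ALL=C comm -13 f1.sorted f2.sorted > outputfile2
```

On the sample data above this puts 1233, 4444, 7777 in outputfile1 and 1244, 9898, 9999 in outputfile2, and it scales to hundreds of millions of lines because nothing is held in memory.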

# 7  
Old 07-13-2012
Thank you so much. I will test it. Do you know if there is any limit on how many records can be loaded into a hash in Perl? I have 205 million records to load.

Thanks