Matching 10 Million file records with 10 Million in other file


 
# 1  
Old 06-12-2012
Matching 10 Million file records with 10 Million in other file

Dear All,

I have two files, both containing 10 million records each, in a delimited, CSV-like format.
One file is input.txt, the other is status.txt.

Input.txt -> contains several fields, one of which is a unique id (a primary key, so to speak)
Status.txt -> contains two fields only: 1. unique id and 2. status

Problem: match the id from input.txt against the id from status.txt and update/log the status accordingly in an output file.

Requirement: an efficient algorithm that produces the result in minimal time. I tried Perl, but the system hangs during processing. Please suggest a workable way to do this. Is it doable in Perl or C/C++/Java?

Thanks.
# 2  
Old 06-12-2012
What Operating System and version are you running?

How big are the files?
Are either or both of the files in sorted order?
Do the records in each file match one-for-one?
Does the output order matter?

Can you post sample input and matching output?

Is this an extract from a database where it might be easier to work on the data while it is still in the database?
# 3  
Old 06-12-2012
Additional questions:

Do you need to run this match frequently or is it a once-off job?
How frequently are the data files updated?
Are new records just appended to the files, or are they completely re-written?
# 4  
Old 06-13-2012
The OS is Linux, and it's a one-time (occasional) job. These are offline files and are not being updated. I need to set up a process for future requirements.

It's not in a DB; these are actually application log files.
The files are approx. 1.5 GB in size. Right now I'm only thinking about the best way/approach to complete the task...
I had tried using Perl hashes (didn't work); I guess keeping that much data in memory is not possible, hence the algorithm has to be really efficient here.


Sample files:
Input.txt
Code:
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059192;processed;Location;contact;status;email_id;2
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059168;processed;Location;contact;status;email_id;1
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059220;processed;Location;contact;status;email_id;2

Status.txt
Code:
APPNAME@HOSTNAME06:11496059192;SUCCESS
APPNAME@HOSTNAME06:11496059224;SUCCESS
APPNAME@HOSTNAME06:11496059168;FAILURE
APPNAME@HOSTNAME06:11496059220;FAILURE
APPNAME@HOSTNAME06:11496059193;SUCCESS

I need to update the status field in input.txt with the status (SUCCESS/FAILURE) from status.txt.
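So, for the first sample line above, the matched output should look something like this (the "status" field replaced with the SUCCESS/FAILURE value from status.txt):

Code:
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059192;processed;Location;contact;SUCCESS;email_id;2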

Last edited by Franklin52; 06-13-2012 at 08:47 AM.. Reason: Please use code tags for data and code samples
# 5  
Old 06-13-2012
Any comment about the order of the data in the files, and whether there is a one-for-one match between the two files (in which case the paste command might be suitable)?

Edit: Posts crossed. I can see that neither file is in any particular order and that your sample does not show a one-for-one match.

It's going to be necessary to sort both files. Does the order of the final output data matter?
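If the final order doesn't matter, a rough (untested) sketch of that approach, assuming GNU sort and join and the field layout from your samples, is to sort both files on the id and then join them:

Code:
export LC_ALL=C                               # plain byte-order collation so sort and join agree
sort -t';' -k3,3 input.txt  > input.sorted    # id is the 3rd ';'-separated field
sort -t';' -k1,1 status.txt > status.sorted   # id is the 1st ';'-separated field
# join prints: id, remaining input.txt fields, then SUCCESS/FAILURE;
# the awk puts the fields back into the original layout with the status field replaced
join -t';' -1 3 -2 1 input.sorted status.sorted |
awk -F';' -v OFS=';' '{print $2,$3,$1,$4,$5,$6,$10,$8,$9}' > output.txt

For files this size you may need to point sort at a temp directory with enough free space (-T /some/dir). Note that only ids present in both files appear in the output.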

Last edited by methyl; 06-13-2012 at 08:35 AM..
# 6  
Old 06-13-2012
Quote:
Originally Posted by methyl
Any comment about the order of the data in the files, and whether there is a one-for-one match between the two files (in which case the paste command might be suitable)?

Edit: Posts crossed. I can see that neither file is in any particular order and that your sample does not show a one-for-one match.

It's going to be necessary to sort both files. Does the order of the final output data matter?
There is a one-for-one match in the sense that if an id from input.txt is found in status.txt, there is only one match; but it's not guaranteed that every id from input.txt is found in status.txt.
No, the order doesn't matter here.
# 7  
Old 06-13-2012
Quote:
Originally Posted by vguleria
Code:
APPNAME@HOSTNAME06:11496059192;SUCCESS
APPNAME@HOSTNAME06:11496059224;SUCCESS

Are these numbers (:11496059224;) unique identifiers, or can there be two or more lines with the same number? If they're unique identifiers, I think you could try something like this.

First of all, if status.txt is too big, let's split it into many "tiny" files (with 10 million lines and -l 100000 that's about 100 chunks, named tinyfileaa, tinyfileab, ...):

Code:
split -l 100000 status.txt tinyfile

Then, here we go:

Code:
IFS=";:"
declare -a status
for file in tinyfile*; do
  while read -r x y z; do
     status[$y]=$z
     done < $file
  while read a b c d e f g h i l; do
     h=${status[$d]}
     [[ $h = "" ]] || printf '%s;%s;%s:%s;%s;%s;%s;%s;%s;%s\n' "$a" "$b" "$c" "$d" "$e" "$f" "$g" "$h" "$i" "$l"
     done  < input.txt >> output.txt
  unset status
  done

I haven't tried it, so I don't know how fast or slow it can be.
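If the whole of status.txt fits in memory (10 million keys is very roughly 1-2 GB in an awk hash, so it depends on the box), a single-pass awk version might be simpler. Again just a sketch, untested:

Code:
awk -F';' -v OFS=';' '
    NR == FNR { s[$1] = $2; next }   # first file (status.txt): id -> SUCCESS/FAILURE
    $3 in s   { $7 = s[$3]; print }  # second file (input.txt): replace the status field, print matches
' status.txt input.txt > output.txt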