Matching 10 million file records with 10 million records in another file
Dear All,
I have two files, each containing 10 million records separated by commas (CSV format).
One file is input.txt; the other is status.txt.
input.txt -> contains several fields, one of which is a unique id field (a primary key, you could say)
status.txt -> contains two fields only: 1. unique id and 2. status
Problem: match the id from input.txt against the id from status.txt and update/log the status accordingly in an output file.
Requirement: an efficient algorithm that finishes in minimal time. I tried Perl, but the system hangs during processing. Please suggest a workable way to do this. Is it doable in Perl, or in C/C++/Java?
Do you need to run this match frequently, or is it a one-off job?
How frequently are the data files updated?
Are new records just appended to the files, or are they completely re-written?
The OS is Linux, and it's a one-time job (run occasionally). These are offline files that are not being updated. I need to build a process for future requirements.
It's not in a DB; these are actually application log files.
Each file is approximately 1.5 GB. Right now I'm just thinking about the best way/approach to complete the task.
I had tried using Perl hashes, which didn't work; I guess keeping that much data in memory is not possible, hence the algorithm has to be really efficient here.
Sample files:
input.txt
status.txt
I need to update the status field in input.txt with the status (success/failure) from status.txt.
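Since only a status needs to be appended, one lightweight direction (a sketch, not tested at this scale): load status.txt, which holds just id,status pairs, into an awk associative array and stream input.txt through it in a single pass. This assumes the id is the first comma-separated field in both files and that the ~10 million pairs fit in RAM; if they don't, see the sort-based approach further down. "NOTFOUND" is just an illustrative placeholder for ids with no match.
Code:
awk -F, '
    NR == FNR { status[$1] = $2; next }   # first file: build id -> status table
    { print $0 "," (($1 in status) ? status[$1] : "NOTFOUND") }
' status.txt input.txt > output.txt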
Any comment about the order of the data in the files, and whether there is a one-for-one match between the two files (in which case the paste command might be suitable)?
Edit: Posts crossed. I can see that neither file is in any particular order and that your sample does not show a one-for-one match.
It's going to be necessary to sort both files. Does the order of the final output data matter?
There is at most a one-for-one match: the id from input.txt is not always found in status.txt, but if it is found, there is only one match.
No, the order doesn't matter here.
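Given that order doesn't matter, a disk-based sort-merge avoids holding anything large in memory. A sketch, assuming the id is the first comma-separated field of both files; the -o auto and -e options of join are GNU coreutils extensions:
Code:
sort -t, -k1,1 -o input.sorted input.txt     # sort(1) spills to disk, so file size isn't a problem
sort -t, -k1,1 -o status.sorted status.txt
# -a 1 keeps input records with no match; -e fills the missing status column
join -t, -a 1 -e NOTFOUND -o auto input.sorted status.sorted > output.txt
With two 1.5 GB files the run time is typically dominated by the sorts, which merge externally on disk rather than in RAM.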
Are these :11496059224; numbers unique identifiers? Or can there be two or more lines with the same number? If they're unique identifiers, I think you could try something like this.
First of all, if status.txt is too big, let's split it into many "tiny" files.
Then, here we go:
I haven't tried it, so I don't know how fast or slow it can be.
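For reference, a sketch of what such a split-and-match could look like (the chunk size and file names here are illustrative, and GNU split and awk are assumed; only matched lines are emitted):
Code:
# split status.txt into ~1M-line chunks so each chunk's id table stays small
split -l 1000000 status.txt status.part.
for part in status.part.*; do
    # load one chunk's id -> status pairs, then scan input.txt for matches
    awk -F, 'NR == FNR { st[$1] = $2; next }
             ($1 in st) { print $0 "," st[$1] }' "$part" input.txt
done > matched.txt
Note that this rescans input.txt once per chunk (ten passes for ten chunks), which is the main reason it can be slow compared with a single sort-merge.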