06-13-2010
Join Date: Feb 2008
Posts: 26
Another idea for the same solution
Hi,
Thanks for the solution. We had also come up with an approach for comparing the huge data.
Since we are comparing a huge number of flat-file records, the following can be done:
A hash function can be applied, as you mentioned, to each row of the flat file, which makes the comparison easier.
But is there a utility hash function in Unix, similar to ORA_HASH in Oracle, that would map each new row to a short, effectively unique code of a few characters or digits?
Then we could compare only those hash codes against the hash codes of the previous day's file, which would make the processing faster too.
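Unix does ship per-file digest utilities (md5sum, sha1sum, cksum), though they would need to be applied per row. A minimal sketch of the idea above in Python (the function names and the use of MD5 in place of ORA_HASH are my assumptions, not part of the thread):

```python
import hashlib

def row_hashes(lines):
    # Map each row to a short hex digest (playing the role of ORA_HASH).
    # MD5 is fine here: we need uniqueness for comparison, not security.
    return {hashlib.md5(line.encode()).hexdigest(): line for line in lines}

def compare(old_lines, new_lines):
    # Compare only the hash codes of the two days' files;
    # recover the actual rows that were added or removed.
    old = row_hashes(old_lines)
    new = row_hashes(new_lines)
    added = [new[h] for h in new.keys() - old.keys()]
    removed = [old[h] for h in old.keys() - new.keys()]
    return added, removed
```

On the command line, hashing each row and then running diff or comm on the sorted hash files achieves the same effect.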