Thank you dazdseg. With this I can find the unique/matching records in both files. Is there a way I can move the difference/unmatched records into a new file?
Sorry to be a wet blanket, but neither the grep nor the uniq approach will fulfill the requirement, even if the data were in sorted order (which it isn't).
1) Do both files have exactly the same number of records and are you just looking for records which have changed? Does the order of the output into file3 matter?
2) If there can be more or less records in file2 than file1, does the order of the output into file3 matter?
Are you also interested in records which exist in file1 but do not exist in file2?
3) What percentage of differences do you expect? (This is really a performance question because some approaches would involve multiple lookups).
4) If this proves too difficult for shell programming, do you have a mainstream database engine?
---------- Post updated at 15:05 ---------- Previous update was at 14:20 ----------
One shell approach if the order of the output does not matter.
Tried with two files of approximately 5 million records and 500 MB each. Took about 5 minutes to run, and the output only shows the mismatched records from file2. Actual performance will depend on how fast your computer is and how much memory you can give to sort.
When sorting large files be sure to set $TMPDIR to somewhere with enough space for at least twice the size of the file being sorted.
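The script itself didn't survive the archive, but a minimal sketch of this kind of sort-based compare might look like the following (file names file1/file2/file3 and the tiny sample data are assumptions for illustration; the real files are the multi-million-record ones discussed above):

```shell
# Tiny sample data standing in for the real multi-million-record files
printf 'rec1\nrec2\nrec3\n' > file1
printf 'rec2\nrec3\nrec4\n' > file2

# Point sort's work area at somewhere with enough free space
export TMPDIR=${TMPDIR:-/tmp}

# Sort both files, then keep only the lines unique to file2
# (comm col 1 = only in file1, col 2 = only in file2, col 3 = common;
#  -13 suppresses columns 1 and 3)
sort file1 > file1.sorted
sort file2 > file2.sorted
comm -13 file1.sorted file2.sorted > file3

cat file3   # -> rec4
```

Note that the output comes out in sorted order, not the files' original order.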
Thanks for your time on this; it's much appreciated.
1) Do both files have exactly the same number of records and are you just looking for records which have changed? Does the order of the output into file3 matter?
File1 has 1803077 records
File2 has 1795370 records
2) If there can be more or less records in file2 than file1, does the order of the output into file3 matter?
I would prefer the 1st row in file3 to come from file1, the 2nd from file2, and so on.
Are you also interested in records which exist in file1 but do not exist in file2?
Yes, and vice versa as well. It would be good if we could copy the records to different files, say recordsonlyonfile1.txt and recordsonlyonfile2.txt.
3) What percentage of differences do you expect? (This is really a performance question because some approaches would involve multiple lookups).
There are huge changes in the file; it could be over 50%.
4) If this proves too difficult for shell programming, do you have a mainstream database engine?
I have an Informix database, but I am not sure whether it would help, as there is no unique key in the records.
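For the "and vice versa" part, one hedged sketch using sort and comm (exact whole-line comparison; output in sorted rather than original order, so it only partly meets the ordering preference above; the sample data is illustrative):

```shell
# Sample stand-ins for the real 1.8-million-record files
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2

sort file1 > file1.sorted
sort file2 > file2.sorted

# Records present only in file1, and records present only in file2
comm -23 file1.sorted file2.sorted > recordsonlyonfile1.txt
comm -13 file1.sorted file2.sorted > recordsonlyonfile2.txt
```

With a 50%+ difference rate, both output files will be large, so the same $TMPDIR space advice applies.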
The database approach is winning because of the requirement to retain the original random order and to get enough speed to check each record in turn. Can't see an obvious way to keep any other form of compare in step when there is such a high volume of differences.
1) Load each external flat file into a separate table, using the whole record as the key.
2) Re-read each external flat file in turn checking the result of a seek of each record in the opposing file, outputting non-matches to a respective further external file.
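The same check-each-record-in-turn logic can also be sketched without a database, using awk associative arrays as the "table", assuming one whole file's records fit in memory as keys (duplicate records collapse to a single key, just as they would in a table keyed on the whole record). The original random order is preserved because each source file is re-read sequentially:

```shell
# Illustrative sample data
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2

# NR==FNR is true only while reading the first named file: load its
# records as keys, then print every record of the second file that
# has no exact match, in the second file's original order.
awk 'NR==FNR { seen[$0]; next } !($0 in seen)' file2 file1 > recordsonlyonfile1.txt
awk 'NR==FNR { seen[$0]; next } !($0 in seen)' file1 file2 > recordsonlyonfile2.txt
```

Whether this beats the Informix route depends on available memory; holding ~1.8 million whole records as hash keys is feasible on most modern machines.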