Using an awk script to identify dupes in two files


 
# 1  
Old 02-22-2011
Using an awk script to identify dupes in two files

Hello,

I have two files. File 1, the master file, contains two columns separated by a delimiter:
Code:
a=b
b=d
e=f
g=h

File 2, which is the file to be processed, has only a single column:
Code:
a
h
c
b

What I need is an awk script to identify the names in file 2 that are not found in the master file.

The desired output would be:
Code:
h
c

The master file is huge, around 150,000 lines.
The script I had written, which did not work satisfactorily, is below:

# Used to match items present in two text databases
Code:
BEGIN {FS="="}
{
# Master file (ARGIND is a gawk extension: the index of the current input file)
if (ARGIND==1) {count[$1]=$2}
# Second file
if (ARGIND==2) {count2[$1]=$0}
}
END {
    for (i in count) {
        # To show matching items (commented out):
        #if (count2[i] != "") {print count2[i]}
        # To show items that do not match the master database:
        if (count2[i] == "") {print count2[i]}
    }
}

It does show the uniques but also spews out the non-unique words.
Any help would be most appreciated. Please help; my dictionary work is halted until this is solved.

Many thanks in advance,

GIMLEY

# 2  
Old 02-22-2011
Code:
awk -F= 'NR==FNR{O[$1]++;next} !($1 in O)' file1 file2
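
For readers following along, the one-liner written out long-hand looks like this; it is the same logic, just expanded with comments:
Code:
# While the first file is being read, NR (total records so far)
# equals FNR (records read from the current file), so this block
# only fires for file1.
NR == FNR {
    O[$1]++      # remember every key from the master file
    next         # skip the test below for master-file lines
}
# For file2: print the line only if its first field was never
# seen in the master file (-F= makes "=" the field separator).
!($1 in O)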

# 3  
Old 02-22-2011
Is file 2 also huge?


Quote:
Originally Posted by Chubler_XL
Code:
awk -F= 'NR==FNR{O[$1]++;next} !($1 in O)' file1 file2

file1 is big, so the O array here will be a really big array.
# 4  
Old 02-22-2011
I tested with a file of 3.8 million records, each key 12 characters long, and it took 11 seconds to run on a ThinkPad laptop:

Code:
$ head file1.txt
32129329086
48625634304
22144404670
68186949150
73047198101
75779955958
49331642369
02427749207
60560456186
38039769462

$ wc -l file1.txt
3870483 file1.txt
 
$ time awk -F= 'NR==FNR{O[$1]++;next} !($1 in O)' file1.txt file2
a
h
c
b
real    0m11.093s
user    0m9.827s
sys     0m0.171s
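
For anyone who wants to reproduce a test at this scale, a file of random numeric keys can be generated with a short awk one-off. The record count and key width below are arbitrary choices for illustration, not the exact data used above:
Code:
# Generate ~3.8 million pseudo-random 11-digit keys.
# Two smaller random ints are concatenated so the values stay
# within 32-bit printf limits on older awks.
awk 'BEGIN {
    srand()
    for (i = 0; i < 3870483; i++)
        printf "%05d%06d\n", int(rand() * 100000), int(rand() * 1000000)
}' > file1.txt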

# 5  
Old 02-23-2011
Hello,
Many thanks for the solution and also the prompt reply.
The script worked beautifully with the sample data, but with real-world data it did not. I should have mentioned that the right-hand side of the master file is in upper ASCII, or possibly in a Unicode code page other than Latin 1:
Code:
aabad=¥ÊÚÄ
aabadaabu=¥ÊÚÄÚÊÞ
aabadabu=¥ÊÚÄÚÊÞ
aabbey=¥ÊèÊá
aabedali=¥ÊáĤÑÜ
aabedin=¥ÊáÄÛÆ
aabeed=¥ÊÜÄ
aabel=¥ÊáÑ
aabelia=¥ÊáÑÛÍÚ
aabelin=¥ÊáÑÜÆ

The "slave" file, on the other hand, contains only lower ASCII:
Code:
kishor
aabaadabu
aabhar

The output should have been:
Code:
kishor
aabhar

Any solutions please,

Best regards and sorry for the hassle,

Gimley

# 6  
Old 02-23-2011
awk will work if you adjust the locale settings. What does
Code:
locale
locale -a

show right now? locale gives the current setting; locale -a lists the available ones.
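
If changing the system locale is not convenient, one workaround worth trying (an assumption here, since behaviour varies between awk builds) is to force the C locale for the single invocation, so awk compares raw bytes instead of locale-dependent characters:
Code:
LC_ALL=C awk -F= 'NR==FNR{O[$1]++;next} !($1 in O)' file1 file2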
# 7  
Old 02-23-2011

I am sorry to hassle you guys like this. I am on Windows Vista and the locale command does not work there. Basically my code page is Windows-1252 (ANSI, Latin 1), which should handle lower as well as upper ASCII characters.
I am still perplexed why the single characters work whereas the longer strings do not.
Sorry to be such a bother, but this is a real mystery.

Gimley


Sorry guys, my goof-up. In my excitement to get the data working, I had forgotten to set the field separator. The corrected script:
Code:
BEGIN {FS="="}
NR==FNR{O[$1]++;next} !($1 in O)
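
For the record, this is how the script would be invoked once saved to a file; compare.awk is a hypothetical name, as are the file names:
Code:
awk -f compare.awk master.txt file2.txt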
It works beautifully and ran through the records like a breeze.
Please excuse my stupidity.
Best regards and many thanks to all who helped me out.
GIMLEY