Help with Perl script for identifying dupes in column1
# 1  
Old 01-23-2015

Dear all,
I have a large dictionary database which has the following structure
Code:
source word=target word
e.g.
book=livre

Since the database is very large, despite all the care taken it happens at times that a source word is repeated:
Code:
e.g.
book=livre
book=tome

Since I want to keep only unique words in the database and remove all dupes, I wrote the following Perl script to solve the problem. The script reads the database and writes to a separate file the singletons followed by the dupes.
Code:
#!/usr/bin/perl

$dupes = $singletons = "";		# This goes at the head of the file

do {
    $dupefound = 0;			# These go at the head of the loop
    $text = $line = $prevline = $name = $prevname = "";
    do {
	$line = <>;
	$line =~ /^(.+)\=.+$/ and $name = $1;
	$prevline =~ /^(.+)\=.+$/ and $prevname = $1;
	if ($name eq $prevname) { $dupefound += 1 }
	$text .= $line;
	$prevline = $line;
    } until ($dupefound > 0 and $text !~ /^(.+?)\=.*?\n(?:\1=.*?\n)+\z/m) or eof;
    if ($text =~ s/(^(.+?)\=.*?\n(?:\2=.*?\n)+)//m) { $dupes .= $1 }
    $singletons .= $text;
} until eof;
print "SINGLETONS\n$singletons\n\DUPES\n$dupes";

While this works well on a small database, the script does not identify all the dupes in one pass and I have to repeat the passes. None of the tweaks I have tried has solved the problem.
Could someone please point out the error in the script, and also explain the correction?
Many thanks in advance for your help, and all good wishes for the New Year, since this is my first post of 2015.
# 2  
Old 01-23-2015
Do you really need to do this in Perl? Have you tried using the sort command? You could sort uniquely on the key (column) field that is supposed to be unique. It is not clear what you mean by singletons and dupes. In your example, you show
Code:
book=livre
book=tome

what is the output supposed to look like in the case of this example?
# 3  
Old 01-24-2015
I am sorry for responding so late, but my router was down and I did not have access to the net.
Basically, my aim is as follows:
1. I have a large database (a dictionary) of around 200,000 entries.
2. As I mentioned in my post, each entry has the structure
Code:
Source language=target language

3. Since an entry (a word or expression) in the source language at times maps to more than one gloss in the target language, as in the French example I provided, the entry on the left-hand side (source language) is repeated.
4. I wrote the script to identify such duplicate entries. The script reads through the database and writes out a file divided under two headers:
Code:
Singletons and Dupes

However, when I run it on such a voluminous database the script does not identify all the dupes.
The Singletons section in fact still contains dupes, showing that the script is not working correctly.
I want to know where I goofed up and how the script can be modified to do the job in a single run.
I hope I have explained the situation clearly, and once more my apologies for the delay in responding.
# 4  
Old 01-25-2015
I won't be able to help with the Perl, but maybe this will help:

Code:
awk -F"=" '{print $1}'  dictionary_file | sort | uniq -c | awk '{print $2 >> "numrepeats_"$1"_list"}'

This will write out one file per repeat count of the left-hand side of the =.

All the singletons will be in the file numrepeats_1_list, all the dupes in numrepeats_2_list, triplets in numrepeats_3_list, and so on.

You can use the paste command on these output files to get a single file with multiple columns.


Alternatively, if you want to find only the duplicate entries, as a list:

Code:
awk -F"=" '{print $1}'  dictionary_file | sort | uniq -d
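For instance, on a small made-up file, uniq -d prints only the keys that occur more than once:

```shell
# Hypothetical sample file for the demo
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

awk -F"=" '{print $1}' dictionary_file | sort | uniq -d    # prints: book
```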

If sorting is an issue, you can find the dupes with:

Code:
awk -F"="  '
{s[$1]++}
END {
  for(i in s) {
    if(s[i]>1) {
      print i
    }
  }
}' dictionary_file
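A quick demo of the array-counting approach on a made-up three-line file (note that awk's `for (i in s)` iterates in arbitrary order, so with several dupes the output order may vary):

```shell
# Hypothetical sample file for the demo
printf 'book=livre\nbook=tome\ncat=chat\n' > dictionary_file

# Count each key in an associative array; print keys seen more than once
awk -F"=" '{s[$1]++} END {for (i in s) if (s[i] > 1) print i}' dictionary_file
# prints: book
```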


Last edited by senhia83; 01-25-2015 at 12:29 AM..
# 5  
Old 01-25-2015
Many thanks. I will try it out and get back to you.

---------- Post updated at 11:30 PM ---------- Previous update was at 11:22 PM ----------

It worked, many thanks. I had to modify the command slightly since I work in a Windows environment.
However, I am still curious why my Perl script failed.
# 6  
Old 01-25-2015
Try also
Code:
awk -F= '$1 in DUP      {next}

         $1 in SNG      {DUP[$1]
                         delete SNG[$1]
                         next}

                        {SNG[$1]}

         END            {print "SINGLES"
                         for (i in SNG) print i
                         print "DUPLICATES"
                         for (i in DUP) print i}
        ' file
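For example (sample data invented for the demo; within each section, awk's `for (i in ...)` order is arbitrary, but with one entry per section the output is fixed):

```shell
# Hypothetical sample file: one duplicated key, one singleton
printf 'book=livre\nbook=tome\ncat=chat\n' > file

# Second sighting of a key moves it from SNG to DUP; later sightings are skipped
awk -F= '$1 in DUP {next}
         $1 in SNG {DUP[$1]; delete SNG[$1]; next}
         {SNG[$1]}
         END {print "SINGLES"; for (i in SNG) print i
              print "DUPLICATES"; for (i in DUP) print i}' file
# prints: SINGLES, cat, DUPLICATES, book (one per line)
```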

# 7  
Old 01-25-2015
In Perl...
Code:
#! /usr/bin/perl
use strict;
use warnings;
open (my $words, '<', $ARGV[0]) or die "Cannot open $ARGV[0]: $!";
my (%seen, @unique, @dupe);
while (<$words>) {
        if (/^(\w+)=\w+$/) {
                if (!$seen{$1}) {      # first time we see this source word
                        $seen{$1}++;
                        push @unique, $_;
                }
                else {
                        push @dupe, $_;
                }
        }
        else {
                print "Not a word definition: $_";
        }
}
print "UNIQUE WORDS\n\n", join ('',@unique),
        "DUPLICATES\n\n", join ('',@dupe);
