

Script for identifying and deleting dupes in a line


# 1  
Script for identifying and deleting dupes in a line

I am compiling a synonym dictionary with the following structure:
Headword=Synonym1,Synonym2 and so on, with each synonym separated by a comma.
As is usual in such cases, manual preparation of the synonyms results in repeated entries, i.e. dupes, as in the example below:
Code:
arrogance=affectation,affected manners,airs,array,boastfulness,boasting,bombast,braggadocio,bravado,brazenness,bumptiousness,conceit,contempt,contemptuousness,contumeliousness,contumely,coxcombry,crowing,dandyism,dash,disdain,disdainfulness,display,egotism,fanfare,fanfaronade,fatuousness,flourish,foppery,foppishness,frills and furbelows,frippery,gall,getting on one's high horse,glitter,gloating,haughtiness,hauteur,high notions,highfalutin' ways,loftiness,nerve,ostentation,overconfidence,pageantry,panache,parade,pomp,pomposity,pompousness,presumption,presumptuousness,pretension,pretentiousness,pride,putting on the dog,putting one's nose in the air,scorn,scornfulness,self-importance,shamelessness,show,showiness,affected manners,airs,array,snobbery,snobbishness,superciliousness,swagger,vainglory,vanity,affected manners

As can be seen,
affected manners
is repeated, as are quite a few other synonyms.
I had written a script which basically does the following:
places each synonym on a line of its own by replacing each comma with a CR/LF
sorts the synonym set and removes duplicates
puts the sorted, unique synonyms back into the structure Headword=syn1,syn2 etc.
Although it works, it is expensive and time-consuming, considering that the number of synonym sets is around 100,000.
A Perl or awk script which does the job faster would be really appreciated. Please note that a given headword can admit up to 100 synonyms, each separated by a comma.
Many thanks for a faster solution.
# 2  
This should do it:
Code:
#!/usr/bin/perl

use strict;
use warnings;

while ( my $line = <> ) {
    chomp $line;
    # split "headword=syn1,syn2,..." into the headword and the synonym string
    my ( $key, $value ) = ( $line =~ /^(.*?)=(.*)$/ );
    # hash keys are unique, so duplicate synonyms collapse automatically
    my %hash = map { $_ => 1 } split( /,/, $value );
    print $key, "=", join( ',', sort keys %hash ), "\n";
}

Run as /path/to/script synonym.in > synonym.out
# 3  
Many thanks. It worked like a charm, handling over 100,000 synsets in just 12 seconds on my machine running Vista.
# 4  
Hello,
I wonder if it would be possible to add to Gimley's program. I had written a Perl script to identify duplicates in a large file with a structure similar to Gimley's.
Quote:
word=word1,word2,word3
where word is the headword and word1, word2, word3 are all equivalents of the word.
It so happens that sometimes two entries for the same headword are present:
Quote:
word=word1,word2,word3
word=word1,word4,word5
I have written a program in Perl which identifies such dupes and writes them out to a file where singletons and dupes are clearly separated.
However, I have not been able to add the functionality of merging the duplicates into one single entry.
Thus the dupes mentioned above should merge into one single entry:
Quote:
word=word1,word2,word3,word4,word5
Any help given would be greatly appreciated.

Code:
#!/usr/bin/perl

use strict;
use warnings;

my $dupes = my $singletons = "";        # accumulators for the two output groups

do {
    my $dupefound = 0;                  # reset at the head of each outer pass
    my ( $text, $line, $prevline, $name, $prevname ) = ("") x 5;
    do {
        $line = <>;
        $line = "" unless defined $line;            # guard against reading past EOF
        $line     =~ /^(.+)=.+$/ and $name     = $1;
        $prevline =~ /^(.+)=.+$/ and $prevname = $1;
        $dupefound++ if $name eq $prevname;         # same headword as the previous line
        $text .= $line;
        $prevline = $line;
    } until ( $dupefound > 0 and $text !~ /^(.+?)=.*?\n(?:\1=.*?\n)+\z/m ) or eof;
    # peel off the run of duplicate-headword lines, if any, into $dupes
    if ( $text =~ s/(^(.+?)=.*?\n(?:\2=.*?\n)+)//m ) { $dupes .= $1 }
    $singletons .= $text;
} until eof;

print "SINGLETONS\n$singletons\nDUPES\n$dupes";
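The merging itself can be sketched along the same lines as the hash trick in post #2: collect every synonym for a given headword as a key in a hash of hashes, so that duplicate headwords and duplicate synonyms both collapse automatically. This is only a minimal sketch, not the poster's code; the `merge_entries` name and the sorted output order are my own choices, and it assumes the input really is one headword=syn1,syn2 entry per line.

```perl
#!/usr/bin/perl

use strict;
use warnings;

# merge_entries: take lines of the form "headword=syn1,syn2,...",
# merge entries that share a headword, drop duplicate synonyms,
# and return one "headword=..." line per headword, sorted.
sub merge_entries {
    my %merged;    # headword => { synonym => 1 }
    for my $line (@_) {
        chomp $line;
        my ( $key, $value ) = ( $line =~ /^(.*?)=(.*)$/ ) or next;
        $merged{$key}{$_} = 1 for split /,/, $value;
    }
    return map { $_ . "=" . join( ',', sort keys %{ $merged{$_} } ) }
           sort keys %merged;
}

# read the files named on the command line and print the merged entries
print "$_\n" for merge_entries(<>) if @ARGV;
```

Run as perl merge.pl dictionary.txt > merged.txt; the two "word=" entries from the example above would come out as the single line word=word1,word2,word3,word4,word5.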

