Full Discussion: finding duplicates with perl
Post 33953 by criglerj on Monday 27th of January 2003, 01:03:41 PM, in Shell Programming and Scripting
Without more specifics about your problem, I think a hash might be more appropriate than an array. Then you can keep a count of how many times each filename is called out, or a list of the callouts, or whatever you need. If you need to preserve the order of the filenames, store the record number where each filename was first found, say, then sort on that record number. Either way, a hash is the fundamental Perl idiom for detecting duplicates. It'll work in Ruby, too, BTW.
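A minimal sketch of that idiom, assuming one filename per input line (the thread doesn't give the actual record format, so the parsing and the script name below are illustrative, not the original poster's code):

Code:
#!/usr/bin/perl
use strict;
use warnings;

my %count;       # how many times each filename has been seen
my %first_seen;  # record number where each filename first appeared

my $recno = 0;
while (my $line = <>) {       # filenames from STDIN or files named on the command line
    chomp $line;
    next unless length $line; # skip blank lines
    $recno++;
    $count{$line}++;
    $first_seen{$line} = $recno unless exists $first_seen{$line};
}

# Report duplicates in the order they were first encountered.
for my $name (sort { $first_seen{$a} <=> $first_seen{$b} } keys %count) {
    next unless $count{$name} > 1;
    printf "%s: %d occurrences, first at record %d\n",
           $name, $count{$name}, $first_seen{$name};
}

Run as, say, perl finddups.pl filenames.txt; it prints each duplicated filename once, in the order it first appeared. Dropping the $count{$name} > 1 test turns it into a report of every filename with its count.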
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

finding duplicates in columns and removing lines

I am trying to figure out how to scan a file like so: 1 ralphs office","555-555-5555","ralph@mail.com","www.ralph.com 2 margies office","555-555-5555","ralph@mail.com","www.ralph.com 3 kims office","555-555-5555","kims@mail.com","www.ralph.com 4 tims... (17 Replies)
Discussion started by: totus
17 Replies

2. Shell Programming and Scripting

Finding duplicates from positioned substring across lines

I have millions of records, each containing exactly 50 characters, and have to check the uniqueness of a 4-character substring of the 50 characters (position known prior) and report if any duplicates are found. E.g. data... AAAA00000000000000XXXX0000 0000000000... up to 50 chars... (2 Replies)
Discussion started by: gapprasath
2 Replies

3. Shell Programming and Scripting

Help finding non duplicates

I am currently creating a script to find filenames that are listed once in an input file (find non duplicates). I then want to report those single files in another file. Here is the function that I have so far: function dups_filenames { file2="" file1="" file="" dn="" ch="" pn="" ... (6 Replies)
Discussion started by: chipblah84
6 Replies

4. Shell Programming and Scripting

finding duplicates in csv based on key columns

Hi team, I have a 20-column csv file. I want to find the duplicates in that file based on column1, column10, column4, column6, column8, and column2. If those columns have the same values, then it should be a duplicate record. Can anyone help me with finding the duplicates? Thanks in advance. ... (2 Replies)
Discussion started by: baskivs
2 Replies

5. UNIX for Dummies Questions & Answers

Finding duplicates then copying, almost there, maybe?

Hi everyone. I'm trying to help my wife with a project, she has exported 200 images from many different folders, unfortunately there was a problem with the export and I need to find the master versions so that she doesn't have to go through and select them again. I need to: For each image in... (2 Replies)
Discussion started by: Rhinoskin
2 Replies

6. Shell Programming and Scripting

Perl, sorting and eliminating duplicates

Hi guys! I'm trying to eliminate some duplicates from a file but I'm like this :wall: !!! My file looks like this: ID_1 0.02 ID_2 2.4e-2 ID_2 4.3.e-9 ID_3 0.003 ID_4 0.2 ID_4 0.05 ID_5 1.2e-3 What I need is to eliminate all the duplicates considering the first column (in this... (6 Replies)
Discussion started by: gabrysfe
6 Replies

7. Shell Programming and Scripting

Finding duplicates in a file excluding specific pattern

I have a unix file like below >newuser newuser <hello hello newone I want to find the unique values in the file (excluding <, >), so that the output should be >newuser <hello newone. Can anybody tell me what command produces this new file? (7 Replies)
Discussion started by: shiva2985
7 Replies

8. Shell Programming and Scripting

PERL "filtering the log file removing the duplicates

Hi folks, I have a log file in the below format and am trying to get the output of the unique ones based on mnemonic IN PERL. Could anyone please let me know the code and the logic? Severity Mnemonic Log Message 7 CLI_SCHEDULER Logfile for scheduled CLI... (3 Replies)
Discussion started by: scriptscript
3 Replies

9. Shell Programming and Scripting

UNIX scripting for finding duplicates and null records in pk columns

Hi, I have a requirement. For eg: I have a text file with the pipe symbol as delimiter (|) with 4 columns a, b, c, d. Here a and b are primary key columns. I want to process that file to find the duplicates and null values in the primary key columns (a, b). I want to write the unique records in which... (5 Replies)
Discussion started by: praveenraj.1991
5 Replies

10. Shell Programming and Scripting

Help in modifying a PERL script to sort Singletons and Duplicates

I have a large database which has the following structure a=b where a is one language and b is the other and = is the delimiter Since the data treats of language, homographs occur i.e. the same word on the left hand side can map in two different entries to two different glosses on the right... (3 Replies)
Discussion started by: gimley
3 Replies
DPKG-RUBY(1)                                          General Commands Manual                                          DPKG-RUBY(1)

NAME
       dpkg-ruby - Utility to read a dpkg style db file, dpkg-awk clone

SYNOPSIS
       dpkg-ruby [(-f|--file) filename] [(-d|--debug) ##] [(-s|--sort) list] [(-n|--numeric) list] [(-rs|--rec_sep) ??] '<fieldname>:<regex>' ... -- <out_fieldname> ..

DESCRIPTION
       dpkg-ruby parses a dpkg status file (or other similarly formatted file) and outputs the resulting records. It can use regexes on the field values to limit the returned records, and it can also be told which fields to output. As another option, it can sort the matched fields.

OPTIONS
       -f filename, --file filename
              The file to parse. The default is /var/lib/dpkg/status.

       -d [#], --debug [#]
              Each time this is specified, it increases the debug level.

       -s field(s), --sort field(s)
              A space- or comma-separated list of fields to sort on.

       -n field(s), --numeric field(s)
              A space- or comma-separated list of fields that should be interpreted as numeric in value.

       -rs ??, --rec_sep ??
              Output this string at the end of each output paragraph.

       -h, --help
              Display some help.

       fieldname
              The fields from the file that are matched against the given regex. The field names are case insensitive.

       out_fieldname
              The fields from the file that are output for each record. If the first field listed begins with ^, then the list that follows are fields NOT to be output.

BUGS
       Be warned that the author has only a shallow understanding of the dpkg packaging system, so there are probably tons of bugs in this program. This program comes with no warranties. If running this program causes fire and brimstone to rain down upon the earth, you will be on your own. This program accesses the dpkg database directly in places, querying for data that cannot be gotten via dpkg.

AUTHOR
       Fumitoshi UKAI <ukai@debian.or.jp>. This manual page is based on (or almost copied from :) the dpkg-awk(1) manual written by Adam Heath <doogie@debian.org>.

DEBIAN                                                     Debian Utilities                                            DPKG-RUBY(1)