Shell Programming and Scripting: Help with merge and remove duplicates
Post 302896872 by roy121 on Wednesday 9th of April 2014 02:54:16 PM
The script is given in the 1st post.
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Remove duplicates

Hello Experts, I have two files named old and new. Below are my example files. I need to compare and print the records that exist only in my new file. I tried the awk script below; it works perfectly well if the records are an exact match, but the issue I have is my old file has got extra... (4 Replies)
Discussion started by: forumthreads
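A minimal sketch of the usual two-pass awk approach (the file names old and new come from the post; note it compares whole records exactly, which is the very limitation the poster ran into):

    # Load every line of "old" into the seen[] hash on the first pass,
    # then print only the lines of "new" that were never seen.
    awk 'NR==FNR { seen[$0]; next } !($0 in seen)' old new
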

2. Shell Programming and Scripting

Merge Two Tables with duplicates in first table

Hi..
File 1:
1 aa rep
1 dd rep
1 kk rep
2 bb sad
2 ss sad
3 ee dam
File 2:
1 apple fruit
2 mango tree
3 lilly flower
output:
1 apple fruit aa,dd,kk rep (7 Replies)
Discussion started by: empyrean
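One way to produce that output is to collect the second column of File 1 per key before reading File 2. A sketch assuming whitespace-separated input and the hypothetical file names file1 and file2:

    # First pass (file1): append each 2nd field to a comma list per key
    # and remember the tag from the 3rd field.
    # Second pass (file2): print the line plus the collected list and tag.
    awk 'NR==FNR { val[$1] = ($1 in val) ? val[$1] "," $2 : $2; tag[$1] = $3; next }
         { print $0, val[$1], tag[$1] }' file1 file2
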

3. Shell Programming and Scripting

bash - remove duplicates

I need to use a bash script to remove duplicate files from a download list, but I cannot use uniq because the urls are different. I need to go from this:
http://***/fae78fe/file1.wmv
http://***/39du7si/file1.wmv
http://***/d8el2hd/file2.wmv
http://***/h893js3/file2.wmv
to this: ... (2 Replies)
Discussion started by: locoroco
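Since the URLs differ only in the directory part, keying on the last /-separated field (the file name) works. A sketch, with downloads.txt as a hypothetical input name:

    # Split on "/" and keep only the first URL seen for each file name.
    awk -F/ '!seen[$NF]++' downloads.txt
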

4. Shell Programming and Scripting

Find duplicates in column 1 and merge their lines (awk?)

Hi, I have a file (sorted by sort) with 8 tab delimited columns. The first column contains duplicated fields and I need to merge all these identical lines. My input file:
comp100002 aaa bbb ccc ddd eee fff ggg
comp100003 aba aba aba aba aba aba aba
comp100003 fff fff fff fff fff fff fff... (5 Replies)
Discussion started by: falcox
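The snippet cuts off before the expected output, so this is one plausible reading only: join all rows that share column 1 into a single tab-separated line. A sketch assuming the file is already sorted on column 1, as the poster says (infile is a placeholder):

    # While the key in column 1 repeats, strip the key and append the rest
    # of the record to the current output line; start a new line otherwise.
    awk -F'\t' '$1 == prev { sub(/^[^\t]*/, ""); printf "%s", $0; next }
                { if (NR > 1) print ""; printf "%s", $0; prev = $1 }
                END { print "" }' infile
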

5. Shell Programming and Scripting

Merge files without duplicates

Hi all, In a directory of many files, I need to merge only files which do not have identical lines, and the resultant merge file should not be more than 50000 lines. Basically I need to cover all text files in that directory and turn them into Merge files.txt with 50000 lines each ... (2 Replies)
Discussion started by: pravfraz
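A sketch of one way to do it with standard tools (merged_ is a hypothetical output prefix; note that sort -u also reorders the lines, so pipe through awk '!seen[$0]++' instead if the original order matters):

    # Merge all .txt files, drop duplicate lines, and split the result
    # into pieces of at most 50000 lines each (merged_aa, merged_ab, ...).
    sort -u ./*.txt | split -l 50000 - merged_
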

6. UNIX for Dummies Questions & Answers

Remove duplicates from a file

Can you tell me how to remove duplicate records from a file? (11 Replies)
Discussion started by: saga20
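The two classic answers, sketched (infile and outfile are placeholders):

    # Keep the first occurrence of each line, preserving the original order:
    awk '!seen[$0]++' infile > outfile

    # Or, if sorted output is acceptable:
    sort -u infile > outfile
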

7. Shell Programming and Scripting

Remove duplicates

I have a file with the following format, fields separated by "|":
title1|something class|long...content1|keys
title2|somhing class|log...content1|kes
title1|sothing class|lon...content1|kes
title3|shing cls|log...content1|ks
I want to remove all duplicates with the same "title" field (the... (3 Replies)
Discussion started by: dtdt
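A sketch keyed on the first |-separated field, keeping the first record for each title (infile is a placeholder):

    # -F'|' splits on the pipes; seen[$1]++ is 0 (false) only the first
    # time a title appears, so only that record is printed.
    awk -F'|' '!seen[$1]++' infile
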

8. Shell Programming and Scripting

Remove top 3 duplicates

hello, I have a requirement with input in the below format
abc 123 xyz
bcd 365 kii
abc 987 876
cdf 987 uii
abc 456 yuu
bcd 654 rrr
Expecting Output
abc 456 yuu
bcd 654 rrr
cdf 987 uii (1 Reply)
Discussion started by: Tomlight
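The expected output keeps the last occurrence of each key in column 1, sorted. Reading the file bottom-up turns "keep the last" into "keep the first". A sketch (tac is the GNU reverse-cat; on BSD systems tail -r does the same):

    # Reverse the file, keep the first (i.e. originally last) record per
    # key, then sort the survivors back into ascending order.
    tac infile | awk '!seen[$1]++' | sort
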

9. Shell Programming and Scripting

Sort and Remove duplicates

Here is my task: I need to sort two input files and remove duplicates in the output files:
Sort by 13 characters from 97, ascending
Sort by 1 character from 96, ascending
If duplicates are found, retain the first value in the file. The input files are variable length; convert... (4 Replies)
Discussion started by: ysvsr1
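Assuming single-field fixed-position records after the length conversion, GNU sort can do both steps at once; this is a sketch only, since the post leaves the record layout incomplete:

    # -k1.97,1.109 sorts on the 13 characters starting at position 97,
    # -k1.96,1.96 on the single character at position 96; -s keeps the
    # input order within equal keys so -u retains the first value in the file.
    sort -s -u -k1.97,1.109 -k1.96,1.96 infile > outfile
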

10. Shell Programming and Scripting

Remove duplicates

Hi, I have the below file structure.
200,1245,E1,1,E1,,7611068,KWH,30, ,,,,,,,,
200,1245,E1,1,E1,,7611070,KWH,30, ,,,,,,,,
300,20140223,0.001,0.001,0.001,0.001,0.001
300,20140224,0.001,0.001,0.001,0.001,0.001
300,20140225,0.001,0.001,0.001,0.001,0.001
300,20140226,0.001,0.001,0.001,0.001,0.001... (1 Reply)
Discussion started by: tejashavele
Array::Unique(3pm)					User Contributed Perl Documentation					Array::Unique(3pm)

NAME
       Array::Unique - Tie-able array that allows only unique values

SYNOPSIS
       use Array::Unique;
       tie @a, 'Array::Unique';

       Now use @a as a regular array.

DESCRIPTION
       This package lets you create an array which will allow only one
       occurrence of any value. In other words, no matter how many times you
       put in 42, it will keep only the first occurrence and the rest will be
       dropped.

       You use the module via tie, and once you have tied your array to this
       module it will behave correctly.

       Uniqueness is checked with the 'eq' operator, so among other things it
       is case sensitive. As a side effect the module does not allow undef as
       a value in the array.

EXAMPLES
       use Array::Unique;
       tie @a, 'Array::Unique';

       @a = qw(a b c a d e f);
       push @a, qw(x b z);
       print "@a\n";     # a b c d e f x z

DISCUSSION
       When you are collecting a list of items and you want to make sure
       there is only one occurrence of each item, you have several options:

       1) Using an array and extracting the unique elements later

       You might use a regular array to hold this unique set of values and
       either remove duplicates on each update, thereby keeping the array
       always unique, or remove duplicates just before you want to use the
       uniqueness feature of the array. In either case you might run a
       function you call

           @a = unique_value(@a);

       The problem with this approach is that you have to implement the
       unique_value function (see later) AND you have to make sure you don't
       forget to call it. I would say don't rely on remembering this.

       There is a good discussion about it in the 1st edition of O'Reilly's
       Perl Cookbook. I have copied the solutions here; you can see further
       discussion in the book.

       Extracting Unique Elements from a List (Section 4.6 in the Perl
       Cookbook, 1st ed.):

           # Straightforward
           %seen = ();
           @uniq = ();
           foreach $item (@list) {
               unless ($seen{$item}) {
                   # if we get here we have not seen it before
                   $seen{$item} = 1;
                   push(@uniq, $item);
               }
           }

           # Faster
           %seen = ();
           foreach $item (@list) {
               push(@uniq, $item) unless $seen{$item}++;
           }

           # Faster but different
           %seen;
           foreach $item (@list) {
               $seen{$item}++;
           }
           @uniq = keys %seen;

           # Faster and even more different
           %seen;
           @uniq = grep { ! $seen{$_}++ } @list;

       2) Using a hash

       Some people use the keys of a hash to keep the items and put an
       arbitrary value as the values of the hash:

       To build such a list:

           %unique = map { $_ => 1 } qw( one two one two three four! );

       To print it:

           print join ", ", sort keys %unique;

       To add values to it:

           $unique{$_} = 1 foreach qw( one after the nine oh nine );

       To remove values:

           delete @unique{ qw(oh nine) };

       To check if a value is there:

           $unique{ $value }; # which is why I like to use "1" as my value

       (thanks to Gaal Yahas for the above examples)

       There are three drawbacks I see:

       1) You type more.
       2) Your reader might not understand at first why you used a hash and
          what the values will be.
       3) You lose the order.

       Usually none of them is critical, but when I saw this for the 10th
       time in code I had to understand with zero documentation, I got
       frustrated.

       3) Using Array::Unique

       So I decided to write this module because I got frustrated by my lack
       of understanding of what's going on in that code I mentioned.

       In addition I thought it might be interesting to write this and then
       benchmark it. Additionally, it is nice to have your name displayed in
       bright lights all over CPAN ... or at least in a module.

       Array::Unique lets you tie an array to, hmmm, itself (?) and makes
       sure the values of the array are always unique.

       Since writing this I am not sure if I really recommend its usage. I
       would say stick with the hash version and document that the variable
       is aggregating a unique list of values.

       4) Using a real SET

       There are modules on CPAN that let you create and maintain SETs. I
       have not checked any of those, but I guess they are just as much of an
       overkill for this functionality as Array::Unique.

BUGS
       use Array::Unique;
       tie @a, 'Array::Unique';
       @c = @a = qw(a b c a d e f b);

       @c will contain the same as @a AND two undefs at the end, because in
       @c you get the same length as the right-most list.

TODO
       Test:
           Change size of the array
           Elements with false values ('', '0', 0)
           splice:
               splice @a;
               splice @a, 3;
               splice @a, -3;
               splice @a, 3, 5;
               splice @a, 3, -5;
               splice @a, -3, 5;
               splice @a, -3, -5;
               splice @a, ?, ?, @b;

       Benchmark speed.

       Add faster functions that don't check uniqueness, so if I know part
       of the data comes from a unique source I can speed up the process.
       In short: shoot myself in the leg.

       Enable optional compare with other functions.

       Write even better implementations.

AUTHOR
       Gabor Szabo <gabor@pti.co.il>

LICENSE
       Copyright (C) 2002-2008 Gabor Szabo <gabor@pti.co.il>
       All rights reserved.
       http://www.pti.co.il/

       You may distribute under the terms of either the GNU General Public
       License or the Artistic License, as specified in the Perl README file.

       No WARRANTY whatsoever.

CREDITS
       Thanks for suggestions and bug reports to
           Szabo Balazs (dLux)
           Shlomo Yona
           Gaal Yahas
           Jeff 'japhy' Pinyan
           Werner Weichselberger

VERSION
       Version: 0.08
       Date:    2008 June 04

perl v5.10.0                      2009-03-06                  Array::Unique(3pm)