Top Forums > Shell Programming and Scripting > Find Duplicate records in first Column in File
Post 302409275 by durden_tyler, Wednesday 31 March 2010, 02:20 PM
Code:
$
$ cat -n f4
     1  ANU4501710430989 0000000W20389390
     2  ANU4501130050520 0000000W80838713
     3  ANU4501210170685 0000000W69246611
     4  ANU4501710430989 0000000W67065483
     5  ANU4501710676143 0000000W54010979
     6  ANU4501711037279 0000000W67065293
$
$ sort f4 | perl -lane 'print $F[0] if $F[0] eq $p; $p=$F[0]'
ANU4501710430989
$
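
For comparison, here's a minimal sketch of the same idea without the pre-sort, counting first-column values in a hash (this assumes whitespace-separated fields, as in the sample f4 above). It prints each duplicated key once, the second time it is seen, so on the sample it should likewise print ANU4501710430989:

Code:
# print a first-column value the moment it is seen for the second time,
# i.e. exactly once per key that occurs more than once
perl -lane 'print $F[0] if ++$seen{$F[0]} == 2' f4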

tyler_durden
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

How to find Duplicate Records in a text file

Hi all, please help me by providing a solution for my problem. I have a text file which contains duplicate records. Example: abc 1000 3452 2463 2343 2176 7654 3452 8765 5643 3452 abc 1000 3452 2463 2343 2176 7654 3452 8765 5643 3452 tas 3420 3562 ... (1 Reply)
Discussion started by: G.Aavudai
1 Reply

2. Shell Programming and Scripting

find duplicate records... again

Hi all: Let's suppose I have a file like this (but with many more records). XX ME 342 8688 2006 7 6 3c 60.029 -38.568 2901 0001 74 4 7603 8 969.8 958.4 3.6320 34.8630 985.5 973.9 3.6130 34.8600 998.7 986.9 3.6070 34.8610 1003.6 991.7 ... (4 Replies)
Discussion started by: rleal
4 Replies

3. Shell Programming and Scripting

find out duplicate records in file?

Dear All, I have one file which looks like: account1:passwd1 account2:passwd2 account3:passwd3 account1:passwd4 account5:passwd5 account6:passwd6 You can see there are two records for account1. Is there any shell command which can find out that account1 is the duplicate record in... (3 Replies)
Discussion started by: tiger2000
3 Replies

4. UNIX for Dummies Questions & Answers

CSV file: Find duplicates, save original and duplicate records in a new file

Hi Unix gurus, Maybe it is too much to ask for, but please take a moment and help me out. A very humble request to you gurus. I'm new to Unix and I have started learning it. I have this project which is way too advanced for me. File format: CSV file. File has four columns with no header... (8 Replies)
Discussion started by: arvindosu
8 Replies

5. Shell Programming and Scripting

Removing duplicate records in a file based on single column

Hi, I want to remove duplicate records including the first line based on column1. For example inputfile(filer.txt): ------------- 1,3000,5000 1,4000,6000 2,4000,600 2,5000,700 3,60000,4000 4,7000,7777 5,999,8888 expected output: ---------------- 3,60000,4000 4,7000,7777... (5 Replies)
Discussion started by: G.K.K
5 Replies

6. Shell Programming and Scripting

Removing duplicate records in a file based on single column explanation

I was reading this thread. It looks like a simpler way to say this is to only keep uniq lines based on field or column 1. https://www.unix.com/shell-programming-scripting/165717-removing-duplicate-records-file-based-single-column.html Can someone explain this command please? How are there no... (5 Replies)
Discussion started by: cokedude
5 Replies

7. Shell Programming and Scripting

Deleting duplicate records from file 1 if records from file 2 match

I have 2 files: "File 1" is delimited by ";" and "File 2" is delimited by "|". File 1 below (3 records shown): Doc1;03/01/2012;New York;6 Main Street;Mr. Smith 1;Mr. Jones Doc2;03/01/2012;Syracuse;876 Broadway;John Davis;Barbara Lull Doc3;03/01/2012;Buffalo;779 Old Windy Road;Charles... (2 Replies)
Discussion started by: vestport
2 Replies

8. Shell Programming and Scripting

Find duplicate values in specific column and delete all the duplicate values

Dear folks, I have a map file of around 54K lines, and some of the values in the second column have the same value; I want to find them and delete all of the same values. I looked over duplicate commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
4 Replies

9. Shell Programming and Scripting

Filter duplicate records from csv file with condition on one column

I have a csv file with 30, 40 columns. Pasting just three columns for problem description. I want to filter a record if column 1 matches CN or DN; then check the values in column 2: if the column contains 1235, 1235 then the column 3 values must be a sequence of 2345, 2345, and if column 2 contains 6789, 6789... (5 Replies)
Discussion started by: as7951
5 Replies

10. Shell Programming and Scripting

CSV File: Filter duplicate records from column1 & another column having unique record

Hi Experts, I have a csv file with 30, 40 columns. Pasting just 2 columns for problem description. Need to print an error if the below combination is not present in the file: check column-1 (DocumentNumber) and filter rows where the value in the DocumentNumber field is the same. For all such rows, the field... (7 Replies)
Discussion started by: as7951
7 Replies
sort(3pm)                     Perl Programmers Reference Guide                     sort(3pm)

NAME
    sort - perl pragma to control sort() behaviour

SYNOPSIS
    use sort 'stable';      # guarantee stability
    use sort '_quicksort';  # use a quicksort algorithm
    use sort '_mergesort';  # use a mergesort algorithm
    use sort 'defaults';    # revert to default behavior
    no sort 'stable';       # stability not important

    use sort '_qsort';      # alias for quicksort

    my $current;
    BEGIN {
        $current = sort::current(); # identify prevailing algorithm
    }
DESCRIPTION
    With the "sort" pragma you can control the behaviour of the builtin "sort()" function.

    In Perl versions 5.6 and earlier the quicksort algorithm was used to implement "sort()", but in Perl 5.8 a mergesort algorithm was also made available, mainly to guarantee worst case O(N log N) behaviour: the worst case of quicksort is O(N**2). In Perl 5.8 and later, quicksort defends against quadratic behaviour by shuffling large arrays before sorting.

    A stable sort means that for records that compare equal, the original input ordering is preserved. Mergesort is stable, quicksort is not. Stability will matter only if elements that compare equal can be distinguished in some other way. That means that simple numerical and lexical sorts do not profit from stability, since equal elements are indistinguishable. However, with a comparison such as

        { substr($a, 0, 3) cmp substr($b, 0, 3) }

    stability might matter because elements that compare equal on the first 3 characters may be distinguished based on subsequent characters. In Perl 5.8 and later, quicksort can be stabilized, but doing so will add overhead, so it should only be done if it matters.

    The best algorithm depends on many things. On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using "sort()" to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times. You can force the choice of algorithm with this pragma, but this feels heavy-handed, so the subpragmas beginning with a "_" may not persist beyond Perl 5.8.

    The default algorithm is mergesort, which will be stable even if you do not explicitly demand it. But the stability of the default sort is a side-effect that could change in later versions. If stability is important, be sure to say so with a

        use sort 'stable';

    The "no sort" pragma doesn't forbid what follows, it just leaves the choice open. Thus, after

        no sort qw(_mergesort stable);

    a mergesort, which happens to be stable, will be employed anyway. Note that

        no sort "_quicksort";
        no sort "_mergesort";

    have exactly the same effect, leaving the choice of sort algorithm open.
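    A minimal sketch of where the stability described above matters, using made-up three-element data in which "abc-second" and "abc-first" compare equal on their first 3 characters:

        use sort 'stable';
        my @recs   = qw(abc-second abd-other abc-first);
        my @sorted = sort { substr($a, 0, 3) cmp substr($b, 0, 3) } @recs;
        print "@sorted\n";   # abc-second abc-first abd-other
        # the two "abc-..." elements keep their original input order only
        # because the sort is stable; an unstable sort could swap them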
CAVEATS
    As of Perl 5.10, this pragma is lexically scoped and takes effect at compile time. In earlier versions its effect was global and took effect at run-time; the documentation suggested using "eval()" to change the behaviour:

        { eval 'use sort qw(defaults _quicksort)'; # force quicksort
          eval 'no sort "stable"';      # stability not wanted
          print sort::current . "\n";
          @a = sort @b;
          eval 'use sort "defaults"';   # clean up, for others
        }

        { eval 'use sort qw(defaults stable)';     # force stability
          print sort::current . "\n";
          @c = sort @d;
          eval 'use sort "defaults"';   # clean up, for others
        }

    Such code no longer has the desired effect, for two reasons. Firstly, the use of "eval()" means that the sorting algorithm is not changed until runtime, by which time it's too late to have any effect. Secondly, "sort::current" is also called at run-time, when in fact the compile-time value of "sort::current" is the one that matters.

    So now this code would be written:

        { use sort qw(defaults _quicksort); # force quicksort
          no sort "stable";                 # stability not wanted
          my $current;
          BEGIN { $current = print sort::current; }
          print "$current\n";
          @a = sort @b;
          # Pragmas go out of scope at the end of the block
        }

        { use sort qw(defaults stable);     # force stability
          my $current;
          BEGIN { $current = print sort::current; }
          print "$current\n";
          @c = sort @d;
        }

perl v5.16.2                              2012-08-26                              sort(3pm)