Remove duplicates based on the two key columns

Hi All,
I need to fetch unique records based on a key column (the first column), keeping for each key the record with the maximum value in column 2, written out in sorted order. The duplicates (the records that are dropped) have to be stored in another output file; see the sketch at the end of this post.

Input:

Input.txt
1234,0,x
1234,1,y
5678,10,z
9999,10,k
5678,9,l

Desired Output:

Duplicates.txt
1234,0,x
5678,9,l

Uniqrecords.txt
1234,1,y
5678,10,z
9999,10,k

Regards,
MuniSekhar

Thanks in Advance....
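
One possible approach (a sketch only, not from the original thread; it assumes a POSIX sort and awk, and that column 2 is always numeric): sort on the key column and then on column 2 in descending numeric order, so the first record awk sees for each key is the one with the maximum column 2. That record goes to Uniqrecords.txt and every later record for the same key goes to Duplicates.txt.

Code:

sort -t, -k1,1 -k2,2nr Input.txt | awk -F, '
    !seen[$1]++ { print > "Uniqrecords.txt"; next }   # first record per key = max column 2
                { print > "Duplicates.txt" }          # remaining records for that key
'

With the sample Input.txt above, Uniqrecords.txt gets 1234,1,y, 5678,10,z and 9999,10,k (already in key order, because the stream is sorted on column 1), and Duplicates.txt gets 1234,0,x and 5678,9,l.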
 
