Find duplicate based on 'n' fields and mark the duplicate as 'D'
Post 302593716 by rdcwayx in Shell Programming and Scripting, Saturday 28th of January 2012, 06:34:36 AM
Code:
awk '{s=$1 FS $2 FS $3}                        # key = first three fields
     NR==FNR{a[s]++;b[s]=FNR;next}             # 1st pass: count each key, remember its last line number
     FNR==1{print;next}                        # 2nd pass: print the header line untouched
     {if (a[s]<2)
           {print}                             # key occurs once: print unchanged
      else                                     # key is duplicated: last occurrence gets |C, earlier ones |D
           {print (b[s]==FNR)?$0 "|C":$0 "|D"}}' FS=\| OFS=\| infile infile
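As a usage illustration, here is a hypothetical run (the sample data below is invented for this post; the thread's real input is not quoted here). With the first three pipe-delimited fields as the key, the header and unique keys pass through unchanged, the last occurrence of a duplicated key is tagged |C, and every earlier occurrence is tagged |D:

Code:
$ cat infile
id|name|dept|amount
1|john|IT|100
2|mary|HR|200
1|john|IT|150
3|paul|IT|300
1|john|IT|175

$ awk '...script above...' FS=\| OFS=\| infile infile
id|name|dept|amount
1|john|IT|100|D
2|mary|HR|200
1|john|IT|150|D
3|paul|IT|300
1|john|IT|175|C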

These 2 users gave thanks to rdcwayx for this post.

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Extract duplicate fields in rows

I have an input file with this formatting: 6000000901 ;36200103 ;h3a01f496 ; 2000123605 ;36218982 ;heefa1328 ; 2000273132 ;36246985 ;h08c5cb71 ; 2000041207 ;36246985 ;heef75497 ; Each field is separated by a semicolon. Sometimes, the second file is... (6 Replies)
Discussion started by: anhtt
6 Replies
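The question above is cut off, but the sample suggests pulling out the rows whose second field (for example 36246985) occurs more than once. A minimal two-pass awk sketch under that assumption, with a hypothetical file name and the optional blanks around the ';' separators handled in FS:

Code:
# pass 1: count each second field; pass 2: print rows whose second field appears more than once
awk -F' *; *' 'NR==FNR{cnt[$2]++; next} cnt[$2]>1' infile infile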

2. Shell Programming and Scripting

compare fields in a file with duplicate records

Hi: I've been searching the net but didn't find a clue. I have a file in which, for some records, some fields coincide. I want to compare one (or more) of the dissimilar fields and retain the one record that fulfills a certain condition. For example, in this file: 99 TR 1991 5 06 ... (1 Reply)
Discussion started by: rleal
1 Replies

3. Shell Programming and Scripting

awk 2 fields duplicate and 1 different

I have a file from which I need to remove duplicates. The problem is, I need to keep only the one which has a unique 3rd field. Here is a sample file: xxx.xxx:x:CISCO1.CLEVE61W:ERIE.NET:x:x:x:x: xxx.xxx:x:CISCO2.CLEVE62W:OHIO.NET:x:x:x:x: xxx.xxx:x:CISCO2.CLEVE62W:NORTH.NET:x:x:x:x:... (1 Reply)
Discussion started by: numele
1 Replies
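The requirement above is truncated; one common reading is to keep only the first line seen for each value of the colon-delimited 3rd field. A minimal sketch under that assumption (file name hypothetical):

Code:
# keep the first line for each 3rd field, drop later repeats of that field
awk -F: '!seen[$3]++' routers.txt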

4. Shell Programming and Scripting

Filter or remove duplicate block of text without distinguishing marks or fields

Hello, Although I have found similar questions, I could not find advice that could help with our problem. The issue: we have several hundred text files containing repeated blocks of text (I guess back at the time they were prepared like that to optimize printing). The blocks of text... (13 Replies)
Discussion started by: samask
13 Replies

5. Shell Programming and Scripting

Remove duplicate based on Group

Hi, How can I remove duplicates from a file based on a group in another column? For example: Test1|Test2|Test3|Test4|Test5 Test1|Test6|Test7|Test8|Test5 Test1|Test9|Test10|Test11|Test12 Test1|Test13|Test14|Test15|Test16 Test17|Test18|Test19|Test20|Test21 Test17|Test22|Test23|Test24|Test5 ... (2 Replies)
Discussion started by: yale_work
2 Replies
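The desired output above is cut off, so the exact rule is unclear; one hedged reading is to keep only the first row for each value of the first pipe-delimited field (the group). A sketch under that assumption (file name hypothetical):

Code:
# keep the first row of each group (field 1), drop the remaining rows of that group
awk -F'|' '!seen[$1]++' groups.txt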

6. Shell Programming and Scripting

Join fields from files with duplicate lines

I have two files, file1.txt: 1 abc 2 def 2 dgh 3 ijk 4 lmn file2.txt 1 opq 2 rst 3 uvw My desired output is: 1 abc opq 2 def rst 2 dgh rst 3 ijk uvw (2 Replies)
Discussion started by: xan.amini
2 Replies
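This one is complete enough to sketch: load file2.txt as a lookup keyed on the first field, then append the matching value to each file1.txt line. Lines with no match (such as '4 lmn') are dropped here, since the quoted desired output does not show them:

Code:
# pass 1: remember file2's value per key; pass 2: append it to the matching file1 lines
awk 'NR==FNR{val[$1]=$2; next} $1 in val{print $0, val[$1]}' file2.txt file1.txt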

7. Shell Programming and Scripting

How To Remove Duplicate Based on the Value?

Hi, sometimes I get duplicated values in my files: bundle_identifier= B Sometext=ABC bundle_identifier= A bundle_unit=500 Sometext123=ABCD bundle_unit=400 I need to check whether there are duplicated values or not; if yes, I need to check if the value is A or B when Bundle_Identified,... (2 Replies)
Discussion started by: OTNA
2 Replies

8. Shell Programming and Scripting

Remove duplicate lines from file based on fields

Dear community, I have to remove duplicate lines from a file that contains a very big amount of rows (millions?) based on the 1st and 3rd columns. The data are like this: Region 23/11/2014 09:11:36 41752 Medio 23/11/2014 03:11:38 4132 Info 23/11/2014 05:11:09 4323... (2 Replies)
Discussion started by: Lord Spectre
2 Replies
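This request maps onto the classic seen-array idiom: treat columns 1 and 3 together as the key and keep only the first line for each key (whitespace-separated fields as in the quoted sample; file name hypothetical):

Code:
# key = 1st and 3rd columns; print a line only the first time its key appears
awk '!seen[$1,$3]++' logfile.txt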

9. Shell Programming and Scripting

Find duplicate values in specific column and delete all the duplicate values

Dear folks, I have a map file of around 54K lines; some of the values in the second column are the same, and I want to find them and delete all of them. I looked over duplicate commands, but my case is not to keep one of the duplicate values. I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
4 Replies
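Because no copy of a duplicated value should survive, a one-pass '!seen' filter is not enough here; a two-pass count does it (file name hypothetical, default whitespace field splitting assumed):

Code:
# pass 1: count each 2nd-column value; pass 2: keep only lines whose value occurs exactly once
awk 'NR==FNR{cnt[$2]++; next} cnt[$2]==1' map.txt map.txt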

10. UNIX for Beginners Questions & Answers

Discarding records with duplicate fields

Hi, My input looks like this (tab-delimited): grp1 name2 firstname M 55 item1 item1.0 grp1 name2 firstname F 55 item1 item1.0 grp2 name1 firstname M 55 item1 item1.0 grp2 name2 firstname M 55 item1 item1.0 Using awk, I am trying to discard the records with common fields 2, 4, 5, 6, 7... (4 Replies)
Discussion started by: beca123456
4 Replies
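The key list above is truncated in the preview; as far as it shows, the key is tab-separated fields 2, 4, 5, 6 and 7. A hedged two-pass sketch that keeps only the records whose key occurs once (if the goal is instead to keep one representative per key, a one-pass '!seen[key]++' filter does that):

Code:
# key = fields 2,4,5,6,7 (as far as the truncated preview shows); keep records whose key is unique
awk -F'\t' '{key = $2 SUBSEP $4 SUBSEP $5 SUBSEP $6 SUBSEP $7}
            NR==FNR{cnt[key]++; next}
            cnt[key]==1' data.tsv data.tsv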