Discarding records with duplicate fields


 
# 1  
Old 02-03-2020

Hi,

My input looks like this (tab-delimited):
Code:
grp1	name2	firstname	M	55	item1	item1.0
grp1	name2	firstname	F	55	item1	item1.0
grp2	name1	firstname	M	55	item1	item1.0
grp2	name2	firstname	M	55	item1	item1.0

Using awk, I am trying to discard any record whose fields 2, 4, 5, 6 and 7 duplicate those of an earlier record, but only for 'grp2' records (field $1); records starting with 'grp1' should be kept no matter what. The desired output is:
Code:
grp1	name2	firstname	M	55	item1	item1.0
grp1	name2	firstname	F	55	item1	item1.0
grp2	name1	firstname	M	55	item1	item1.0

Here is my code, but I can't see what is wrong with it:
Code:
awk '
BEGIN{FS=OFS="\t"}
{
    a[$2 FS $4 FS $5 FS $6 FS $7]

    if($1 ~ /grp2/){
        if(a[$2 FS $4 FS $5 FS $6 FS $7]++==0){
            print $0
         }
    }
    else{
        print $0
    }
}' input.tab

Any help would be greatly appreciated.
# 2  
Old 02-03-2020
Hello beca123456,

Could you please try the following:

Code:
awk '/^grp1/{print;a[$2,$4,$5,$6,$7]++;next} !a[$2,$4,$5,$6,$7]++'   Input_file

Output will be as follows.

Code:
grp1    name2   firstname       M       55      item1   item1.0
grp1    name2   firstname       F       55      item1   item1.0
grp2    name1   firstname       M       55      item1   item1.0
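
The one-liner prints every grp1 line unconditionally while still recording its key fields in the array, and prints a grp2 line only when that key has not been seen before. Since the input is tab-delimited, you may prefer to set FS explicitly so the key is always built from whole tab-separated fields; a minimal sketch of that variant (untested, using the input.tab name from the first post):

Code:
awk 'BEGIN{FS=OFS="\t"} /^grp1/{print; a[$2,$4,$5,$6,$7]++; next} !a[$2,$4,$5,$6,$7]++' input.tab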

Thanks,
R. Singh
# 3  
Old 02-03-2020
Your code works. Thanks!
But I would like to understand why mine does not. It seems fine to me...
# 4  
Old 02-03-2020
Quote:
Originally Posted by beca123456
Your code works. Thanks!
But I would like to understand why mine does not. It seems fine to me...
Hello beca123456,

IMHO the problem in your code is that the first statement, a[$2 FS $4 FS $5 FS $6 FS $7], only references the array element and never increments it. The counter is incremented only inside the grp2 branch, so a key that first appears on a grp1 line is still at 0 when a later grp2 line with the same key fields is checked, and that duplicate is wrongly printed. If you move the increment to the first statement and test the resulting count instead, it works:

Code:
awk '
BEGIN{FS=OFS="\t"}
{
    a[$2 FS $4 FS $5 FS $6 FS $7]++
    if($1 ~ /grp2/){
        if(a[$2 FS $4 FS $5 FS $6 FS $7]==1){
            print $0
         }
    }
    else{
        print $0
    }
}'   Input_file

Here the increment a[$2 FS $4 FS $5 FS $6 FS $7]++ is done for every line, so by the time a grp2 line is checked the array already counts keys that appeared on earlier grp1 lines (which your version missed, because it never incremented the counter for them).
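
For comparison, the same logic can be written more compactly by saving the pre-increment count once per line; a rough sketch along those lines (untested, the seen variable is only for illustration):

Code:
awk '
BEGIN { FS = OFS = "\t" }
{ seen = a[$2 FS $4 FS $5 FS $6 FS $7]++ }   # remember the pre-increment count for this key
$1 !~ /grp2/ || seen == 0                    # grp1 lines always print; grp2 lines only on first sight of the key
' input.tab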

I hope this helps; please feel free to ask if you have any further questions.


Thanks,
R. Singh
# 5  
Old 02-03-2020
I see my mistake. Thanks!