Trying to remove duplicates based on field and row


 
# 1  
Old 12-11-2013
Trying to remove duplicates based on field and row

I am trying to see if I can use awk to remove duplicates from a file. This is the file:

Code:
-==> Listvol <==
deleting   /vol/eng_rmd_0941
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_1006
deleting   /vol/eng_rmd_1012
rearrange  /vol/eng_rmd_0943

However, I am having issues. I want to drop a volume's "deleting" lines (2nd field) whenever that same volume also has an entry under "rearrange". So I want the file to look like this:

Correct file:

Code:
-==> Listvol <==
deleting   /vol/eng_rmd_0941
deleting   /vol/eng_rmd_1006
deleting   /vol/eng_rmd_1012
rearrange  /vol/eng_rmd_0943

I have tried the following, but I think it is not recognizing the 2nd occurrence of the volume.

Code:
cat test3 | gawk '{if (Line!=$1$2) print; Line=$1$2}'


Code:
cat test3 | gawk 'BEGIN{RS="="} $1=$1' FS='\t'

Also, while searching for similar issues I found a one-liner that builds an array from the file. I am not sure how it works, but I think it does not take the "rearrange" part into account:

Code:
cat test3 | gawk '!arr[$2]++'

The above expression gets rid of the last line, which is NOT what I want; I want only the "rearrange" line for that volume to be output. In addition, I have seen people work with a command called "tac", but I don't have it on my distribution.

Does anybody have ideas? I am really a novice at removing duplicates and am not sure how the process works.
# 2  
Old 12-11-2013
If it is OK that the order is not preserved:
Code:
awk '
        /^-/ {
                print $0
        }
        !/^-/ {
                if ( !(A[$2]) )
                        A[$2] = $1
                else if ( $1 == "rearrange" )
                        A[$2] = $1
        }
        END {
                for ( k in A )
                        print A[k], k
        }
' file
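
If the original order matters, a two-pass variant along these lines should also work (untested sketch: the file is read twice, and the first pass only records which volumes have a "rearrange" line):
Code:
awk '
        NR == FNR {
                # first pass: remember every volume that has a rearrange line
                if ( $1 == "rearrange" )
                        R[$2] = 1
                next
        }
        FNR == 1 {
                # second pass: always print the header line
                print
                next
        }
        $1 == "deleting" && ( $2 in R ) {
                # skip deleting lines whose volume also gets rearranged
                next
        }
        !seen[$1, $2]++
' file file

The last pattern also drops exact duplicate lines, keeping the first occurrence of each.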

# 3  
Old 12-11-2013
Try this if order doesn't matter:
Code:
$ cat <<eof | awk 'NR==1;NR>1{A[$2]=$0}END{for(i in A)print A[i]}'
-==> Listvol <==
deleting   /vol/eng_rmd_0941
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_1006
deleting   /vol/eng_rmd_1012
rearrange  /vol/eng_rmd_0943
eof

-==> Listvol <==
deleting   /vol/eng_rmd_0941
rearrange  /vol/eng_rmd_0943
deleting   /vol/eng_rmd_1012
deleting   /vol/eng_rmd_1006

For a file:
Code:
$ awk 'NR==1;NR>1{A[$2]=$0}END{for(i in A)print A[i]}' file
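
To write the de-duplicated output back over the original, a sketch (file.new is just a scratch name; check the result before replacing your data):
Code:
awk 'NR==1;NR>1{A[$2]=$0}END{for(i in A)print A[i]}' file > file.new && mv file.new file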


# 4  
Old 12-11-2013
Thanks, can you explain

Akshay:

I am relatively new to awk. This worked great! If you have time, could you briefly explain the syntax?

Also, what is the difference between your statement and

Code:
gawk '{if (Line!=$1$2) print; Line=$2}'

Thanks again! This statement was very concise. Just brilliant!
# 5  
Old 12-11-2013
Quote:
Originally Posted by newbie2010
Akshay:

I am relatively new to awk. This worked great! If you have time, could you briefly explain the syntax?

Also, what is the difference between your statement and

Code:
gawk '{if (Line!=$1$2) print; Line=$2}'

Thanks again! This statement was very concise. Just brilliant!


awk 'NR==1; ---> prints your header, which is on line number 1

NR>1{A[$2]=$0} ---> for line numbers greater than 1, array A indexed by column 2 ($2) holds the whole line ($0), so a later line with the same $2 overwrites an earlier one

END{for(i in A)print A[i]}' ---> in the END block the array contents are printed (in no particular order)

Code:
gawk '{if (Line!=$1$2) print; Line=$2}'

Line!=$1$2 ---> if Line is not equal to the concatenation of column 1 and column 2, the line is printed; this is true for the first line since Line is not set yet. After printing, the variable Line is assigned the value of $2 (Line=$2), and the same check is made for the next line.

--edit--

Your code will not work reliably because it only compares against the previous line's pattern; any duplicate that is not on consecutive lines will still get printed.

And awk '!arr[$2]++' prints only the first line found for each value of field 2 ($2); that is the reason why rearrange is not getting printed:

Code:
$ cat <<eof | awk '!arr[$2]++'                     
-==> Listvol <==
deleting   /vol/eng_rmd_0941
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_1006
deleting   /vol/eng_rmd_1012
rearrange  /vol/eng_rmd_0943
eof

-==> Listvol <==
deleting   /vol/eng_rmd_0941
deleting   /vol/eng_rmd_0943
deleting   /vol/eng_rmd_1006
deleting   /vol/eng_rmd_1012

Yoda's solution keeps track of "rearrange" in field 1. My solution assumes the file is ordered so that the rearrange line comes last, since it simply saves the last line found for each volume; if the file is not ordered like that, I think you should go with Yoda's solution.
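
By the way, if a reverse command is available (tac on GNU systems, or tail -r on BSD; you said tac is not on your distribution, so treat this as an untested aside), reversing the file lets !arr[$2]++ keep the last line per volume instead of the first, and the original order is preserved:
Code:
tac file | awk '!arr[$2]++' | tac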

# 6  
Old 12-11-2013
Try also (a bit lengthy):
Code:
cat -n file | sort -r |awk '!T[$2,$3]++' | sort | awk '{print $2 "\t" $3}'
-==>    Listvol
deleting    /vol/eng_rmd_0941
deleting    /vol/eng_rmd_0943
deleting    /vol/eng_rmd_1006
deleting    /vol/eng_rmd_1012
rearrange    /vol/eng_rmd_0943

# 7  
Old 01-23-2014
Hello All,

one more approach by using awk as follows.

Input file:
Code:
-==>    Listvol
deleting    /vol/eng_rmd_0941
deleting    /vol/eng_rmd_0943
deleting    /vol/eng_rmd_1006
deleting    /vol/eng_rmd_1012
rearrange    /vol/eng_rmd_0943


Code:
sort -rk2 check_actual_read_opposite | awk '$2  == g {next} {g=$2} 1'


The output will be as follows. The lines come out in the reverse-sorted order of column two, but each column one value stays paired with its column two value.

Code:
-==>    Listvol
deleting    /vol/eng_rmd_1012
deleting    /vol/eng_rmd_1006
rearrange    /vol/eng_rmd_0943
deleting    /vol/eng_rmd_0941

NOTE: check_actual_read_opposite is the input file name.
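
If the original line order needs to be kept, one possibility is to prefix each line with its number, pick the preferred line per volume, then sort the numbers back and strip them (untested sketch using only nl, sort, awk and cut):
Code:
nl -ba check_actual_read_opposite | sort -b -k3,3 -k2,2r | awk '!seen[$3]++' | sort -n | cut -f2-

Here -k2,2r makes the rearrange line sort ahead of the deleting lines for the same volume, so !seen[$3]++ keeps it; $3 is the volume path because nl adds the line number as the first field.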



Thanks,
R. Singh