how to delete duplicate rows based on last column


 
# 8  (08-25-2009)

It's not exactly working.
To clarify: my data has different values in the first column, not all the same as I had written in the question, and the data in my file looks something like this:
Code:
1900  2  7  0   9.5000  76.5000 4.30
1900  2  7  0   9.5000  76.5000 6.00
1901  2 15  0  26.0000 100.0000 6.00
1901  4 27  0  12.0000  75.0000 5.00
1901  4 17 21  40.0000  71.0000 5.90
1902  4 17 21  40.0000  71.0000 5.90
1902  8 12 17  39.5000  68.5000 6.20
1902  8 22  3  40.0000  77.0000 8.60
1902  8 22  3  40.0000  76.5000 8.20
1902  8 22  3  40.0000  76.5000 8.30
1902  8 22  3  40.0000  77.0000 8.20
1903  8 30 21  37.0000  71.0000 7.70
1904  9 20  6  38.5000  67.0000 6.30

The output I need is exactly like this:

Code:
1900  2  7  0   9.5000  76.5000 6.00
1901  2 15  0  26.0000 100.0000 6.00
1901  4 27  0  12.0000  75.0000 5.00
1901  4 17 21  40.0000  71.0000 5.90
1902  8 12 17  39.5000  68.5000 6.20
1902  8 22  3  40.0000  77.0000 8.60
1902  8 22  3  40.0000  76.5000 8.30
1903  8 30 21  37.0000  71.0000 7.70
1904  9 20  6  38.5000  67.0000 6.30
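The requirement above (for each set of rows that agree on everything except the last column, keep the one with the largest last column) can be sketched with an awk associative array. A minimal sketch, assuming whitespace-separated fields; `/tmp/quakes.txt` is a made-up file name, and the sample is built from the data above:

```shell
# Build a small sample from the data above (/tmp/quakes.txt is a placeholder name).
cat > /tmp/quakes.txt <<'EOF'
1900  2  7  0   9.5000  76.5000 4.30
1900  2  7  0   9.5000  76.5000 6.00
1902  8 22  3  40.0000  77.0000 8.60
1902  8 22  3  40.0000  76.5000 8.20
1902  8 22  3  40.0000  76.5000 8.30
1902  8 22  3  40.0000  77.0000 8.20
EOF

awk '{
    mag = $NF + 0                            # last field (magnitude), forced numeric
    key = $0
    sub(/[ \t]+[^ \t]+$/, "", key)           # key = the whole row minus the last field
    if (!(key in best) || mag > best[key]) { # keep the row with the largest magnitude
        best[key] = mag
        row[key]  = $0
    }
}
END { for (k in row) print row[k] }' /tmp/quakes.txt | sort -n
```

For each key the array remembers only the row with the biggest last field, so the 4.30 row and both 8.20 rows drop out; `sort -n` restores rough chronological order, since `for (k in row)` visits keys in no particular order.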


Last edited by vgersh99; 08-25-2009 at 11:21 AM.. Reason: code tags, PLEASE!
# 9  (08-25-2009)
To keep the forums high quality for all users, please take the time to format your posts correctly.

First of all, use Code Tags when you post any code or data samples so others can easily read your code. You can easily do this by highlighting your code and then clicking on the # in the editing menu. (You can also type code tags [code] and [/code] by hand.)

Second, avoid adding color or different fonts and font size to your posts. Selective use of color to highlight a single word or phrase can be useful at times, but using color, in general, makes the forums harder to read, especially bright colors like red.

Third, be careful when you cut and paste: edit out any odd characters and make sure all links are working properly.

Thank You.

The UNIX and Linux Forums
# 10  (08-25-2009)
Reva,

it's working properly for me; of course, with sort you can put the sequence in order.

Something like this:
Code:
awk '{
    va2 = $NF; va1 = $(NF-1); va = $(NF-2)   # save the last three fields
    $NF = ""; $(NF-1) = ""; $(NF-2) = ""     # blank them: $0 is now the key
    val = va " " va1 " " va2
    if (!($0 in a) || val > a[$0])           # keep the larger trio per key (string compare)
        a[$0] = val
}
END { for (i in a) print i, a[i] }' file_name.txt | sort -k2n   # -k2n replaces the obsolete "+1n" syntax


Last edited by panyam; 08-25-2009 at 12:10 PM..
# 11  (08-26-2009)
Yes, I will follow that from my next post.

---------- Post updated 08-26-09 at 04:34 AM ---------- Previous update was 08-25-09 at 08:49 AM ----------

Thanks for the help, I got it.

---------- Post updated at 04:45 AM ---------- Previous update was at 04:34 AM ----------

Hi,
now if I have data like the one shown below, how do I filter it? I mean, delete duplicate entries in such a way that it keeps the largest value in the last column, and it must choose the row that has the most sets of values in it.
For example, the data in my file is:
Code:
1900  2  7  0   9.5000  76.5000 0.00 4.30 0.00 0.00 0.00 4.30
1900  2  7  0  10.8000  76.8000 0.00 6.00 0.00 0.00 0.00 6.00
1901 12  1  0  37.8000  66.0000 0.00 5.00 0.00 0.00 0.00 5.00
1901 12  1  0  37.8000  66.0000 0.00 4.60 3.00 3.50 3.50 4.60
1902  4 17 21  40.0000  71.0000 0.00 5.80 0.00 5.90 5.70 5.90
1902  8 12 17  39.5000  68.5000 0.00 6.00 0.00 6.20 5.90 6.20
1902  8 22  3  40.0000  77.0000 0.00 0.00 0.00 8.00 8.60 8.60
1902  8 22  3  40.0000  76.5000 0.00 0.00 0.00 0.00 8.20 8.20
1902  8 22  3  40.0000  76.5000 0.00 0.00 0.00 0.00 8.30 8.30
1903  5 16  6   5.3600  80.0000 0.00 4.50 0.00 5.00 0.00 5.00
1903  5 16  6   5.3600  80.0000 0.00 4.30 0.00 3.00 0.00 4.30
The output for it is:
Code:
1900  2  7  0  10.8000  76.8000 0.00 6.00 0.00 0.00 0.00 6.00
1901 12  1  0  37.8000  66.0000 0.00 4.60 3.00 0.00 3.50 4.60
1902  4 17 21  40.0000  71.0000 0.00 5.80 0.00 5.90 5.70 5.90
1902  8 12 17  39.5000  68.5000 0.00 6.00 0.00 6.20 5.90 6.20
1902  8 22  3  40.0000  77.0000 0.00 0.00 0.00 8.00 8.60 8.60
1903  5 16  6   5.3600  80.0000 0.00 4.50 0.00 5.00 0.00 5.00
Here it removes duplicates, keeping the longest row (the one with the most non-zero values) and the largest value in the last column.
If anyone has an idea, please help me out.
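One way to read that requirement: treat the first four columns (year, month, day, hour) as the duplicate key, count the non-zero values from column 7 onward, prefer the row with more of them, and break ties on the last column. That reading, the key choice, and the file name `/tmp/quakes2.txt` are all assumptions; a minimal sketch:

```shell
# Sample built from the data above (/tmp/quakes2.txt is a placeholder name).
cat > /tmp/quakes2.txt <<'EOF'
1900  2  7  0   9.5000  76.5000 0.00 4.30 0.00 0.00 0.00 4.30
1900  2  7  0  10.8000  76.8000 0.00 6.00 0.00 0.00 0.00 6.00
1901 12  1  0  37.8000  66.0000 0.00 5.00 0.00 0.00 0.00 5.00
1901 12  1  0  37.8000  66.0000 0.00 4.60 3.00 3.50 3.50 4.60
1903  5 16  6   5.3600  80.0000 0.00 4.50 0.00 5.00 0.00 5.00
1903  5 16  6   5.3600  80.0000 0.00 4.30 0.00 3.00 0.00 4.30
EOF

awk '{
    key = $1 FS $2 FS $3 FS $4               # assumption: date + hour identify a duplicate
    n = 0
    for (i = 7; i <= NF; i++)                # count the non-zero measurements
        if ($i + 0 != 0) n++
    mag = $NF + 0
    # prefer the row with more non-zero values; break ties on the last column
    if (!(key in cnt) || n > cnt[key] || (n == cnt[key] && mag > best[key])) {
        cnt[key] = n; best[key] = mag; row[key] = $0
    }
}
END { for (k in row) print row[k] }' /tmp/quakes2.txt | sort -n
```

On this sample it keeps the 10.8000 row for 1900 (tie on the count, larger last column), the 4.60 row for 1901 (more non-zero values), and the 5.00 row for 1903.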

Last edited by reva; 08-26-2009 at 01:06 PM..
# 12  (08-26-2009)
Where did you get:

Code:
1901 12  1  0  37.8000  66.2000 0.00 4.60 3.00 0.00 3.50 4.60

in the output you mentioned?


I hope that, with the code we have given you, you can work a bit further on your own to achieve your task.
# 13  (08-26-2009)
Yes, I have corrected my output; please check it once more.

Last edited by reva; 09-01-2009 at 01:35 AM..
# 14  (09-01-2009)
If I have 19 columns and I need to check duplicates only on columns 1, 2, 3 and 4, and then take the largest value of column 18, how do I use awk? Please help me out, and try explaining the code as well, as I am very new to UNIX.
Thanks in advance.
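A minimal sketch of that, with the pieces explained below; the 19-column sample rows and the name `/tmp/data19.txt` are made up for illustration:

```shell
# Made-up 19-column sample: two rows share the key (columns 1-4), one does not.
cat > /tmp/data19.txt <<'EOF'
2001 1 2 3 10.0 20.0 0 0 0 0 0 0 0 0 0 0 0 5.5 1
2001 1 2 3 10.0 20.0 0 0 0 0 0 0 0 0 0 0 0 7.1 2
2002 4 5 6 11.0 21.0 0 0 0 0 0 0 0 0 0 0 0 3.2 3
EOF

awk '{
    key = $1 FS $2 FS $3 FS $4                 # only columns 1-4 decide "duplicate"
    if (!(key in max) || $18 + 0 > max[key]) { # first time seen, or a larger column 18
        max[key] = $18 + 0                     # remember the best column-18 value
        row[key] = $0                          # and the whole row that carried it
    }
}
END { for (k in row) print row[k] }' /tmp/data19.txt | sort -n
```

How it works: `key` glues columns 1 to 4 together with the field separator, so rows agreeing on those four columns share one entry in the arrays. `max[key]` holds the largest column-18 value seen so far for that key (the `+ 0` forces a numeric comparison), and `row[key]` holds the matching full row. The `END` block prints one survivor per key; here the 7.1 row wins over the 5.5 row.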
