Filtering duplicate lines

# 1  
Old 02-08-2002
Filtering duplicate lines

Does anybody know a command that filters duplicate lines out of a file? Similar to the uniq command, but one that can handle duplicate lines no matter where they occur in the file?
# 2  
Old 02-08-2002
Check the man page on sort (sort -u).
# 3  
Old 02-08-2002
Thanks, that does almost what I want. However, is there a way I can do it while preserving the original order of the data?
# 4  
Old 02-08-2002
What is the original order of the file? Is it order or chaos?

If the file has an order (by date/time, by nodename, by some field), then you can also sort by that field (check the man page).

If it is chaos - meaning no specific order (it just came that way!) - then I believe you would need to write a script (Perl) or a program (your preference of language) to get what you are trying to do.
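For what it's worth, an awk one-liner can remove duplicates while preserving the original line order, no sorting required (a sketch; input.txt and output.txt are placeholder names):

```shell
# Print each line only the first time it is seen, preserving input order.
# seen[$0]++ evaluates to 0 (false) on a line's first occurrence, so the
# negation makes awk print the line; every later occurrence is suppressed.
awk '!seen[$0]++' input.txt > output.txt
```

The whole file's worth of unique lines is held in memory in the associative array, so this is fine for typical files but can hit memory limits on very large ones.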
# 5  
Old 02-08-2002
The order is indeed chaos. The information is to be plotted out. If the input order is lost, then the plot loses meaning.

' ' 3031487.7 379165.3
' ' 3032181.8 379848.9
' ' 3005331.9 348245.4
' ' 3006027.4 348927.5
' ' 3006724.5 349610.6
' ' 3007420.4 350291.5
' ' 3008116.8 350974.5

I only need the first instance of a line beginning with "VEL"; however, if I sort the file, the attached information becomes jumbled.
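A variant of the order-preserving awk approach that deduplicates only the lines beginning with "VEL" and passes all the coordinate lines through untouched might look like this (a sketch; data.txt is a placeholder name):

```shell
# For lines starting with VEL: skip the line if that exact line has been
# seen before. The bare pattern "1" at the end is always true, so every
# line that was not skipped is printed, keeping the original order.
awk '/^VEL/ { if (seen[$0]++) next } 1' data.txt > deduped.txt
```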

# 6  
Old 02-08-2002
If the file is ordered as VEL(some number) followed by its plot data, an even simpler script could be written to read each line, saving the VEL info into a variable:
Read a line; if it has a VEL in it, compare it to the VEL variable.
If it is different, write it to the new file and save it into the VEL variable.
If it is the same, read the next line.
If the line has no VEL in it, write it to the new file.

Unless there is something else in the file that would mess with this, it should work.
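The steps above can be sketched in awk rather than a full script (an assumption that the file really is grouped as described; file names are placeholders):

```shell
# Write a VEL line only when it differs from the last VEL line written
# (prev starts empty); non-VEL lines are always copied through unchanged.
awk '/^VEL/ { if ($0 == prev) next; prev = $0 } { print }' input.txt > output.txt
```

Because only the most recent VEL line is remembered, this uses constant memory, but it only collapses consecutive repeats - the same trade-off as uniq.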
# 7  
Old 02-08-2002

Cheers, I think this is the inevitable conclusion/solution. I was hoping to get away with a ready-made UNIX command; uniq showed such promise.

I have a few other things to be doing until I have to cross this particular bridge again.

Thanks again for the ideas.