I have a file like the one below.
In the file, line 1 is repeated at line 7, and line 3 is repeated at lines 9 and 10.
My requirement is to remove the duplicate lines and keep only the last occurrence of each.
The suggested solutions remove subsequent duplicates, keeping the first instance.
The requirement here, keeping the last instance, is more complex.
The most comprehensive solution is Perl:
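The Perl code itself did not make it into this copy of the post; a one-liner in the same spirit (an illustrative sketch, not necessarily the original, and one that holds the whole file in memory) might be:

```shell
# Keep only the last occurrence of each line, preserving the order
# in which those last occurrences appear.
perl -ne '
    push @lines, $_;     # remember every line in input order
    $last{$_} = $.;      # line number of the most recent occurrence
    END {
        for my $i (1 .. @lines) {
            print $lines[$i-1] if $last{$lines[$i-1]} == $i;
        }
    }' file
```

For the sample above (line 1 repeated at line 7, line 3 at lines 9 and 10), only the final copies of the duplicated lines would survive, in their original relative order.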
Another one is awk | sort | cut:
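That pipeline is also missing here; assuming it follows the usual decorate-sort-undecorate pattern its name suggests (and that the input lines contain no tabs), it might look like:

```shell
# Tag lines with their number, sort by content with the highest line
# number first, keep the first (i.e. last-occurring) copy of each,
# restore the original order, then remove the tag.
awk -v OFS='\t' '{print NR, $0}' file \
    | sort -t "$(printf '\t')" -k2 -k1,1rn \
    | awk -F '\t' '!seen[$2]++' \
    | sort -t "$(printf '\t')" -k1,1n \
    | cut -f2-
```

Because sort can spill to temporary files on disk, this form copes with inputs too large for an in-memory hash.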
Another, less efficient, solution would be tac | awk 'remove subsequent duplicates' | tac:
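Filling in the awk placeholder, such a pipeline might read as follows (GNU tac reverses the file, so keeping the first occurrence of each line in the reversed stream keeps the last one in the original order):

```shell
# Reverse, keep the first occurrence of each line, reverse back.
tac file | awk '!seen[$0]++' | tac
```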
Hi.
Unless you were to run out of memory, you could use tac file | awk '!($0 in S) {print; S[$0]}' | tac as posted by MadeInGermany.
Similar code in shell, with the filename in the variable FILE:
Line numbers are added, then the body is sorted, with the secondary sort being reverse numeric. GNU uniq allows fields to be skipped when comparing, so only the last occurrence of each line is kept; the result is then sorted in numeric order, restoring the original order, after which the line number is stripped. Before stripping, this looks like:
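Neither the shell code nor the intermediate listing survived in this copy; a sketch of the steps just described, assuming GNU uniq for the field-skipping option, might be:

```shell
# Number the lines, sort by content with a reverse-numeric secondary
# key, let uniq compare while skipping the number field (keeping the
# last occurrence of each line), restore the original order, then
# strip the numbers.
nl -ba "$FILE" \
    | sort -k2 -k1,1rn \
    | uniq -f 1 \
    | sort -k1,1n \
    | cut -f2-
```

A tee inserted between stages would show the numbered intermediate the text refers to.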
Pipelines are useful for doing large-granularity parallel computing, and the disk is not touched because the pipes are simply buffers (usually 65K). The tee in the above is to allow intermediate results to be seen.
I have run across some uniq versions that keep the most recent occurrence of a duplicate (Solaris, if memory serves).