01-16-2008
Quote:
Originally Posted by
orahi001
Hello,
I can remove duplicate entries in a file with:
sort File1 | uniq > File2
but how can I remove the duplicates without sorting the file?
I tried cat File1 | uniq > File2, but it doesn't work.
thanks
Many ways using awk or perl. Search this site using the duplicates keyword. Note that cat File1 | uniq doesn't work because uniq only collapses adjacent duplicate lines, which is why its input is normally sorted first.