06-09-2012
To do this, a bit more information is needed:
1) Is the file sorted, or are the lines you wish to 'keep' adjacent to each other in the file?
2) Is the order of the output important? Do the 'kept' lines need to appear in the same order as in the input?
3) Do some lines appear more than twice, and should those be kept as well, or do you want only the lines that appear exactly twice?
4) How big is the file, in number of lines?
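For instance, if the answer to question 3 is that only lines appearing exactly twice should be kept, one order-preserving sketch (assuming the file is small enough to read twice) is a two-pass awk; the sample data here is made up for illustration:

```shell
# Pass 1 (NR==FNR) counts every line; pass 2 prints a line once if its
# total count is exactly 2, preserving input order.
printf 'a\nb\na\nc\nb\nb\n' > input.txt
awk 'NR==FNR {count[$0]++; next} count[$0] == 2 && !printed[$0]++' input.txt input.txt
```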
Last edited by agama; 06-09-2012 at 02:57 PM. Reason: typo
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
I am doing KSH script to remove duplicate lines in a file. Let say the file has format below.
FileA
1253-6856
3101-4011
1827-1356
1822-1157
1822-1157
1000-1410
1000-1410
1822-1231
1822-1231
3101-4011
1822-1157
1822-1231
and I want to simplify it into a file with no duplicate lines... (5 Replies)
Discussion started by: Teh Tiack Ein
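One common order-preserving answer to this kind of request (a sketch, not necessarily what was posted in the thread) is the awk one-liner that prints each line only on its first occurrence:

```shell
# Keep the first occurrence of each line, drop later repeats,
# without needing to sort the file.
printf '3101-4011\n1822-1157\n1822-1157\n3101-4011\n' > FileA
awk '!seen[$0]++' FileA
```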
2. UNIX for Advanced & Expert Users
Hi,
I have a file with duplicate lines in it. I want to keep only the duplicate lines and delete the non-duplicates. Can someone please help me?
Regards
Narayana Gupta (3 Replies)
Discussion started by: guptan
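If it is enough to print one copy of each repeated line and the output order does not matter, the classic sketch is sort followed by uniq -d:

```shell
# uniq -d prints only the lines that occur more than once (one copy each);
# sort first, because uniq only compares adjacent lines.
printf 'x\ny\nx\nz\n' > file.txt
sort file.txt | uniq -d
```

GNU uniq also has a -D option that prints every copy of each duplicated line, though that flag is not in the BSD manual.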
3. UNIX for Dummies Questions & Answers
Hi,
I am trying to remove duplicate lines from a file. For example the contents of example.txt is:
this is a test
2342
this is a test
34343
this is a test
43434
and I want to remove only the "this is a test" lines, so the file ends up containing just the numbers:
2342... (4 Replies)
Discussion started by: ocelot
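A two-pass awk sketch that keeps only the lines occurring exactly once, in their original order (assuming the file can be read twice):

```shell
printf 'this is a test\n2342\nthis is a test\n34343\nthis is a test\n43434\n' > example.txt
# Pass 1 counts occurrences; pass 2 prints lines whose count is exactly 1.
awk 'NR==FNR {count[$0]++; next} count[$0] == 1' example.txt example.txt
```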
4. UNIX for Dummies Questions & Answers
Hi,
I have a file which contains many duplicate lines. I want to redirect these duplicate lines into another file.
Suppose I have a file called file_dup.txt which contains some lines such as
file_dup.txt
A100-R1
ACCOUNTING-CONTROL
ACTONA-ACTASTOR
ADMIN-AUTH-STATS
ACTONA-ACTASTOR... (3 Replies)
Discussion started by: zing_foru
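A sketch of redirecting one copy of each duplicated line into a second file (order is not preserved, since uniq needs sorted input):

```shell
printf 'A100-R1\nACTONA-ACTASTOR\nADMIN-AUTH-STATS\nACTONA-ACTASTOR\n' > file_dup.txt
# Repeated lines go to duplicates.txt, one copy each.
sort file_dup.txt | uniq -d > duplicates.txt
cat duplicates.txt
```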
5. UNIX for Dummies Questions & Answers
I have a log file "logreport" that contains several lines as seen below:
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but... (18 Replies)
Discussion started by: Nysif Steve
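Since those log lines differ only in the leading timestamp, uniq's -f option (skip the first N fields when comparing) fits, assuming the repeats are adjacent, as they are in a chronological log:

```shell
printf '04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping\n' > logreport
# Compare lines ignoring the first (timestamp) field; keep the first of each run.
uniq -f 1 logreport
```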
6. Shell Programming and Scripting
Hi All,
I am trying to remove the duplicate entries in a file and print them just once. For example, if my input file has:
00:44,37,67,56,15,12
00:44,34,67,56,15,12
00:44,58,67,56,15,12
00:44,35,67,56,15,12
00:59,37,67,56,15,12
00:59,34,67,56,15,12
00:59,35,67,56,15,12... (7 Replies)
Discussion started by: faiz1985
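The thread's exact goal is not fully visible in the excerpt, but if the intent is one line per timestamp (the first comma-separated field), keeping the first entry seen, an awk sketch would be:

```shell
printf '00:44,37,67,56,15,12\n00:44,34,67,56,15,12\n00:59,37,67,56,15,12\n' > entries.csv
# Key on field 1; print only the first line seen for each key.
awk -F, '!seen[$1]++' entries.csv
```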
7. UNIX for Advanced & Expert Users
Hi All,
I have a very huge file (4GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this will work: I want to read each line of the file in a for loop, and want to... (16 Replies)
Discussion started by: krishnix
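When the file is too big for in-memory approaches like awk '!x++', sort(1) is the usual escape hatch: it spills to temporary files on disk, so sort -u can deduplicate files far larger than RAM, at the cost of losing the original line order. A sketch:

```shell
printf 'b\na\nb\n' > big.txt
# -u drops duplicates; -T points the temporary spill files at a filesystem
# with enough free space; LC_ALL=C makes comparisons byte-wise and faster.
LC_ALL=C sort -u -T /tmp big.txt > unique.txt
cat unique.txt
```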
8. Shell Programming and Scripting
Hey guys, need some help to fix this script. I am trying to remove all the duplicate lines in this file.
I wrote the following script, but it does not work. What is the problem?
The output file should only contain five lines:
Later! (5 Replies)
Discussion started by: Ernst
9. UNIX for Dummies Questions & Answers
I have a file with following data
A
B
C
I would like to print it like this n times (for example, 5 times):
A
B
C
A
B
C
A
B
C
A
B
C
A (7 Replies)
Discussion started by: nsuresh316
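Printing a whole file n times is a plain loop; a POSIX sh sketch:

```shell
printf 'A\nB\nC\n' > letters.txt
n=5
i=1
# Emit the file once per loop iteration, n times in total.
while [ "$i" -le "$n" ]; do
    cat letters.txt
    i=$((i + 1))
done
```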
10. Shell Programming and Scripting
Hi,
I have a csv file which contains millions of lines.
The first line (the header) repeats every 50,000 lines. I want to remove all the duplicate headers from the second occurrence onward (the first line should not be removed).
I don't want to use any pattern from the Header as I have some... (7 Replies)
Discussion started by: sudhakar T
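Keeping only the first header without matching on any header pattern can be done by remembering line 1 verbatim and dropping any later identical line; a sketch on made-up data:

```shell
printf 'HEADER\nrow1\nrow2\nHEADER\nrow3\n' > report.csv
# Remember line 1, print it, then drop every later copy of it.
awk 'NR == 1 {hdr = $0; print; next} $0 != hdr' report.csv
```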
UNIQ(1) BSD General Commands Manual UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-cdu] [-f fields] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the standard input comparing adjacent lines, and writes a copy of each unique input line to the standard output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Don't output lines that are not repeated in the input.
-f fields
Ignore the first fields in each input line when doing comparisons. A field is a string of non-blank characters separated from adjacent fields by blanks. Field numbers are one based, i.e. the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first fields fields will be ignored. Character numbers are one based, i.e. the first character is
character one.
-u Don't output lines that are repeated in the input.
If additional arguments are specified on the command line, the first such argument is used as the name of an input file, the second is used
as the name of an output file.
The uniq utility exits 0 on success, and >0 if an error occurs.
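A quick illustration of the -c, -d, and -u options on a small pre-sorted input (the exact whitespace before -c's counts varies between implementations):

```shell
printf 'a\na\nb\nc\nc\nc\n' > sorted.txt
uniq -c sorted.txt   # each line prefixed with its occurrence count
uniq -d sorted.txt   # only the repeated lines
uniq -u sorted.txt   # only the non-repeated lines
```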
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility is expected to be IEEE Std 1003.2 (``POSIX.2'') compatible.
BSD                             January 6, 2007                             BSD