UNIQ(1)                    General Commands Manual                    UNIQ(1)

NAME
uniq - report repeated lines in a file
SYNOPSIS
uniq [ -udc [ +n ] [ -n ] ] [ input [ output ] ]
DESCRIPTION
Uniq reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are removed;
the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found; see sort(1). If the -u flag
is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of just the repeated
lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
The n arguments specify skipping an initial portion of each line in the comparison:
-n The first n fields together with any blanks before each are ignored. A field is defined as a string of non-space, non-tab charac-
ters separated by tabs and spaces from its neighbors.
+n The first n characters are ignored. Fields are skipped before characters.
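A quick sketch of the three modes on a small sorted file (the file name is illustrative):

```shell
# Given a sorted file whose lines are: a a b c c
printf 'a\na\nb\nc\nc\n' > sample.txt

uniq sample.txt        # default: one copy of every line  -> a b c
uniq -d sample.txt     # only lines that were repeated    -> a c
uniq -u sample.txt     # only lines that were not repeated -> b
uniq -c sample.txt     # each line preceded by its count
```

Note that the default output is exactly the union of the -d and -u outputs, as the text above states.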
SEE ALSO
sort(1), comm(1)
uniq(1)                    General Commands Manual                    uniq(1)

Name
uniq - report repeated lines in a file
Syntax
uniq [-udc[+n][-n]] [input [output]]
Description
The uniq command reads the input file comparing adjacent lines. In the normal case, the second and succeeding copies of repeated lines are
removed; the remainder is written on the output file. Note that repeated lines must be adjacent in order to be found. For further
information, see sort(1).
Options
The n arguments specify skipping an initial portion of each line in the comparison:
-n Skips specified number of fields. A field is defined as a string of non-space, non-tab characters separated by tabs and spaces from its
neighbors.
+n Skips specified number of characters in addition to fields. Fields are skipped before characters.
-c Displays number of repetitions, if any, for each line.
-d Displays only lines that were repeated.
-u Displays only unique (nonrepeated) lines.
If the -u flag is used, just the lines that are not repeated in the original file are output. The -d option specifies that one copy of
just the repeated lines is to be written. The normal mode output is the union of the -u and -d mode outputs.
The -c option supersedes -u and -d and generates an output report in default style but with each line preceded by a count of the number of
times it occurred.
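The historical +n/-n skip syntax shown above is spelled -s (skip characters) and -f (skip fields) in POSIX and in modern implementations. A short sketch of field skipping using the modern -f form (file name illustrative):

```shell
# Compare lines only after skipping the first field.
# Historical syntax: uniq -1;  POSIX/modern syntax: uniq -f 1
printf 'x apple\ny apple\nz banana\n' > skip.txt
uniq -f 1 skip.txt     # "y apple" is a duplicate once field 1 is ignored
```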
See Also
comm(1), sort(1)
Hi all,
I have a file that contains a list of codes (shown below).
I want to 'uniq' the file using only the first field. Anyone know an easy way of doing it?
Cheers,
Dave
##### Input File #####
1xr1 1xws 1yxt 1yxu 1yxv 1yxx 2o3p 2o63 2o64 2o65
1xr1 1xws 1yxt 1yxv 1yxx 2o3p 2o63 2o64... (8 Replies)
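Since uniq always compares whole lines (apart from the skip counts), one common approach for deduplicating on field 1 only is an awk one-liner; a sketch, with a made-up file name:

```shell
# Print a line only the first time its first field is seen;
# order is preserved and the lines need not be sorted.
awk '!seen[$1]++' codes.txt
```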
infile:
z y x
c b a
desired output:
x y z
a b c
I don't want to sort the lines into this:
a b c
x y z
nor this:
c b a
z y x
The number of fields per line and number of lines is indeterminate. The field separator is always a space.
Thanks for the use of your collective brains.... (11 Replies)
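One way to reverse the fields on each line without touching the line order is a small awk loop (a sketch; "infile" is the poster's file):

```shell
# Print the space-separated fields of each line in reverse order.
awk '{ for (i = NF; i > 1; i--) printf "%s ", $i; print $1 }' infile
```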
I've got file A with (say) 1M lines in it ... ascii text, space delimited ...
I've got file B with (say) 10M lines in it ... same structure.
I want to remove any lines from A that appear (identically) in B and print the remaining (say) 900K lines. (And I want to do it in zero time of... (14 Replies)
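A single-pass sketch for this set difference: read B into an in-memory hash, then stream A and print only the lines not in the hash (memory use is proportional to B, so with a large B, `grep -vxFf B A` is an alternative):

```shell
# Print the lines of file A that do not occur anywhere in file B.
awk 'NR == FNR { inB[$0]; next } !($0 in inB)' B A
```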
Hi everybody:
Could anybody tell me how I can delete repeated rows from a file?, this is, for exemple I have a file like this:
0.490 958.73 281.85 6.67985 0.002481
0.490 954.833 283.991 8.73019 0.002471
0.590 950.504 286.241 6.61451 0.002461
0.690 939.323 286.112 6.16451 0.00246
0.790... (8 Replies)
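A common way to delete repeated rows while keeping the original order (unlike uniq, the duplicates do not have to be adjacent); the file name is illustrative:

```shell
# Print each whole line only the first time it is seen.
awk '!seen[$0]++' data.txt
```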
AWK help:
I have a file with following format. I need to remove any entries which are repeated based on first 3 characters. So from the following files I need to remove any entries start with "mas".
mas01bct
mas02bct
mas03bct
mas01bct
mas01bct
mas01bct
mas11bct
mas01bct
mas01bct... (11 Replies)
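If the intent is to keep only the first entry for each 3-character prefix (one reading of the question), the seen-array idiom works on a substring; file name illustrative:

```shell
# Keep the first line for each distinct 3-character prefix.
awk '!seen[substr($0, 1, 3)]++' entries.txt
```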
Dear experts,
I've been trying to figure this out for a while, but I can't. Please help.
I have a file with approx. 1 million lines. The contents are separated with "----------". Please see the example below.
So my problem is, I need to find all texts that have the keyword "GAA", but I need... (14 Replies)
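A portable awk sketch for this kind of block search: accumulate lines until a "----------" separator, then print the whole block only if it matched (the separator lines themselves are dropped from the output; file name illustrative):

```shell
# Print every "----------"-separated block that contains GAA.
awk '
    /^----------$/ { if (block ~ /GAA/) printf "%s", block; block = ""; next }
                   { block = block $0 "\n" }
    END            { if (block ~ /GAA/) printf "%s", block }
' bigfile.txt
```

GNU awk could do the same with a multi-character record separator (RS="----------"), but the loop above also works with traditional one-character-RS awks.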
Can anyone help filter the unique records in the example below? Thank you very much.
Input file
20090503011111|test|abc
20090503011112|tet1|abc|def
20090503011112|test1|bcd|def
20090503011131|abc|abc
20090503011131|bbc|bcd
20090503011152|bcd|abc
20090503011151|abc|abc... (8 Replies)
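Assuming "unique" here means one record per timestamp (field 1) — the post does not state the exact rule — the first-field dedupe idiom applies with | as the separator; file name illustrative:

```shell
# Keep the first record for each distinct timestamp (field 1).
awk -F'|' '!seen[$1]++' records.txt
```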
I am new to Perl. I have a text file of around 1000 lines containing around 500 repeated lines which I feel are of no use, and I want to remove them. Can somebody help by providing sample code showing how I can remove these repeated lines from the file? (11 Replies)
Hello,
I need to insert varying lines (i.e., these lines are the output of another script) between lines starting with certain fields.
An example to make it more clear.
This is the file where I want to insert lines:
(save it as "input.txt")
ContrInMi_c_mir 2 10066 181014 200750... (12 Replies)
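A hypothetical sketch of the general pattern: after every line whose first field matches a marker, splice in the contents of another file. Both the marker value and the file name new.txt below are assumptions for illustration, not details from the post:

```shell
# After each line whose first field is "ContrInMi_c_mir",
# insert the contents of new.txt (names are illustrative).
awk '
    { print }
    $1 == "ContrInMi_c_mir" {
        while ((getline line < "new.txt") > 0) print line
        close("new.txt")
    }
' input.txt
```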
I've got scripts trawling the network and dumping parsed text into files with an Epoch timestamp in column 1. I append the old data to the new data then just want to keep the top entry if there is an identical duplicate below (column 1 needs to be ignored).
sort -ur +1 works a treat on a Solaris... (16 Replies)
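A portable awk sketch of the same idea — dedupe on everything except column 1, keeping the first entry seen (file name illustrative):

```shell
# Keep the first entry when all fields except column 1 match.
awk '{ key = ""; for (i = 2; i <= NF; i++) key = key FS $i }
     !seen[key]++' merged.txt
```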
Hi, below is the input file. I need to find the repeated words and sum up their values, which are in the second field after each repeated word. I'm trying but getting nowhere close to it. Kindly give me a hint on how to go about it.
Input
fruits,apple,20,fruits,mango,20,veg,carrot,12,veg,raddish,30... (11 Replies)
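Reading the input as comma-separated triples of group,name,value, one hint is to step through the fields three at a time and accumulate per group word (a sketch; file name illustrative):

```shell
# Sum the value (two fields after each group word) per group.
awk -F',' '{ for (i = 1; i <= NF; i += 3) sum[$i] += $(i + 2) }
           END { for (w in sum) print w, sum[w] }' input.txt
```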
Gents, please help.
I have this input:
I would like to count how many times the records that have only the values 14 and 98 are repeated.
1000 1
1000 1
1001 1
1001 1
1002 98
1002 98 ... (9 Replies)
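One sketch: select only the records whose second field is 14 or 98, count each distinct record, and print the totals (file name illustrative):

```shell
# Count how often each record with value 14 or 98 repeats.
awk '$2 == 14 || $2 == 98 { count[$0]++ }
     END { for (rec in count) print rec, count[rec] }' data.txt
```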
I have the following code to count the number of times each name occurred in one file. The code is working fine and the output is exactly what I want. The problem is that the real code has more than 50 names in function listname, which causes function name to have more than 50 cases, and function... (14 Replies)
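Without seeing the script, one common way to avoid a 50-branch case statement is to let an awk associative array do the dispatch (a sketch, assuming one name per line in field 1):

```shell
# Count every name in one pass; no per-name branches needed.
awk '{ count[$1]++ } END { for (n in count) print n, count[n] }' names.txt
```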
I have a file where every line includes four expressions with a caret in the middle (plus some other "words" or fields, always separated by spaces). I would like to extract from this file, all those lines such that each of the four expressions containing a caret appears in at least four different... (9 Replies)
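A two-pass awk sketch: the first pass counts, for each caret-containing word, how many lines it appears on; the second pass prints a line only if every one of its caret words appears on at least four lines (the same file is named twice on the command line):

```shell
awk '
    NR == FNR {
        split("", seenHere)                 # reset the per-line set
        for (i = 1; i <= NF; i++)
            if ($i ~ /\^/ && !($i in seenHere)) {
                seenHere[$i] = 1
                lines[$i]++                 # count lines, not occurrences
            }
        next
    }
    {
        for (i = 1; i <= NF; i++)
            if ($i ~ /\^/ && lines[$i] < 4) next
        print
    }
' file file
```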
Remove duplicate lines which have been repeated 4 times (attached: test.txt).
I tried the command below but am not getting the expected output.
for i in `cat test.txt | uniq`
do
num=`cat test.txt | grep $i | wc -l`
echo $i $num
done
test.txt
... (17 Replies)
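Reading the requirement as "drop every line that occurs exactly 4 times", a two-pass awk avoids the grep-in-a-loop above (which, incidentally, also needs "$i" quoted and grep -xc to count exact whole-line matches):

```shell
# Pass 1 counts each line; pass 2 prints lines whose count is not 4.
awk 'NR == FNR { count[$0]++; next } count[$0] != 4' test.txt test.txt
```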