09-13-2008
You could use the uniq command to avoid duplicates...
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi. I have a file that is a list of people & their credentials which I receive frequently. The issue is that when I cat this list, duplicate entries exist & are NOT CONSECUTIVE (i.e. uniq -1 may not work here).
I'm trying to write a script that will remove duplicate entries
the script can... (5 Replies)
Discussion started by: stevie_velvet
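For non-consecutive duplicates like these, a common answer is to let awk remember which lines it has already printed (the filename here is a placeholder):

```shell
# Print each line only the first time it is seen; unlike plain uniq,
# this works even when duplicates are not adjacent.
awk '!seen[$0]++' credentials.txt
```

Unlike `sort -u`, this keeps the surviving lines in their original order.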
2. Shell Programming and Scripting
Hi,
I have a text file with 2000 rows and 2000 columns (number of columns might vary from row to row) and "comma" is the delimiter.
In every row there may be a few duplicates, and we need to remove those duplicates and "shift left" the subsequent values.
ex:
111 222 111 555
444 999 666... (6 Replies)
Discussion started by: prvnrk
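One way to sketch the per-row deduplication is to track the fields seen so far in each row and rebuild the line from the survivors (shown with the default whitespace delimiter to match the example; set FS and OFS to "," for comma-separated data):

```shell
# For each row, keep only the first occurrence of every field and
# rebuild the line, so the remaining values shift left.
awk '{
    split("", seen)            # portable way to empty the per-row lookup table
    out = ""
    for (i = 1; i <= NF; i++)
        if (!seen[$i]++)
            out = out (out == "" ? "" : OFS) $i
    print out
}' data.txt
```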
3. Shell Programming and Scripting
I have input file like below.
I00789524 0213 5212
D00789524 0213 5212
I00778787 2154 5412
The first two records are the same (duplicates) except for the I & D in the first character. I want the non-duplicates (i.e. the 3rd line) as output. How can we get this? Can you help? Is there any single AWK or SED... (3 Replies)
Discussion started by: awk_beginner
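A hedged sketch: read the file twice, keying each record on everything after the leading I/D flag, then print only records whose key occurred once (the filename is a placeholder):

```shell
# First pass counts each key (the record minus its first character);
# second pass prints only records whose key occurred exactly once.
awk '{ key = substr($0, 2) }
     NR == FNR { count[key]++; next }
     count[key] == 1' input.txt input.txt
```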
4. Shell Programming and Scripting
Hi,
I have this large file, and sometimes there are duplicates; I want to find them and figure out how many there are.
So I have a file with multiple columns and the last column (9) has the duplicates.
eg.
yan
tar
tar
man
ban
tan
tub
tub
tub
Basically what I want to... (6 Replies)
Discussion started by: kylle345
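Counting duplicates in one column is the classic job for `sort | uniq -c`. A sketch using the last column (the poster's column 9), with a placeholder filename:

```shell
# Tally how often each value appears in the last column, then keep
# only the values seen more than once, together with their counts.
awk '{ print $NF }' data.txt | sort | uniq -c | awk '$1 > 1'
```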
5. UNIX for Dummies Questions & Answers
Hi,
How do I eliminate the duplicate values in UNIX? I have an Excel file which contains duplicate values.
Need to use this in a script.
Thanks in advance. (3 Replies)
Discussion started by: venkatesht
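Assuming the spreadsheet is first exported to plain text (e.g. CSV, one value per line), `sort -u` sorts and deduplicates in one step; the filenames here are illustrative:

```shell
# Sort the exported file and drop duplicate lines in one pass.
sort -u values.csv > values_dedup.csv
```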
6. UNIX for Advanced & Expert Users
Hi All
In UNIX we have a file where we have to remove the duplicates based on one specific column.
Can anybody tell me the command?
ex:
file1
id,name
1,ww
2,qwq
2,asas
3,asa
4,asas
4,asas
o/p:
1,ww
2,qwq
3,asa (7 Replies)
Discussion started by: raju4u
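A sketch that keeps the first line seen for each value in column 1 (note the posted o/p appears truncated — it omits id 4 — so this assumes first-occurrence-wins semantics):

```shell
# Keep the first line seen for each value in column 1 of a
# comma-separated file; NR == 1 passes the header through.
awk -F, 'NR == 1 || !seen[$1]++' file1
```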
7. Emergency UNIX and Linux Support
I want to remove all the duplicates in a file; I don't want even a single copy of a duplicated entry to remain.
For the input data:
12345|12|34
12345|13|23
3456|12|90
15670|12|13
12345|10|14
3456|12|13
i need the below data in one file
15670|12|13
and the below data in another file (9 Replies)
Discussion started by: pandeesh
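One two-pass approach, assuming the key is field 1 of the '|'-delimited records (filenames are placeholders): count each key first, then route lines whose key is unique and lines whose key repeats into separate files.

```shell
# First pass counts each key; second pass writes lines with a unique
# key to one file and lines with a repeated key to another.
awk -F'|' '
    NR == FNR { count[$1]++; next }
    count[$1] == 1 { print > "unique.txt" }
    count[$1] > 1  { print > "dupes.txt" }
' data.txt data.txt
```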
8. AIX
Hello,
I'm moving some disks from the rootvg on AIX 5.3.
# replacepv hdiskOLD hdiskNEW
I have, for example, hdisk12 and hdisk13 with the hd5 (boot) LV, and I want to move hdisk13. So first I'm excluding it from the bootlist:
# bootlist -om normal hdisk12
then
# replacepv hdisk13... (7 Replies)
Discussion started by: emoubi
9. Shell Programming and Scripting
Hi guys,
I am trying to identify the number of duplicate entries in a string entered by the user. Here is the command I use:
$ user_input="M T T"
$ echo "${user_input}" | awk '{for(i=0;i<=NF;i++) print $i }' | sort | uniq -d
The above works fine for string with multiple letters. The problem is... (2 Replies)
Discussion started by: aoussenko
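The likely culprit is the loop starting at 0: `print $0` emits the whole line alongside the individual fields, which makes a single-letter input look like a duplicate of itself. Starting at 1 fixes it:

```shell
# Break the string into one token per line and let uniq -d report the
# repeats; starting the loop at 1 (not 0) avoids also printing the
# whole line as $0, which is what breaks the single-letter case.
user_input="M T T"
echo "${user_input}" | awk '{ for (i = 1; i <= NF; i++) print $i }' | sort | uniq -d
```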
10. Shell Programming and Scripting
Mac OS 10.9
Let me preface this by saying this is not for marketing or spamming purposes.
I have a script that scans all the email messages in a directory (~/Library/Mail/Mailboxes) and outputs a single column list of email addresses. This will run multiple times a day and append the output... (3 Replies)
Discussion started by: sudo
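For an append-then-dedupe workflow like this, one sketch is to merge each new batch into the master list with `sort -u`; `-f` folds case so Foo@x.com and foo@x.com count as the same address. Filenames are illustrative:

```shell
# Merge freshly extracted addresses into the master list, keeping it
# sorted and free of duplicates (case-insensitively, via -f).
sort -u -f -o master_list.txt new_addresses.txt master_list.txt
```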
UNIQ(1) BSD General Commands Manual UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If
input_file is a single dash ('-') or absent, the standard input is read. If output_file is absent, standard output is used for output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Only output lines that are repeated in the input.
-f num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from
adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is
character one.
-u Only output lines that are not repeated in the input.
-i Case insensitive comparison of lines.
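A quick illustration of the adjacency rule and the -c and -d options described above (sample input invented here):

```shell
# uniq compares only ADJACENT lines: without sorting, the two
# non-adjacent "apple" lines are not detected as repeats at all.
printf 'apple\nbanana\napple\n' | uniq -d                 # no output
# After sorting, -c shows a count of 2 for apple and 1 for banana.
printf 'apple\nbanana\napple\n' | sort | uniq -c
```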
ENVIRONMENT
The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS
The uniq utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by Cor. 1-2002.
HISTORY
A uniq command appeared in Version 3 AT&T UNIX.
BSD                            December 17, 2009                            BSD