01-24-2008
Quote:
Originally Posted by
coolkid
Hey Guys
I need to eliminate duplicate IPs from a text file using bash. Any suggestions? Appreciate your help, guys.
--CoolKid
Hey,
uniq
This filter removes duplicate lines from a sorted file. It is often seen in a pipe coupled with sort.
cat list-1 list-2 list-3 | sort | uniq > final.list
# Concatenates the list files,
# sorts them,
# removes duplicate lines,
# and finally writes the result to an output file.
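A minimal sketch of the same idea applied to a file of IP addresses (the filename `ips.txt` is just an assumption). `sort -u` does the sort and the duplicate removal in one step; the awk one-liner is an alternative when you want to keep the original line order:

```shell
# Assumes the addresses live in ips.txt (hypothetical filename).

# sort -u sorts the file and drops duplicate lines in one pass:
sort -u ips.txt > unique-ips.txt

# To keep the original order instead, let awk remember each line
# it has seen and print it only the first time it appears:
awk '!seen[$0]++' ips.txt > unique-ips-ordered.txt
```

Both variants treat the whole line as the key, so they work on any text file, not just IP lists.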
Try this approach; it should give you the result you want.
Thanks
Varun.
UNIQ(1) BSD General Commands Manual UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If
input_file is a single dash ('-') or absent, the standard input is read. If output_file is absent, standard output is used for output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
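The adjacency requirement is easy to trip over; a quick illustration with made-up sample lines:

```shell
# Non-adjacent duplicates survive plain uniq:
printf 'red\nblue\nred\n' | uniq
# prints all three lines unchanged

# Sorting first makes the duplicates adjacent, so uniq can drop them:
printf 'red\nblue\nred\n' | sort | uniq
# prints: blue, red
```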
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Only output lines that are repeated in the input.
-f num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from
adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is
character one.
-u Only output lines that are not repeated in the input.
-i Case insensitive comparison of lines.
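A short illustration of the -c, -d, and -u options on already-sorted input (the sample data is invented):

```shell
# Sorted sample input: a, a, b

# -c prefixes each line with its occurrence count:
printf 'a\na\nb\n' | uniq -c
# prints "2 a" and "1 b" (counts are right-aligned with leading blanks)

# -d keeps only lines that were repeated:
printf 'a\na\nb\n' | uniq -d
# prints: a

# -u keeps only lines that were NOT repeated:
printf 'a\na\nb\n' | uniq -u
# prints: b
```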
ENVIRONMENT
The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS
The uniq utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by Cor. 1-2002.
HISTORY
A uniq command appeared in Version 3 AT&T UNIX.
BSD
July 3, 2004 BSD