...
If the original input is "I love yahoo"
I need to remove duplicate words
so it will become
I love yah
You mention "duplicate words", but you have removed duplicate characters in your expected output.
So I shall assume you want to remove duplicate characters, and not words.
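A minimal awk sketch, assuming you want to keep the first occurrence of each character and leave spaces alone (otherwise "love yah" would collapse into "loveyah"):

  echo "I love yahoo" | awk '{
      out = ""
      for (i = 1; i <= length($0); i++) {
          c = substr($0, i, 1)
          # keep spaces as-is; keep any other character only the first time it appears
          if (c == " " || !seen[c]++)
              out = out c
      }
      print out
  }'

This prints: I love yah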
I have a text file that contains many records, but they are all written on one line.
I want to remove the duplicate records from that line.
Note: the records have a fixed width, e.g. width = 4.
input file test.txt = abc cdf abc abc cdf fgh fgh abc abc
I want the output file = abc cdf fgh
only those records.
Can anyone help... (4 Replies)
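One possible approach, assuming the records are space-separated and you want to keep the first occurrence of each, in order:

  awk '{
      sep = ""
      for (i = 1; i <= NF; i++)
          if (!seen[$i]++) {      # first time this record appears
              printf "%s%s", sep, $i
              sep = " "
          }
      print ""
  }' test.txt

Output: abc cdf fgh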
Hi all,
I have an out.log file
CARR|02/26/2006 10:58:30.107|CDxAcct=1405157051
CARR|02/26/2006 11:11:30.107|CDxAcct=1405157051
CARR|02/26/2006 11:18:30.107|CDxAcct=7659579782
CARR|02/26/2006 11:28:30.107|CDxAcct=9534922327
CARR|02/26/2006 11:38:30.107|CDxAcct=9534922327
CARR|02/26/2006... (3 Replies)
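Assuming you want one line per CDxAcct value (keeping the first occurrence), awk can key on the third pipe-delimited field:

  awk -F'|' '!seen[$3]++' out.log

The pattern !seen[$3]++ is true only the first time a given account value appears, so only that line is printed.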
Hi all,
I have a text file fileA.txt
DXRV|02/28/2006 11:36:49.049|SAC||||CDxAcct=2420991350
DXRV|02/28/2006 11:37:06.404|SAC||||CDxAcct=6070970034
DXRV|02/28/2006 11:37:25.740|SAC||||CDxAcct=2420991350
DXRV|02/28/2006 11:38:32.633|SAC||||CDxAcct=6070970034
DXRV|02/28/2006... (2 Replies)
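Same idea here, assuming duplicates are defined by the CDxAcct value in the last pipe-delimited field:

  awk -F'|' '!seen[$NF]++' fileA.txt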
Hi,
I have a list of numbers stored in an array as below.
5 7 10 30 30 40 50
Please advise how I could remove the duplicate values from the array.
Thanks in advance. (5 Replies)
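Assuming bash, one way is to print the elements one per line and deduplicate with sort (the result comes back sorted, which matches your data):

  arr=(5 7 10 30 30 40 50)
  uniq_arr=($(printf '%s\n' "${arr[@]}" | sort -nu))   # -n numeric, -u unique
  echo "${uniq_arr[@]}"    # prints: 5 7 10 30 40 50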
Input file
data_1 10 US
data_1 2 US
data_1 5 UK
data_2 20 ENGLAND
data_2 12 KOREA
data_3 4 CHINA
.
.
data_60 123 US
data_60 23 UK
data_60 45 US
Desired output file
data_1 10 US
data_1 5 UK
data_2 20 ENGLAND
data_2 12 KOREA (2 Replies)
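Judging by the desired output, a duplicate here seems to be any repeated combination of the first and third columns. If so, this one-liner keeps the first occurrence of each pair (inputfile is a placeholder name):

  awk '!seen[$1, $3]++' inputfile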
Hi
I have a file containing thousands of duplicate IDs, with both upper- and lowercase first characters, as below
i/p:
a411532A411532a508661A508661c411532C411532
Requirement: I need to ignore the lowercase IDs and keep only the IDs below
o/p:
A411532
A508661
C411532 (9 Replies)
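Assuming each ID is one letter followed by six digits, grep -o can extract just the uppercase-first IDs onto separate lines, and sort -u drops any repeats (inputfile is a placeholder name):

  grep -oE '[A-Z][0-9]{6}' inputfile | sort -u

For the sample above this prints A411532, A508661 and C411532, one per line.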
Hi ,
I have a pipe-separated file repo.psv where I need to remove duplicates based on the first column only. Can anyone help with a Unix script?
Input:
15277105||Common Stick|ESHR||Common Stock|CYRO AB
15277105||Common Stick|ESHR||Common Stock|CYRO AB
16111278||Common Stick|ESHR||Common... (12 Replies)
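Keying on the first pipe-delimited field and keeping the first line seen for each key:

  awk -F'|' '!seen[$1]++' repo.psv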
Hi All,
I am storing the result in the variable result_text using the code below.
result_text=$(printf "$result_text\t\n$name")
result_text now contains the text below, which has duplicate lines.
file and time for the interval 03:30 - 03:45
file and time for the interval 03:30 - 03:45 ... (4 Replies)
Discussion started by: nalu (4 Replies)
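One way to collapse the duplicate lines while preserving their order, assuming result_text holds newline-separated text:

  result_text=$(printf '%s\n' "$result_text" | awk '!seen[$0]++')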
uniq
UNIQ(1) FSF UNIQ(1)
NAME
uniq - remove duplicate lines from a sorted file
SYNOPSIS
uniq [OPTION]... [INPUT [OUTPUT]]
DESCRIPTION
Discard all but one of successive identical lines from INPUT (or standard input), writing to OUTPUT (or standard output).
Mandatory arguments to long options are mandatory for short options too.
-c, --count
prefix lines by the number of occurrences
-d, --repeated
only print duplicate lines
-D, --all-repeated[=delimit-method]
print all duplicate lines
delimit-method={none(default),prepend,separate}
Delimiting is done with blank lines.
-f, --skip-fields=N
avoid comparing the first N fields
-i, --ignore-case
ignore differences in case when comparing
-s, --skip-chars=N
avoid comparing the first N characters
-u, --unique
only print unique lines
-w, --check-chars=N
compare no more than N characters in lines
--help display this help and exit
--version
output version information and exit
A field is a run of whitespace, then non-whitespace characters. Fields are skipped before chars.
AUTHOR
Written by Richard Stallman and David MacKenzie.
REPORTING BUGS
Report bugs to <bug-coreutils@gnu.org>.
COPYRIGHT
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
SEE ALSO
The full documentation for uniq is maintained as a Texinfo manual. If the info and uniq programs are properly installed at your site, the
command
info uniq
should give you access to the complete manual.
uniq (coreutils) 4.5.3 February 2003 UNIQ(1)
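Because uniq only collapses adjacent identical lines, unsorted input is normally piped through sort first. A few typical invocations (file.txt is just a placeholder name):

  sort file.txt | uniq                 # one copy of every line
  sort file.txt | uniq -d              # only the lines that occur more than once
  sort file.txt | uniq -c | sort -rn   # count occurrences, most frequent first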