11-13-2009
Remove duplicates from end of file
i/p
----
A
B
C
A
C
o/p
---
B
A
C
From the input file, it should remove the duplicates so that only the last occurrence of each line is kept, without otherwise changing the order.
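A minimal sketch, assuming tac (GNU coreutils) is available: reverse the file, keep only the first occurrence of each line, then reverse back, so the last occurrence of every line survives in its original relative order (input.txt is a placeholder name):

    tac input.txt | awk '!seen[$0]++' | tac

awk's seen array counts how many times each whole line has appeared; !seen[$0]++ is true only on the first sighting, so the reversed stream is deduplicated toward its front, i.e. toward the end of the original file. With the sample input A B C A C this prints B A C.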
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
How can I remove duplicate lines from a file? For example:
sample123456Sample
testing123456testing
XXXXX131323XXXXX
YYYYY423432YYYYY
fsdfdsf123456gsdfdsd
All the duplicates in columns 6-12 must be deleted. I want to keep the first row; if the same value appears in the given range, I want to... (1 Reply)
Discussion started by: gopikgunda
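The requirement is truncated, but assuming "column 6-12" means character positions 6 through 12 and the first matching row should be kept, a sketch (sample.txt is a placeholder):

    awk '!seen[substr($0, 6, 7)]++' sample.txt

substr($0, 6, 7) extracts the seven characters at positions 6-12 as the dedup key.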
2. Shell Programming and Scripting
Hi, I have a file in the following format:
name-a
age -12
address-123
age-12
phone-22222
============
name-ab
age -11
address-123
age-11
phone-222223
=============
name-abc
age -12
address-1234
age-12
phone-2222223
============= (2 Replies)
Discussion started by: nipun_garg
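The question itself is cut off, but assuming the goal is to drop repeated key lines (such as the second age line) within each =====-delimited record, a sketch that resets its state at every separator and compares keys with whitespace stripped, so that "age -12" and "age-12" count as duplicates (data.txt is a placeholder):

    awk '/^=+$/ { split("", seen); print; next }    # separator: reset per-record state
         { key = $0; gsub(/[[:space:]]/, "", key) } # whitespace-insensitive key
         !seen[key]++' data.txt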
3. Shell Programming and Scripting
Hi,
I am writing a shell script that needs to remove duplicate lines within a file by category.
example:
section a
a
c
b
a
section b
a
b
a
c
I need to remove the duplicates within each category without removing the duplicates across the 2 different sections (one of the a's in section... (1 Reply)
Discussion started by: RichElks
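A sketch that keeps a separate seen set per section, assuming each section header starts with the word "section" (categories.txt is a placeholder):

    awk '/^section / { split("", seen); print; next } !seen[$0]++' categories.txt

split("", seen) empties the array at every header, so duplicates are only suppressed within a section, never across sections.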
4. Shell Programming and Scripting
Hi,
I need to remove duplicates from a file. The file will be like this
0003 10101 20100120 abcdefghi
0003 10101 20100121 abcdefghi
0003 10101 20100122 abcdefghi
0003 10102 20100120 abcdefghi
0003 10103 20100120 abcdefghi
0003 10103 20100121 abcdefghi
Here if the first column and... (6 Replies)
Discussion started by: gpaulose
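The condition is truncated, but assuming rows are duplicates when the first two columns match and the first such row should be kept, a sketch (input.dat is a placeholder):

    awk '!seen[$1, $2]++' input.dat

The comma in the subscript joins the two fields with SUBSEP, so each (column 1, column 2) pair is counted once.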
5. Shell Programming and Scripting
Hi,
I am unable to find the duplicates in a file based on the 1st, 2nd, 4th, and 5th columns, and to remove those duplicates in the same file.
Source filename: Filename.csv
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"... (2 Replies)
Discussion started by: onesuri
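A minimal sketch, assuming the first occurrence of each (1st, 2nd, 4th, 5th) column combination should be kept and that every field is quoted consistently, so the quotes can simply stay part of the key (deduped.csv is a placeholder):

    awk -F, '!seen[$1, $2, $4, $5]++' Filename.csv > deduped.csv && mv deduped.csv Filename.csv

The mv step replaces the source file afterward, since awk cannot safely overwrite the file it is still reading.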
6. Shell Programming and Scripting
All,
I have a file 1181CUSTOMER-L061411_003500.dat.Z having duplicate records in it.
bash-2.05$ zcat 1181CUSTOMER-L061411_003500.dat.Z|grep "90876251S"
90876251S|ABG, AN ADAYANA COMPANY|3550 DEPAUW BLVD|||US|IN|INDIANAPOLIS||DAL|46268||||||GEN|||||||USD|||ABG, AN ADAYANA... (3 Replies)
Discussion started by: Oracle_User
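A minimal sketch, assuming the first pipe-delimited field (e.g. 90876251S) identifies a record and the first occurrence should win; the deduplicated output is written uncompressed here and can be recompressed with compress if needed (deduped.dat is a placeholder):

    zcat 1181CUSTOMER-L061411_003500.dat.Z | awk -F'|' '!seen[$1]++' > deduped.dat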
7. UNIX for Dummies Questions & Answers
Can you tell me how to remove duplicate records from a file? (11 Replies)
Discussion started by: saga20
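Assuming exact duplicate lines: sort -u file works when the output order does not matter; to keep the first occurrence of each line in its original position instead (file and deduped are placeholder names):

    awk '!seen[$0]++' file > deduped && mv deduped file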
8. UNIX for Dummies Questions & Answers
Hi,
I have a tab-separated file and I want to remove all the rows that have duplicates. The duplicates I need to check are in column 13.
I have tried to use awk but I have no idea how to keep the duplicate file.
awk 'FNR==NR{a[$13]++;next}(a[$13] > 1)' tomodify.txt tomodify.txt > new.txt
... (4 Replies)
Discussion started by: flacchy
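A sketch of the usual two-pass fix, assuming "remove all the rows that have duplicates" means keeping only rows whose column-13 value occurs exactly once; the file is named twice so awk reads it once to count keys and a second time to filter:

    awk -F'\t' 'FNR==NR { count[$13]++; next } count[$13] == 1' tomodify.txt tomodify.txt > new.txt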
9. Shell Programming and Scripting
Hi, can someone please help me remove duplicates from a pipe-delimited file based on the first two columns?
123|asdf|sfsd|qwrer
431|yui|qwer|opws
123|asdf|pol|njio
Here my first record and last record are duplicates. As per my requirement, I want all the latest records in one file.
I want the... (12 Replies)
Discussion started by: ginrkf
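A minimal sketch, assuming "latest" means the last occurrence of each (column 1, column 2) pair wins and tac is available: reverse, keep first occurrences, reverse back (input.txt and latest.txt are placeholders):

    tac input.txt | awk -F'|' '!seen[$1, $2]++' | tac > latest.txt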
10. UNIX for Advanced & Expert Users
Hi all,
I have an issue while loading a flat file into the DB. It is taking a lot of time.
When I analyzed it, I found that there are duplicate entries in the flat file.
There are 2 types of duplicate entries:
1) The entire row is a duplicate (I can use sort | uniq to remove these).
2) the... (4 Replies)
Discussion started by: samjoshuab
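For the first type (the second is cut off in the quote), a whole-row dedup that, unlike sort | uniq, preserves the original load order; flatfile.dat is a placeholder:

    awk '!seen[$0]++' flatfile.dat > flatfile.deduped.dat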
LEARN ABOUT NETBSD
msguniq
MSGUNIQ(1) GNU MSGUNIQ(1)
NAME
msguniq - unify duplicate translations in message catalog
SYNOPSIS
msguniq [OPTION] [INPUTFILE]
DESCRIPTION
Unifies duplicate translations in a translation catalog. Finds duplicate translations of the same message ID. Such duplicates are invalid
input for other programs like msgfmt, msgmerge or msgcat. By default, duplicates are merged together. When using the --repeated option,
only duplicates are output, and all other messages are discarded. Comments and extracted comments will be cumulated, except that if
--use-first is specified, they will be taken from the first translation. File positions will be cumulated. When using the --unique
option, duplicates are discarded.
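For example, two minimal invocations using only options documented below (messages.po is a placeholder catalog name): msguniq -o merged.po messages.po merges duplicate translations into merged.po, while msguniq --repeated messages.po prints only the duplicated entries.

    msguniq -o merged.po messages.po
    msguniq --repeated messages.po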
Mandatory arguments to long options are mandatory for short options too.
Input file location:
INPUTFILE
input PO file
-D, --directory=DIRECTORY
add DIRECTORY to list for input files search
If no input file is given or if it is -, standard input is read.
Output file location:
-o, --output-file=FILE
write output to specified file
The results are written to standard output if no output file is specified or if it is -.
Message selection:
-d, --repeated
print only duplicates
-u, --unique
print only unique messages, discard duplicates
Input file syntax:
-P, --properties-input
input file is in Java .properties syntax
--stringtable-input
input file is in NeXTstep/GNUstep .strings syntax
Output details:
-t, --to-code=NAME
encoding for output
--use-first
use first available translation for each message, don't merge several translations
-e, --no-escape
do not use C escapes in output (default)
-E, --escape
use C escapes in output, no extended chars
--force-po
write PO file even if empty
-i, --indent
write the .po file using indented style
--no-location
do not write '#: filename:line' lines
-n, --add-location
generate '#: filename:line' lines (default)
--strict
write out strict Uniforum conforming .po file
-p, --properties-output
write out a Java .properties file
--stringtable-output
write out a NeXTstep/GNUstep .strings file
-w, --width=NUMBER
set output page width
--no-wrap
do not break long message lines, longer than the output page width, into several lines
-s, --sort-output
generate sorted output
-F, --sort-by-file
sort output by file location
Informative output:
-h, --help
display this help and exit
-V, --version
output version information and exit
AUTHOR
Written by Bruno Haible.
REPORTING BUGS
Report bugs to <bug-gnu-gettext@gnu.org>.
COPYRIGHT
Copyright (C) 2001-2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
SEE ALSO
The full documentation for msguniq is maintained as a Texinfo manual. If the info and msguniq programs are properly installed at your
site, the command
info msguniq
should give you access to the complete manual.
GNU gettext-tools 0.14.4 April 2005 MSGUNIQ(1)