I need to remove duplicates from a file. The file will be like this:
If the first column and the second column repeat, I need to pick only the first record; if they do not repeat, I need to keep the record as it is.
The output should be:
Thanks in advance for the help. The script could be in Perl or Unix shell.
Hello,
I found this code on a web page, where it is said to be valid only for GNU/Linux; it deletes all lines except duplicate ones. I hope it works (sorry, I'm using Solaris 10 and couldn't try it):
# delete all lines except duplicate lines (emulates "uniq -d").
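The one-liner from the well-known "Useful sed one-liners" collection that carries exactly this comment is presumably what was posted (a guess, since the code itself did not survive; note it only reports consecutive duplicate lines):
sed '$!N; s/^\(.*\)\n\1$/\1/; t; D'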
Yes,
first of all, the code is wrong.
It should be:
... not:
So the script becomes:
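Reconstructing from the switch-by-switch walkthrough that follows, the one-liner being described should be this (file stands in for the actual input name):
perl -ane 'print unless $_{$F[0],$F[1]}++' file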
First, the command-line switches:
Quote:
-a turns on autosplit mode when used with a -n or -p. An implicit
split command to the @F array is done as the first thing inside
the implicit while loop produced by the -n or -p.
Quote:
-e commandline
may be used to enter one line of program. If -e is given, Perl
will not look for a filename in the argument list. Multiple -e
commands may be given to build up a multi-line script. Make sure
to use semicolons where you would in a normal program.
Quote:
-n causes Perl to assume the following loop around your program,
which makes it iterate over filename arguments somewhat like sed
-n or awk:
LINE:
  while (<>) {
      ...             # your program goes here
  }
Note that the lines are not printed by default. See -p to have
lines printed. If a file named by an argument cannot be opened
for some reason, Perl warns you about it and moves on to the next
file.
So we have the input file read line by line and the @F array automatically populated.
Print the current record unless the expression $_{$F[0],$F[1]}++ returns true in boolean context. We build the hash %_ whose keys (the first two fields joined with the subscript separator $;) are associated with auto-incremented integers. When we see a given key ($F[0] $; $F[1], i.e. the first and the second fields) for the first time, its value is 0 because of the post-increment (k++, not ++k), which is false in boolean context, so the record is printed.
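Spelled out, the one-liner is roughly equivalent to this explicit script (a sketch of what -a and -n expand to):
# perl -ane 'print unless $_{$F[0],$F[1]}++' file, written out longhand
while (<>) {                           # -n: implicit read loop over the input
    @F = split ' ', $_;                # -a: autosplit the line into @F
    print $_ unless $_{$F[0], $F[1]}++;   # unseen key => value 0 => print
}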
Hi all,
I have an issue while loading a flat file to the DB: it is taking too much time.
When I analyzed it, I found that there are duplicate entries in the flat file.
There are 2 types of duplicate entry:
1) the entire row is duplicated (I can use sort | uniq to remove those entries);
2) the...
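For the first case, an alternative that removes exact duplicate rows while also preserving the original line order, unlike sort | uniq (a minimal sketch; flatfile.dat is a placeholder name):
perl -ne 'print unless $seen{$_}++' flatfile.dat > flatfile.dedup.dat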
Hi, can someone please help me remove duplicates from a pipe-delimited file based on the first two columns?
123|asdf|sfsd|qwrer
431|yui|qwer|opws
123|asdf|pol|njio
Here my first record and last record are duplicates. As per my requirement, I want all the latest records in one file.
I want the...
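A sketch that keeps the latest occurrence of each (column 1, column 2) pair, printing the survivors in the order the keys first appeared (assuming that is the intent; input.txt is a placeholder):
perl -F'\|' -ane '$k = join $;, @F[0,1]; push @order, $k unless exists $last{$k}; $last{$k} = $_; END { print $last{$_} for @order }' input.txt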
Hi,
I have a tab-separated file and I want to remove all the rows that have duplicates. The duplicates I need to check are in column 13.
I have tried to use awk, but I have no idea how to keep only the non-duplicated rows.
awk 'FNR==NR{a[$13]++;next} a[$13]>1' tomodify.txt tomodify.txt > new.txt
...
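A two-pass sketch in the same spirit, assuming the goal is to keep only the rows whose column-13 value occurs exactly once (column 13 taken as tab-separated, 1-based):
# pass 1 (while @ARGV still holds the second filename) only counts column 13;
# pass 2 increments again, so a value unique in the file reaches exactly 2
perl -F'\t' -ane '$c{$F[12]}++; next if @ARGV; print if $c{$F[12]} == 2' tomodify.txt tomodify.txt > new.txt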
All,
I have a file, 1181CUSTOMER-L061411_003500.dat.Z, that has duplicate records in it.
bash-2.05$ zcat 1181CUSTOMER-L061411_003500.dat.Z|grep "90876251S"
90876251S|ABG, AN ADAYANA COMPANY|3550 DEPAUW BLVD|||US|IN|INDIANAPOLIS||DAL|46268||||||GEN|||||||USD|||ABG, AN ADAYANA...
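Assuming the duplicates are exact whole-line repeats, a streaming sketch that de-duplicates without uncompressing to disk first (the output name is just an example):
zcat 1181CUSTOMER-L061411_003500.dat.Z | perl -ne 'print unless $seen{$_}++' > 1181CUSTOMER-L061411_003500.dedup.dat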
Hi,
I am unable to find the duplicates in a file based on the 1st, 2nd, 4th and 5th columns, and to remove those duplicates from the same file.
Source filename: Filename.csv
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"... (2 Replies)
Hi,
I am writing a shell script that needs to remove duplicate lines within a file by category.
Example:
section a
a
c
b
a
section b
a
b
a
c
I need to remove the duplicates within the category without removing the duplicates from the 2 different sections (one of the a's in section...
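A sketch that resets the seen-set whenever a new section header appears, so duplicates are only removed within a section (assuming headers start with the word "section"; sections.txt is a placeholder):
perl -ne 'if (/^section/) { %seen = (); print; next } print unless $seen{$_}++' sections.txt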
Hi, I have a file in the following format:
name-a
age -12
address-123
age-12
phone-22222
============
name-ab
age -11
address-123
age-11
phone-222223
=============
name-abc
age -12
address-1234
age-12
phone-2222223
=============
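Assuming the goal is to drop the repeated age line inside each "="-separated record, a sketch that resets the seen-set at every separator and compares lines with whitespace stripped (so "age -12" and "age-12" count as the same line; records.txt is a placeholder):
perl -ne '%seen = () if /^=+/; ($k = $_) =~ s/\s+//g; print unless $seen{$k}++' records.txt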
How can I remove the duplicate lines from a file? For example:
sample123456Sample
testing123456testing
XXXXX131323XXXXX
YYYYY423432YYYYY
fsdfdsf123456gsdfdsd
All the duplicates from columns 6-12 must be deleted. I want to keep the first row; if the same value comes in the given range, I want to...
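Taking "columns 6-12" to mean character positions 6 through 12 (1-based), a sketch that keeps the first line for each value seen in that range (dups.txt is a placeholder):
perl -ne 'print unless $seen{substr($_, 5, 7)}++' dups.txt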