Remove all instances of duplicate records from the file

Hi experts,
I am new to scripting and have the following requirement.

File1:

A|123|NAME1
A|123|NAME2
B|123|NAME3

File2:

C|123|NAME4
C|123|NAME5
D|123|NAME6

1) Merge the two files.
2) Sort the merged file (the key fields are the first and second fields).
3) Remove every instance of the duplicate records (records that share a key) from the merged file and write all of those duplicate instances into one file.
4) The remaining records, which are unique on the key across the original source files, need to be written into another file.

Output files:

File3:
A|123|NAME1
A|123|NAME2
C|123|NAME4
C|123|NAME5

File4:

B|123|NAME3
D|123|NAME6

Please help me with a solution, as this is urgent. I appreciate your help.

Thank you
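
One possible approach, as a minimal sketch using sort and awk: the input names file1 and file2 and the intermediate file merged are assumptions based on the examples above, while file3 and file4 match the expected outputs. The merged file is read twice so awk can count each key before deciding where a record belongs.

Code:

# Merge both files and sort on the first two pipe-delimited fields.
sort -t'|' -k1,1 -k2,2 file1 file2 > merged

# Pass 1 (NR == FNR): count how often each key (field1|field2) occurs.
# Pass 2: keys seen more than once go to file3, the rest to file4.
awk -F'|' '
    NR == FNR { count[$1 FS $2]++; next }
    count[$1 FS $2] > 1 { print > "file3"; next }
    { print > "file4" }
' merged merged

Reading merged twice with the NR == FNR idiom means every instance of a duplicated key (not just the second and later ones) ends up in file3, and the records keep their sorted order in both output files.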
 
