05-21-2009
awk script to remove duplicate rows in line
I have a long file where each line contains more than one NS, WWW, and MX record, like the example below.
I need only the first NS record, the first WWW record, and the first MX record from each line.
The records are separated by ';'. I am trying to do this in awk but am not getting to a solution.
NS,ns2.fastpark.net,216.8.177.29,windsor,on,can,-83.017,42.3;NS,ns1.fastpark.net,206.130.11.197,toronto,on,can,-79.443,43.751;WWW,www.0--00--0.com,216.8.179.24,windsor,on,can,-83.017,42.3
WWW,www.0--5.com,63.251.171.80,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,63.251.171.81,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,66.150.161.136,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,66.150.161.140,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,66.150.161.141,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,69.25.27.170,vancouver,wa,usa,-122.728,45.681;WWW,www.0--5.com,69.25.27.173,vancouver,wa,usa,-122.728,45.681;MX,m1.dnsix.com,63.251.171.169,vancouver,wa,usa,-122.728,45.681
I need output like this:
NS,ns2.fastpark.net,216.8.177.29,windsor,on,can,-83.017,42.3;WWW,www.0--00--0.com,216.8.179.24,windsor,on,can,-83.017,42.3
WWW,www.0--5.com,63.251.171.80,vancouver,wa,usa,-122.728,45.681;MX,m1.dnsix.com,63.251.171.169,vancouver,wa,usa,-122.728,45.681
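One way to sketch this in awk (the input filename `file` is an assumption): split each line on `;`, track which record types have already been printed on that line, and keep only the first record of each type.

```shell
# Keep only the first NS, first WWW, and first MX record on each line.
# Records are separated by ';'; the record type is the text before the first comma.
awk -F';' '{
    split("", seen)                 # reset the per-line "type already seen" table
    out = ""
    for (i = 1; i <= NF; i++) {
        split($i, f, ",")           # f[1] is the record type: NS, WWW, or MX
        if (!seen[f[1]]++)
            out = (out == "" ? $i : out ";" $i)
    }
    print out
}' file
```

`split("", seen)` is a portable way to clear an array between lines, so each line's duplicates are counted independently.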
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi,
I have a scenario here where I have created a flat file with the below-mentioned information. The file, as you can see, is displayed in three columns
1st column is FileNameString
2nd column is Report_Name (this has spaces)
3rd column is Flag
Result file needed is, removal of duplicate... (1 Reply)
Discussion started by: Student37
2. UNIX for Dummies Questions & Answers
Hi,
I am processing a file and would like to delete duplicate records as indicated by one of its column. e.g.
COL1 COL2 COL3
A 1234 1234
B 3k32 2322
C Xk32 TTT
A NEW XX22
B 3k32 ... (7 Replies)
Discussion started by: risk_sly
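The classic awk idiom for this kind of column-keyed de-duplication (assuming the key is the first column and fields are whitespace-separated; the filename is an assumption) keeps only the first line seen for each key:

```shell
# Print only the first line seen for each distinct value of column 1.
awk '!seen[$1]++' data.txt
```

`seen[$1]++` evaluates to 0 (false) the first time a key appears, so that line prints; every later occurrence of the same key is suppressed.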
3. Shell Programming and Scripting
Hi,
I have a log file of size 48 MB.
For such a large log file, I want to get the messages in a particular format which includes only the unique error and exception messages.
The following things are to be done:
1) To remove all the date and time from the log file
2) To remove all the... (1 Reply)
Discussion started by: Pank10
4. Ubuntu
Hi everybody,
I have a text file with a lot of duplicate rows like this:
165.179.568.197
154.893.836.174
242.473.396.153
165.179.568.197
165.179.568.197
165.179.568.197
154.893.836.174
How can I delete the repeated rows?
Thanks
Saeideh (2 Replies)
Discussion started by: sashtari
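For whole-line duplicates like these, a quick sketch (the filename `ips.txt` is an assumption) — the awk form preserves the original line order, while `sort -u` sorts the output:

```shell
# Order-preserving: print each distinct line only the first time it appears.
awk '!seen[$0]++' ips.txt

# Alternatively, if output order does not matter:
sort -u ips.txt
```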
5. Shell Programming and Scripting
I have some data that looks like,
1 3300665.mol 3300665 5177008 102.093
2 3300665.mol 3300665 5177008 102.093
3 3294015.mol 3294015 5131552 102.114
4 3294015.mol 3294015 5131552 102.114
5 3293734.mol 3293734 5129625 104.152
6 3293734.mol ... (13 Replies)
Discussion started by: LMHmedchem
6. UNIX for Dummies Questions & Answers
Hello, I'm trying to delete duplicates when there are more than 10 duplicates, based on the value of the first column.
e.g.
a 1
a 2
a 3
b 1
c 1
gives
b 1
c 1
but requires 11 duplicates before it deletes.
Thanks for the help
Video tutorial on how to use code tags in The UNIX... (11 Replies)
Discussion started by: informaticist
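A count-based cutoff like this needs two passes: first count each column-1 key, then print only lines whose key stayed at or under the threshold. A sketch, with the threshold of 10 taken from the post and the filename assumed:

```shell
# First pass (NR==FNR): count occurrences of each column-1 key.
# Second pass: print only lines whose key occurred 10 times or fewer.
awk 'NR == FNR { count[$1]++; next } count[$1] <= 10' data.txt data.txt
```

The file is named twice on purpose; `NR == FNR` is true only while awk is reading the first copy.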
7. Shell Programming and Scripting
Dear members, I need to filter a file based on the 8th column (that is an id); the other columns do not matter. I want just one id (one line for each id), removing the duplicate lines based on this id (8th column), and it does not matter which duplicate is removed.
example of my file... (3 Replies)
Discussion started by: clarissab
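Assuming whitespace-separated fields (the filename is an assumption), the first-seen awk idiom applied to field 8 does exactly this, keeping whichever line appears first for each id:

```shell
# Keep one line per distinct value of the 8th column (the first one seen).
awk '!seen[$8]++' file.txt
```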
8. Shell Programming and Scripting
I want to duplicate each row in my file
Egfile.txt
Name State Age
Jack NJ 34
John MA 23
Jessica FL 45
I want the code to produce this output
Name State Age
Jack NJ 34
Jack NJ 34
John MA 23
John MA 23
Jessica FL 45
Jessica FL 45 (6 Replies)
Discussion started by: sidnow
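A short sketch matching the requested output (assuming the first line is a header that should stay single, and using the filename from the post):

```shell
# Print the header once, then every data row twice.
awk 'NR == 1 { print; next } { print; print }' Egfile.txt
```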
9. UNIX for Dummies Questions & Answers
Hi all,
I've got a file that has 12 fields. I've merged 2 files and there will be some duplicates in the following:
FILE:
1. ABC, 12345, TEST1, BILLING, GV, 20/10/2012, C, 8, 100, AA, TT, 100
2. ABC, 12345, TEST1, BILLING, GV, 20/10/2012, C, 8, 100, AA, TT, (EMPTY)
3. CDC, 54321, TEST3,... (4 Replies)
Discussion started by: tugar
10. Shell Programming and Scripting
I am creating a CGI in bash/HTML.
My script looks like:
echo "<table>"
for fn in /var/www/cgi-bin/LPAR_MAP/*;
do
echo "<td>"
echo "<PRE>"
awk -F',|;' -v test="$test" '
NR==1 {
split(FILENAME ,a,"");
}
$0 ~ test {
if(!header++){
... (12 Replies)
Discussion started by: Tim2424