comment/delete a particular pattern starting from second line of the matching pattern
Post 302246150 by imas on Monday 13th of October 2008 02:37:21 AM
Hi Summer_Cherry,

A million thanks for the piece of code, which puts a # at the start of the line from the second occurrence of the matched pattern onwards.

intelsol2>cat 1.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
intelsol2>cat 2.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
0789456|332211|10.20.30.40|078945633
1234567|225522|10.20.30.50|123456733
0321654|999999|10.20.30.40|032165433
0456123|777899|10.20.30.40|045612333
intelsol2>
nawk 'BEGIN{FS="|"}
{
    if (NR==FNR)
        a[$3]=0          # first file (1.txt): just note each IP (field 3)
    else
    {
        a[$3]++          # second file (2.txt): count occurrences of this IP
        if (a[$3]>=2)
            print "#"$0  # second and later occurrences are commented out
        else
            print $0     # first occurrence passes through unchanged
    }
}' 1.txt 2.txt >3.txt
intelsol2>cat 3.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
#0789456|332211|10.20.30.40|078945633
#1234567|225522|10.20.30.50|123456733
#0321654|999999|10.20.30.40|032165433
#0456123|777899|10.20.30.40|045612333
intelsol2>
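
By the way, if I read the code right, the array simply counts occurrences of field 3, so the same output can probably be produced from 2.txt alone, without the seed file. An untested one-liner sketch of the same idea:

nawk -F'|' 'a[$3]++ { print "#" $0; next } { print }' 2.txt > 3.txt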

Also, thanks a lot to the UNIX forum guys: you people rock!!!

FYI, I had been searching for this code all weekend and went through the sed manual at "http://www.grymoire.com/Unix/Sed.html", sub-topic "/1, /2, etc. Specifying which occurrence", but was unlucky with all my trial and error.
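
From what I can tell, the occurrence numbers on sed's s command count matches within a single line, not across lines, which is probably why that route went nowhere for data where each record sits on its own line. A tiny untested illustration:

echo "40 40 40" | sed 's/40/#&/2'
# prints: 40 #40 40   (only the second occurrence on the line gets the #)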

Also, Summer, could you please explain the code you have written so that I can understand it better?

Again thanks a lot and you can close this thread.

Thanks
-imas
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

modify a particular pattern starting from second line of the search pattern

Hi, I am new to this forum and I would like to get help with this issue. I have a file 1.txt as shown: apple banana orange apple grapes banana orange grapes orange .... Now I would like to search for a pattern, say apple or orange, and then put a # at the beginning of the pattern... (2 Replies)
Discussion started by: imas
2 Replies
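
A hedged, untested awk sketch for that question, using apple and orange as the example patterns from the post:

awk '/apple|orange/ { if (++seen[$0] >= 2) $0 = "#" $0 } { print }' 1.txt

Each matching line is counted separately, so the second and later repeats of the same matching line get a # in front.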

2. UNIX for Dummies Questions & Answers

modify a particular pattern starting from second line of the search pattern

Hi, I think you people did not get my question correctly, so let me explain. I have 1.txt with the following entries as shown: 0152364|134444|10.20.30.40|015236433 0233654|122555|10.20.30.50|023365433 ** ** ** In file 2.txt I have the following entries as shown: ... (1 Reply)
Discussion started by: imas
1 Replies

3. Shell Programming and Scripting

sed find matching pattern delete next line

Trying to use sed to find a matching pattern in a file and then delete only the next line .. pattern --> <ad-content> I tried this, but the results are not what I want: sed '/<ad-content>/{N;d;}' akv.xml > akv5.xml e.g. <Celebrant2First>Mickey</Celebrant2First> <ad-content> Minnie... (2 Replies)
Discussion started by: aveitas
2 Replies
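
For the record, the usual sed idiom for deleting only the line after a match, while keeping the matched line itself, is n followed by d; an untested sketch with the pattern and file names from that thread:

sed '/<ad-content>/{n;d;}' akv.xml > akv5.xml

n prints the current line and reads the next one into the pattern space, which d then throws away.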

4. UNIX for Dummies Questions & Answers

find pattern delete line with pattern and line above and line below

I have a file that will sometimes contain a pattern. The pattern is this: W/D FRM CHK 00 I want to find any lines with this pattern, delete those lines, and also delete the line above and the line below. (1 Reply)
Discussion started by: nickg
1 Replies
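
One way to do that is a two-pass awk that first marks the line numbers to drop; an untested sketch, with "file" standing in for the real file name:

awk 'NR==FNR { if (/W\/D FRM CHK 00/) { del[FNR-1]=1; del[FNR]=1; del[FNR+1]=1 }
               next }
     !(FNR in del)' file file

The first pass over the file records the matching line numbers plus their neighbours; the second pass prints every line that was not marked.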

5. Shell Programming and Scripting

find pattern, delete line with pattern and line above and line below

I have a file that will sometimes contain a pattern. The pattern is this: FRM CHK 0000 I want to find any lines with this pattern, delete those lines, and also delete the line above and the line below. (4 Replies)
Discussion started by: nickg
4 Replies

6. Shell Programming and Scripting

delete lines starting with a pattern

I have a file sample.txt containing I want to delete lines starting with 123, neglecting spaces and tabs, but not lines merely containing 123. i.e. I want the file sample.txt as help me thanxx (4 Replies)
Discussion started by: yashwantkumar
4 Replies
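
A possible one-liner for that, untested: anchor the pattern at the start of the line but allow leading blanks,

sed '/^[[:space:]]*123/d' sample.txt

which deletes only lines whose first non-blank characters are 123 and leaves lines that merely contain 123 elsewhere alone.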

7. Shell Programming and Scripting

Delete multiple lines starting with a specific pattern

Hi, I just tried some script, awk and sed for the last 2 hours and now need help. Let's say I have a huge file of 800,000 lines like this : It's a tedious job to look through it, and I'd like to remove those useless lines in it, as there are a few thousand of them : Or to be even more precise : if line1 =... (6 Replies)
Discussion started by: Zurd
6 Replies

8. Shell Programming and Scripting

Sed: printing lines AFTER pattern matching EXCLUDING the line containing the pattern

Hi, I'm using the following code to extract the lines (and redirect them to a txt file) after the pattern match. But the output includes the line with the pattern match. Which option should be used to exclude the line containing the pattern? sed -n '/Conn.*User/,$p' > consumers.txt (11 Replies)
Discussion started by: essem
11 Replies
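
A common alternative to a sed range here is a flag in awk; an untested sketch with the pattern from that post, with "file" as a placeholder for the input:

awk 'found; /Conn.*User/ { found=1 }' file > consumers.txt

The matching line itself only sets the flag, so printing starts with the line after it.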

9. UNIX for Dummies Questions & Answers

Grep -v lines starting with pattern 1 and not matching pattern 2

Hi all! Thanks for taking the time to view this! I want to grep out all lines of a file that start with pattern 1 but do not match the second pattern. Example: Drink a soda Eat a banana Eat multiple bananas Drink an apple juice Eat an apple Eat multiple apples I... (8 Replies)
Discussion started by: demmel
8 Replies
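
Reading the example as "drop lines that start with Eat and do not mention apple", where Eat and apple are only my guesses at the two patterns, an untested awk sketch would be:

awk '!(/^Eat/ && !/apple/)' file

i.e. keep everything except lines that begin with the first pattern and lack the second.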

10. UNIX for Beginners Questions & Answers

Grep file starting from pattern matching line

I have a file with a list of references towards the end and want to apply a grep for some string. text .... @unnumbered References @sp 1 @paragraphindent 0 2017. @strong{Chalenski, D.A.}; Wang, K.; Tatanova, Maria; Lopez, Jorge L.; Hatchell, P.; Dutta, P.; @strong{Small airgun... (1 Reply)
Discussion started by: kristinu
1 Replies
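
Printing from the first matching line to the end of the file is the textbook case for a sed range; an untested sketch using the marker shown in that post, with "file" as a placeholder name:

sed -n '/@unnumbered References/,$p' file

or, equivalently in awk: awk '/@unnumbered References/ { found=1 } found' file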
WWW::RobotRules(3)      User Contributed Perl Documentation      WWW::RobotRules(3)

NAME
    WWW::RobotRules - database of robots.txt-derived permissions

SYNOPSIS
     use WWW::RobotRules;
     my $rules = WWW::RobotRules->new('MOMspider/1.0');

     use LWP::Simple qw(get);

     {
       my $url = "http://some.place/robots.txt";
       my $robots_txt = get $url;
       $rules->parse($url, $robots_txt) if defined $robots_txt;
     }

     {
       my $url = "http://some.other.place/robots.txt";
       my $robots_txt = get $url;
       $rules->parse($url, $robots_txt) if defined $robots_txt;
     }

     # Now we can check if a URL is valid for those servers
     # whose "robots.txt" files we've gotten and parsed:
     if($rules->allowed($url)) {
         $c = get $url;
         ...
     }

DESCRIPTION
    This module parses /robots.txt files as specified in "A Standard for
    Robot Exclusion", at <http://www.robotstxt.org/wc/norobots.html>

    Webmasters can use the /robots.txt file to forbid conforming robots from
    accessing parts of their web site.

    The parsed files are kept in a WWW::RobotRules object, and this object
    provides methods to check if access to a given URL is prohibited. The
    same WWW::RobotRules object can be used for one or more parsed
    /robots.txt files on any number of hosts.

    The following methods are provided:

    $rules = WWW::RobotRules->new($robot_name)
        This is the constructor for WWW::RobotRules objects. The first
        argument given to new() is the name of the robot.

    $rules->parse($robot_txt_url, $content, $fresh_until)
        The parse() method takes as arguments the URL that was used to
        retrieve the /robots.txt file, and the contents of the file.

    $rules->allowed($uri)
        Returns TRUE if this robot is allowed to retrieve this URL.

    $rules->agent([$name])
        Get/set the agent name. NOTE: Changing the agent name will clear the
        robots.txt rules and expire times out of the cache.

ROBOTS.TXT
    The format and semantics of the "/robots.txt" file are as follows (this
    is an edited abstract of <http://www.robotstxt.org/wc/norobots.html>):

    The file consists of one or more records separated by one or more blank
    lines. Each record contains lines of the form

      <field-name>: <value>

    The field name is case insensitive. Text after the '#' character on a
    line is ignored during parsing. This is used for comments. The following
    <field-names> can be used:

    User-Agent
        The value of this field is the name of the robot the record is
        describing access policy for. If more than one User-Agent field is
        present the record describes an identical access policy for more
        than one robot. At least one field needs to be present per record.
        If the value is '*', the record describes the default access policy
        for any robot that has not matched any of the other records.

        The User-Agent fields must occur before the Disallow fields. If a
        record contains a User-Agent field after a Disallow field, that
        constitutes a malformed record. This parser will assume that a blank
        line should have been placed before that User-Agent field, and will
        break the record into two. All the fields before the User-Agent
        field will constitute a record, and the User-Agent field will be the
        first field in a new record.

    Disallow
        The value of this field specifies a partial URL that is not to be
        visited. This can be a full path, or a partial path; any URL that
        starts with this value will not be retrieved

    Unrecognized records are ignored.

ROBOTS.TXT EXAMPLES
    The following example "/robots.txt" file specifies that no robots should
    visit any URL starting with "/cyberworld/map/" or "/tmp/":

      User-agent: *
      Disallow: /cyberworld/map/ # This is an infinite virtual URL space
      Disallow: /tmp/ # these will soon disappear

    This example "/robots.txt" file specifies that no robots should visit
    any URL starting with "/cyberworld/map/", except the robot called
    "cybermapper":

      User-agent: *
      Disallow: /cyberworld/map/ # This is an infinite virtual URL space

      # Cybermapper knows where to go.
      User-agent: cybermapper
      Disallow:

    This example indicates that no robots should visit this site further:

      # go away
      User-agent: *
      Disallow: /

    This is an example of a malformed robots.txt file.

      # robots.txt for ancientcastle.example.com
      # I've locked myself away.
      User-agent: *
      Disallow: /
      # The castle is your home now, so you can go anywhere you like.
      User-agent: Belle
      Disallow: /west-wing/ # except the west wing!
      # It's good to be the Prince...
      User-agent: Beast
      Disallow:

    This file is missing the required blank lines between records. However,
    the intention is clear.

SEE ALSO
    LWP::RobotUA, WWW::RobotRules::AnyDBM_File

perl v5.12.1                         2009-10-03                  WWW::RobotRules(3)