Full Discussion: Piping with grep
Operating Systems > Linux > Ubuntu > Piping with grep
Post 302529412 by fpmurphy, Thursday 9th of June 2011, 09:43:53 AM
Bumping up posts or double posting is not permitted in these forums.

Please read the rules, which you agreed to when you registered, if you have not already done so.

You may receive an infraction for this. If so, don't worry; just try to follow the rules more carefully. The infraction will expire in the near future.

Thank You.

The UNIX and Linux Forums.

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

piping the output of find command to grep

Hi, I did not understand why the following did not work as I expected: find . -name "pqp.txt" | grep -v "Permission" I thought I would be able to catch the paths containing my pqp.txt file without seeing the display of messages such as "find: cannot access... Permission..." (1 Reply)
Discussion started by: 435 Gavea
1 Reply
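For reference, the attempt quoted above usually fails because find writes its "Permission denied" diagnostics to stderr, not stdout, so the pipe never hands them to grep -v. A minimal sketch (the file name pqp.txt comes from the excerpt):

    # discard find's error stream entirely
    find . -name "pqp.txt" 2>/dev/null

    # or merge stderr into stdout so grep can filter the messages
    find . -name "pqp.txt" 2>&1 | grep -v "Permission"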

2. Shell Programming and Scripting

question about grep, cut, and piping

Howdy folks, I am fairly new to scripting but have lots of experience in C++, Pascal, and a few others. I am trying to complete a file search script that is sent a file name containing data to search, arranged like this: "id","name","rating" "1","bob","7" etc., and an argument to... (1 Reply)
Discussion started by: dyrt
1 Reply
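One plausible shape for the lookup described above, assuming the quoted, comma-separated layout from the excerpt (the script name, data file and search term are illustrative placeholders):

    #!/bin/sh
    # usage: search.sh datafile name
    file=$1
    name=$2
    # match the quoted name field, cut out the third (rating) column, strip the quotes
    grep "\"$name\"" "$file" | cut -d',' -f3 | tr -d '"'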

3. Shell Programming and Scripting

Grep causing long delay (batching) whilst piping

Hi all. I have a problem at work which I have managed to break down into a simple test scenario: I have written a monitoring script that outputs the status of various processes every second, but for now let's just print the date. input.sh:
while true
do
  date
  sleep 1
done
This... (9 Replies)
Discussion started by: spudtheimpaler
9 Replies
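The batching is almost certainly stdio buffering: once grep's output goes to a pipe or a file rather than a terminal, it switches to block buffering and results arrive in bursts. A sketch of the usual workarounds, assuming GNU grep and coreutils (the ":" pattern and the tee stage are illustrative):

    # ask grep to flush after every matching line
    ./input.sh | grep --line-buffered ":" | tee monitor.log

    # or force line buffering on grep's stdout with stdbuf
    ./input.sh | stdbuf -oL grep ":" | tee monitor.log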

4. UNIX for Dummies Questions & Answers

Piping GREP

Hi, I need to use a double grep so to speak. I need to grep for a particular item say BOB and then for each successful result I need to grep for another item say SMITH. I tried grep "BOB" filename | grep "SMITH" but it does not seem to work. I can achieve my desired result using an... (12 Replies)
Discussion started by: mojoman
12 Replies
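Chaining two greps does work when both strings appear on the same line, so the command quoted above is sound as far as it goes; a sketch with the names from the excerpt (filename is the poster's placeholder):

    # lines containing both BOB and SMITH, in either order
    grep "BOB" filename | grep "SMITH"

    # equivalent single-command form using extended regular expressions
    grep -E "BOB.*SMITH|SMITH.*BOB" filename

If this appears not to work, the usual culprits are case differences (add -i) or the two strings sitting on different lines.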

5. Shell Programming and Scripting

Piping STDOUT as pattern to grep or sed

$>cat file.txt 123 d3 234 abc 3 zyf 23 124 def 8 ghi kz0 ... ... I have the following output on the screen through <some command>. $> <some command> abc def ghi ... ... I have to search for each of these patterns in the file.txt and print the lines in file.txt matching the... (4 Replies)
Discussion started by: VNR
4 Replies
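A common idiom for the request above is to hand the command's output to grep as a list of fixed-string patterns instead of looping over it; a sketch, with some_command standing in for the excerpt's <some command> (bash process substitution assumed in the first form):

    # each output line becomes a pattern; print matching lines of file.txt
    grep -F -f <(some_command) file.txt

    # greps that accept '-' as a pattern file read the patterns from standard input
    some_command | grep -F -f - file.txt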

6. Ubuntu

Piping with grep

Hi everybody, I have a big file with BLAST results (if you know what that means; otherwise just look at it as a text file with a specific format). I am trying to extract some IDs from within this file, which have certain parameters. For example, some of my IDs have the term 'No hit results'... (6 Replies)
Discussion started by: frymor
6 Replies
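Without the exact BLAST layout this can only be a sketch; assuming the query ID sits either on the 'No hit results' line itself or on the line before it (blast.out is a placeholder name):

    # ID on the same line: print the first field of each 'No hit results' line
    grep "No hit results" blast.out | awk '{print $1}'

    # ID on the preceding line: keep one line of leading context
    grep -B1 "No hit results" blast.out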

7. Shell Programming and Scripting

piping from grep to awk without intermediate files

I am trying to extract the file names alone, for example "TVLI_STATS_NRT_XLSTWS03_20120215_132629.csv", from the output below, which was produced by the grep. sam:/data/log: grep "C10_Subscribe.000|subscribe|newfile|" PDEWG511_TVLI_JOB_STATS.ksh.201202* Output: ... (6 Replies)
Discussion started by: siteregsam
6 Replies
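The usual way to avoid an intermediate file here is to pipe grep straight into awk and let awk split on the '|' delimiter; a sketch, assuming the .csv name is one of the '|'-separated fields (the field test below simply looks for the .csv suffix):

    # split each matching line on '|' and print only the field ending in .csv
    grep "C10_Subscribe.000|subscribe|newfile|" PDEWG511_TVLI_JOB_STATS.ksh.201202* |
        awk -F'|' '{ for (i = 1; i <= NF; i++) if ($i ~ /\.csv$/) print $i }'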

8. UNIX for Dummies Questions & Answers

Piping grep into awk, read the next line using grep

Hi, I have a number of files containing the information below:
Fundallinfo
6.3950 14.9715 14.0482
I would like to grep for Fundallinfo and use it to read the next line. I would ideally like to read the three numbers that follow on the next line and... (2 Replies)
Discussion started by: Paul Moghadam
2 Replies
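Two common ways to pick up the line after a match, sketched with the marker from the excerpt (data.txt is a placeholder file name):

    # GNU grep: print one line of trailing context after each Fundallinfo line
    grep -A1 "Fundallinfo" data.txt

    # awk: on a match, read the next record and print its three numbers
    awk '/Fundallinfo/ { if (getline > 0) print $1, $2, $3 }' data.txt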

9. Shell Programming and Scripting

Piping through grep/awk prevents file write

So, this is weird... I'm running this command:
iotop -o -P -k -bt -d 5
I'd like to save the output relevant to rsyslogd to a file, so I do this:
iotop -o -P -k -bt -d 5 | grep rsyslogd >> /var/log/rsyslogd
Nothing is written to the file! I can write the full output to the file: ... (2 Replies)
Discussion started by: treesloth
2 Replies
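This is the same block-buffering effect as in the batching thread above: with stdout redirected to a file, grep holds its output until the buffer fills (and iotop's own buffering can add to the delay). A sketch of the usual fixes, assuming GNU grep and coreutils:

    # ask grep to flush after every matching line
    iotop -o -P -k -bt -d 5 | grep --line-buffered rsyslogd >> /var/log/rsyslogd

    # or line-buffer grep's stdout explicitly with stdbuf
    iotop -o -P -k -bt -d 5 | stdbuf -oL grep rsyslogd >> /var/log/rsyslogd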

10. OS X (Apple)

Piping to grep with pbpaste

cat file
1 aaa
2 bbb
3 ccc
4 ddd
In TextEdit, I then copy the characters “ccc” to the clipboard. The problem is that the following command gives no output:
bash-3.2$ pbpaste | grep - file
Desired output:
3 ccc
What should the syntax be for that command? I am using MacOS El... (3 Replies)
Discussion started by: palex
3 Replies
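In the quoted command, grep takes the lone '-' as the literal pattern to search for, so the clipboard text never reaches it. Two ways to use the clipboard contents as the pattern (the second relies on greps that accept '-f -' for patterns on standard input):

    # substitute the clipboard text directly into the pattern
    grep "$(pbpaste)" file

    # or feed the clipboard as a pattern list on standard input
    pbpaste | grep -f - file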