LWP::RobotUA(3)         User Contributed Perl Documentation        LWP::RobotUA(3)

NAME
    LWP::RobotUA - a class for well-behaved Web robots

SYNOPSIS
    use LWP::RobotUA;

    my $ua = LWP::RobotUA->new('my-robot/0.1', 'me@foo.com');
    $ua->delay(10);  # be very nice -- max one hit every ten minutes!
    ...

    # Then just use it like a normal LWP::UserAgent:
    my $response = $ua->get('http://whatever.int/...');
    ...
DESCRIPTION
    This class implements a user agent that is suitable for robot
    applications. Robots should be nice to the servers they visit: they
    should consult the /robots.txt file to ensure that they are welcome,
    and they should not make requests too frequently. But before you
    consider writing a robot, take a look at <URL:http://www.robotstxt.org/>.

    When you use an LWP::RobotUA object as your user agent, you do not
    really have to think about these things yourself; "robots.txt" files
    are automatically consulted and obeyed, the server isn't queried too
    rapidly, and so on. Just send requests as you do when you are using a
    normal LWP::UserAgent object (using "$ua->get(...)", "$ua->head(...)",
    "$ua->request(...)", etc.), and this special agent will make sure you
    are nice.
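    A minimal sketch of what this looks like in practice, assuming a
    hypothetical target URL; a request that the server's "robots.txt"
    disallows typically comes back as an ordinary error response (for
    example 403 "Forbidden by robots.txt"), so error handling is the same
    as with a plain LWP::UserAgent:

        use strict;
        use warnings;
        use LWP::RobotUA;

        # Hypothetical robot identity and target URL -- substitute your own.
        my $ua = LWP::RobotUA->new('my-robot/0.1', 'me@foo.com');

        my $response = $ua->get('http://example.org/some/page.html');

        if ($response->is_success) {
            print $response->decoded_content;
        }
        else {
            # Disallowed or otherwise failed requests are reported through
            # the usual HTTP::Response interface.
            warn 'Request failed: ', $response->status_line, "\n";
        }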
METHODS
    The LWP::RobotUA is a sub-class of LWP::UserAgent and implements the
    same methods. In addition the following methods are provided:

    $ua = LWP::RobotUA->new( %options )
    $ua = LWP::RobotUA->new( $agent, $from )
    $ua = LWP::RobotUA->new( $agent, $from, $rules )
        The LWP::UserAgent options "agent" and "from" are mandatory. The
        options "delay", "use_sleep" and "rules" initialize attributes
        private to the RobotUA. If "rules" is not provided, a
        "WWW::RobotRules" object is instantiated to provide an internal
        database of robots.txt rules. It is also possible to just pass the
        values of "agent", "from" and optionally "rules" as plain
        positional arguments.

    $ua->delay
    $ua->delay( $minutes )
        Get/set the minimum delay between requests to the same server, in
        minutes. The default is 1 minute. Note that this number doesn't
        have to be an integer; for example, this sets the delay to 10
        seconds:

            $ua->delay(10/60);

    $ua->use_sleep
    $ua->use_sleep( $boolean )
        Get/set a value indicating whether the UA should sleep() if
        requests arrive too fast, defined as $ua->delay minutes not having
        passed since the last request to the given server. The default is
        TRUE. If this value is FALSE then an internal SERVICE_UNAVAILABLE
        response will be generated. It will have a Retry-After header that
        indicates when it is OK to send another request to this server
        (see the example following this list).

    $ua->rules
    $ua->rules( $rules )
        Set/get which WWW::RobotRules object to use.

    $ua->no_visits( $netloc )
        Returns the number of documents fetched from this server host.
        Yeah I know, this method should probably have been named
        num_visits() or something like that. :-(

    $ua->host_wait( $netloc )
        Returns the number of seconds (from now) you must wait before you
        can make a new request to this host.

    $ua->as_string
        Returns a string that describes the state of the UA. Mainly useful
        for debugging.
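    A minimal sketch of the non-sleeping mode described under use_sleep(),
    assuming a hypothetical target host; with use_sleep disabled, a request
    issued before the delay has passed comes back as an internally generated
    503 response, and host_wait() reports how long to back off:

        use strict;
        use warnings;
        use LWP::RobotUA;

        # Hypothetical robot identity and URLs -- substitute your own.
        my $ua = LWP::RobotUA->new('my-robot/0.1', 'me@foo.com');
        $ua->delay(1);        # at most one request per minute per host
        $ua->use_sleep(0);    # don't sleep(); report throttling as a response

        $ua->get('http://example.org/index.html');      # first hit to this host

        # An immediate second hit to the same host violates the delay, so we
        # expect a SERVICE_UNAVAILABLE response rather than a blocking sleep.
        my $response = $ua->get('http://example.org/other.html');

        if ($response->code == 503 && defined $response->header('Retry-After')) {
            print "Throttled; Retry-After: ", $response->header('Retry-After'), "\n";
            # host_wait() takes the server's netloc (host:port is assumed here).
            print "host_wait() says ", $ua->host_wait('example.org:80'), " seconds\n";
        }
        elsif ($response->is_success) {
            print "Fetched ", length($response->decoded_content), " bytes\n";
        }
        else {
            warn 'Request failed: ', $response->status_line, "\n";
        }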
SEE ALSO
    LWP::UserAgent, WWW::RobotRules

COPYRIGHT
    Copyright 1996-2004 Gisle Aas.

    This library is free software; you can redistribute it and/or modify
    it under the same terms as Perl itself.

perl v5.16.2                        2012-02-11                     LWP::RobotUA(3)
WWW::RobotRules(3)      User Contributed Perl Documentation     WWW::RobotRules(3)

NAME
    WWW::RobotRules - Parse robots.txt files

SYNOPSIS
    require WWW::RobotRules;
    my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

    use LWP::Simple qw(get);

    my $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    $url = "http://some.other.place/robots.txt";
    $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    # Now we are able to check if a URL is valid for those servers that
    # we have obtained and parsed "robots.txt" files for.
    if ($robotsrules->allowed($url)) {
        $c = get $url;
        ...
    }

DESCRIPTION
    This module parses a /robots.txt file as specified in "A Standard for
    Robot Exclusion", described in
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>.
    Webmasters can use the /robots.txt file to disallow conforming robots
    access to parts of their web site.

    The parsed file is kept in the WWW::RobotRules object, and this object
    provides methods to check if access to a given URL is prohibited. The
    same WWW::RobotRules object can parse multiple /robots.txt files.

    The following methods are provided:

    $rules = WWW::RobotRules->new($robot_name)
        This is the constructor for WWW::RobotRules objects. The first
        argument given to new() is the name of the robot.

    $rules->parse($robot_txt_url, $content, $fresh_until)
        The parse() method takes as arguments the URL that was used to
        retrieve the /robots.txt file, and the contents of the file.

    $rules->allowed($uri)
        Returns TRUE if this robot is allowed to retrieve this URL (see
        the sketch following the examples below).

    $rules->agent([$name])
        Get/set the agent name. NOTE: Changing the agent name will clear
        the robots.txt rules and expire times out of the cache.

ROBOTS.TXT
    The format and semantics of the "/robots.txt" file are as follows
    (this is an edited abstract of
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>):

    The file consists of one or more records separated by one or more
    blank lines. Each record contains lines of the form

        <field-name>: <value>

    The field name is case insensitive. Text after the '#' character on a
    line is ignored during parsing. This is used for comments. The
    following <field-names> can be used:

    User-Agent
        The value of this field is the name of the robot the record is
        describing access policy for. If more than one User-Agent field is
        present the record describes an identical access policy for more
        than one robot. At least one field needs to be present per record.
        If the value is '*', the record describes the default access
        policy for any robot that has not matched any of the other
        records.

    Disallow
        The value of this field specifies a partial URL that is not to be
        visited. This can be a full path, or a partial path; any URL that
        starts with this value will not be retrieved.

ROBOTS.TXT EXAMPLES
    The following example "/robots.txt" file specifies that no robots
    should visit any URL starting with "/cyberworld/map/" or "/tmp/":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space
        Disallow: /tmp/ # these will soon disappear

    This example "/robots.txt" file specifies that no robots should visit
    any URL starting with "/cyberworld/map/", except the robot called
    "cybermapper":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space

        # Cybermapper knows where to go.
        User-agent: cybermapper
        Disallow:

    This example indicates that no robots should visit this site further:

        # go away
        User-agent: *
        Disallow: /
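    A minimal sketch tying the first example above to the API, assuming the
    hypothetical host example.com and a hypothetical robot name; it feeds the
    example record to parse() and then checks a few URLs with allowed():

        use strict;
        use warnings;
        use WWW::RobotRules;

        my $rules = WWW::RobotRules->new('my-robot/1.0');   # hypothetical robot name

        # The first example record from above, as a site might serve it.
        my $robots_txt = join "\n",
            'User-agent: *',
            'Disallow: /cyberworld/map/ # This is an infinite virtual URL space',
            'Disallow: /tmp/ # these will soon disappear',
            '';

        $rules->parse('http://example.com/robots.txt', $robots_txt);

        # URLs under /cyberworld/map/ and /tmp/ are disallowed; anything else is allowed.
        for my $url (qw(http://example.com/welcome.html
                        http://example.com/cyberworld/map/area51.html
                        http://example.com/tmp/scratch.txt)) {
            printf "%-50s %s\n", $url, $rules->allowed($url) ? 'allowed' : 'disallowed';
        }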
SEE ALSO
    LWP::RobotUA, WWW::RobotRules::AnyDBM_File

libwww-perl-5.65                    2001-04-20                  WWW::RobotRules(3)