Hi, I'm a Linux guy and have used NetBSD, OpenBSD, FreeBSD, etc. in the past, but never tangled with the kernel or drivers outside of Linux.
My mother has fried the Ethernet port on her iMac (G4, I think); I recently sent her a Silicom USB U2E (USB-to-Ethernet) dongle, which is evidently not... (2 Replies)
Hi,
Never mind. I think I've found the answer. It appears I was looking for index, match, sub, and gsub.
I want to write a shell script that will clean the HTML out of a bunch of files and format the data for import into Excel.
Awk seems like a powerful tool, but it seems oriented to... (1 Reply)
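For the record, a minimal awk sketch along those lines (the tag-stripping pattern is a crude assumption, not a real HTML parser, and the file names are made up):

   # strip_html.awk - delete anything that looks like a tag, then emit
   # tab-separated fields that Excel can import (assumes simple markup)
   {
       gsub(/<[^>]*>/, "")       # remove <...> tags
       gsub(/&nbsp;/, " ")       # unescape the most common entity
       gsub(/[ \t]+/, "\t")      # collapse whitespace runs to single tabs
       if (length($0) > 0) print
   }

Run as: awk -f strip_html.awk input.html > output.tsv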
Hi guys, I'm modifying an old F77 code in vi, working remotely from my Windows machine via Xming. I'm using tabs to move past the first 6 columns of the code and to keep my loops and if statements neat, but when I hit the tab key, vi displays a big red block, which is... (7 Replies)
Hello,
I've written a ksh awk script to ping multiple servers and write the results to a file. That part is working OK. I then want to extract the names of only the servers which are available. This is indicated by '1 packets received'. The server name actually appears above that line, so I found... (4 Replies)
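The usual awk idiom for "the line I want is above the match" is to carry the previous line along; a sketch, assuming the server name sits on the line immediately above '1 packets received' (the results file name is made up):

   # print the line preceding every "1 packets received" line
   awk '/1 packets received/ { print prev } { prev = $0 }' ping_results.txt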
Hi everyone,
Is there an efficient way to source all of the files contained in a directory? Theoretically I could create a for loop and successively source each file, but I just wanted to check if there was a cleaner method.
Thanks!
Mike (3 Replies)
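A for loop is the usual answer; a minimal sketch, assuming plain sh/ksh and that every regular file in the directory should be sourced (the directory name is illustrative):

   # source every regular file in the directory, skipping subdirectories
   for f in /path/to/conf.d/*; do
       [ -f "$f" ] && . "$f"
   done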
Sorry if this is the wrong place to put this question, but for college we develop small programs in C using Solaris. Most of the time it's OK for us not to document anything, until now.
Every time the program is executed, it must print the OS name.
Does Solaris have some predefined macros which I can include... (3 Replies)
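One way to find out what a given compiler predefines is to dump its macro table; a sketch, assuming GCC is available (Sun's cc has no exact equivalent of -dM). On Solaris you would typically expect markers such as __sun and __SVR4:

   # dump all predefined preprocessor macros and look for Solaris markers
   gcc -dM -E - </dev/null | grep -iE 'sun|svr4'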
Hi,
I am new to shell scripting. Please help me on this. I am using the Solaris 10 OS, and the shell I am using is
# echo $0
-sh
My requirement is: I have a source file, say a makefile. I need to extract the files with extensions (.c | .cxx | .h | .hxx | .sc) from the makefile. After doing so, I need to check whether... (13 Replies)
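As a starting point, a sketch that pulls candidate file names out of the makefile and then tests each for existence (the name pattern is an assumption, and grep -o is a GNU extension, so stock Solaris 10 may need ggrep instead):

   # list unique *.c/*.cxx/*.h/*.hxx/*.sc names referenced in the makefile,
   # then report whether each exists in the current directory
   for f in $(grep -Eo '[A-Za-z0-9_./-]+\.(c|cxx|h|hxx|sc)' makefile | sort -u); do
       [ -f "$f" ] && echo "found: $f" || echo "missing: $f"
   done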
- Concatenate files and delete source files. Also have to add a comment.
- I need to concatenate 3 files which have the same characters in the beginning, then remove those files and add a comment at the end.
Example:
cat REJ_FILE_ABC.txt REJ_FILE_XYZ.txt REJ_FILE_PQR.txt >... (0 Replies)
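A guarded sketch of that sequence, reusing the REJ_FILE_* names from the example (the output file name and comment text are made up):

   # combine the reject files, append a trailing comment, and delete the
   # sources only if the earlier steps succeeded
   cat REJ_FILE_ABC.txt REJ_FILE_XYZ.txt REJ_FILE_PQR.txt > REJ_FILE_ALL.txt &&
   echo "# end of combined reject files" >> REJ_FILE_ALL.txt &&
   rm -f REJ_FILE_ABC.txt REJ_FILE_XYZ.txt REJ_FILE_PQR.txt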
Hi all, can anyone answer my requirement?
I have source location
src_dir="/home/oracle/arun/IRMS-CM"
My Target location
dest_dir="/home/oracle/arun/LiveLink/IRMS-CM/$dc/$pc/$ct"
My source text files look like the example.text file content below:
$fn "\t" $dc "\t" $pc "\t" ... (3 Replies)
LEARN ABOUT REDHAT
WWW::RobotRules(3) User Contributed Perl Documentation WWW::RobotRules(3)NAME
WWW::RobotRules - Parse robots.txt files
SYNOPSIS
   require WWW::RobotRules;
   my $robotsrules = WWW::RobotRules->new('MOMspider/1.0');

   use LWP::Simple qw(get);

   my $url = "http://some.place/robots.txt";
   my $robots_txt = get $url;
   $robotsrules->parse($url, $robots_txt);

   $url = "http://some.other.place/robots.txt";
   $robots_txt = get $url;
   $robotsrules->parse($url, $robots_txt);

   # Now we are able to check if a URL is valid for those servers that
   # we have obtained and parsed "robots.txt" files for.
   if ($robotsrules->allowed($url)) {
       my $c = get $url;
       ...
   }
DESCRIPTION
This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described at
<http://info.webcrawler.com/mak/projects/robots/norobots.html>. Webmasters can use the /robots.txt file to disallow conforming robots
access to parts of their web site.
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can parse multiple /robots.txt files.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
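Taken together, the methods above can be exercised from the command line; a minimal sketch, assuming libwww-perl is installed (the host name and agent string here are invented for illustration):

   perl -MWWW::RobotRules -MLWP::Simple -e '
       my $rules = WWW::RobotRules->new("MyBot/1.0");    # hypothetical agent name
       my $url   = "http://www.example.com/robots.txt";  # hypothetical host
       my $txt   = get($url);
       $rules->parse($url, $txt) if defined $txt;
       print $rules->allowed("http://www.example.com/page.html") ? "allowed\n" : "denied\n";
   '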
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://info.webcrawler.com/mak/projects/robots/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present, the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that
starts with this value will not be retrieved.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
libwww-perl-5.65 2001-04-20 WWW::RobotRules(3)