Hi All,
I have a root directory /tmp and I want to purge or archive files in its subfolders. I have listed the paths of the files I want to purge (or archive) and the number of days.
(purge)
DAYS PATH
7 /tmp/arsenal/*
5 /tmp/chelsea/*
(archive)
The same as above, but with different folders... (15 Replies)
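A minimal Perl sketch of the purge half, assuming the day/path pairs above are hardcoded in a hash (the real list could just as well come from a config file); the archive case would replace unlink with a move into an archive directory:

    #!/usr/bin/perl
    # Purge sketch: delete files older than N days under each path.
    use strict;
    use warnings;

    my %config = (            # day/path pairs from the question
        '/tmp/arsenal' => 7,
        '/tmp/chelsea' => 5,
    );

    while (my ($dir, $days) = each %config) {
        for my $file (glob "$dir/*") {
            next unless -f $file;
            # -M gives the file's age in days at script start time
            unlink $file or warn "cannot remove $file: $!"
                if -M $file > $days;
        }
    }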
I posted a week ago regarding this scripting question, but I need to revisit it and have a few more questions answered.
User cfajohnson was extremely helpful with the archive script, but clarification on my part is needed to help steer the answer in a direction that works in this particular... (5 Replies)
Dear all,
I have a problem. I have created a master server on which I have installed Solaris 10 as well as Oracle 10g with some additional Solaris packages. Now I want to create a flash archive of this server and install that flash archive on another server, so that the new server will have... (6 Replies)
Hi All,
I would like to extract a specific file from a zip archive.
I have a zip archive "sample.zip".
sample.zip contains a few text files and images: text1.txt, text2.txt, pic.jpg, etc.
I need to read the specific file "text2.txt" from "sample.zip" WITHOUT EXTRACTING the zip file.
... (4 Replies)
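One way to do this, as a sketch: Archive::Zip can return a single member's contents as a string without unpacking anything to disk (the module choice is my assumption; the file names are from the question):

    use strict;
    use warnings;
    use Archive::Zip qw(:ERROR_CODES);

    my $zip = Archive::Zip->new();
    $zip->read('sample.zip') == AZ_OK or die "cannot read sample.zip";

    # contents() returns the member's data in memory; nothing is written to disk.
    my $text = $zip->contents('text2.txt');
    die "text2.txt not found in archive" unless defined $text;
    print $text;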
Hello,
I have a problem using Archive::Tar. It seems very trivial, but I cannot get it to work.
First I have a list of files I grab from a directory. Then I create a tar archive and write the files into the archive. Everything works great, except that I cannot properly extract the files.
What... (0 Replies)
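For reference, a minimal Archive::Tar round trip, creating a compressed archive from a file list and extracting it again (the file names here are placeholders):

    use strict;
    use warnings;
    use Archive::Tar;

    # Create: add the files and write a gzip-compressed archive.
    my @files = glob 'logs/*.log';            # placeholder file list
    my $tar = Archive::Tar->new();
    $tar->add_files(@files) or die $tar->error;
    $tar->write('backup.tar.gz', COMPRESS_GZIP) or die $tar->error;

    # Extract: members come back under the same relative paths.
    my $in = Archive::Tar->new('backup.tar.gz', 1) or die "cannot read archive";
    $in->extract() or die $in->error;

One frequent cause of "cannot extract" surprises is adding files with absolute paths: by default, Archive::Tar will not extract files outside the current working directory.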
Hi there,
I have one huge archive (it's a system image).
I sometimes need to create smaller archives with only one or two files from my big archive.
So I'm looking for a command that extracts files from one archive and pipes them into another.
I tried the following:
tar -xzOf oldarchive.tgz... (5 Replies)
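Note that tar -O writes only the members' data to stdout, stripping the tar headers, so its output cannot simply be piped into a second tar to build a new archive. A sketch of one alternative, repacking selected members in Perl with Archive::Tar (archive and member names are illustrative):

    use strict;
    use warnings;
    use Archive::Tar;

    # Read the big compressed archive (second argument 1 = compressed).
    my $old = Archive::Tar->new('oldarchive.tgz', 1)
        or die "cannot read oldarchive.tgz";

    my $new = Archive::Tar->new();
    # Copy just the wanted members, data and metadata, without touching disk.
    for my $member ($old->get_files('etc/hosts', 'etc/passwd')) {
        $new->add_data($member->full_path, $member->get_content,
                       { mode => $member->mode, mtime => $member->mtime });
    }
    $new->write('small.tgz', COMPRESS_GZIP) or die $new->error;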
I disabled the boot-archive service by using
#svcadm disable svc:/system/boot-archive:default
Then I rebooted my system, but I am unable to boot. It throws the following errors:
CONSOLE LOGIN SERVICE(S) CANNOT RUN
Then it automatically asked me for the maintenance mode password.
I logged... (3 Replies)
Requirement:
In our Fuse application we have placeholders called containers;
every container has its logs under:
<container1>/data/log/fuse.log
<container1>/data/log/fuse.log.1
<container1>/data/log/fuse.log.XX
<container2>/data/log/fuse.log... (6 Replies)
Discussion started by: Arjun Goswami
LEARN ABOUT REDHAT
WWW::RobotRules
WWW::RobotRules(3)              User Contributed Perl Documentation              WWW::RobotRules(3)

NAME
WWW::RobotRules - Parse robots.txt files
SYNOPSIS
    require WWW::RobotRules;
    my $robotsrules = WWW::RobotRules->new('MOMspider/1.0');

    use LWP::Simple qw(get);

    $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    $url = "http://some.other.place/robots.txt";
    $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    # Now we are able to check if a URL is valid for those servers
    # whose "robots.txt" files we have obtained and parsed.
    if ($robotsrules->allowed($url)) {
        $c = get $url;
        ...
    }
DESCRIPTION
This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", described at
<http://info.webcrawler.com/mak/projects/robots/norobots.html>. Webmasters can use the /robots.txt file to forbid conforming robots
access to parts of their web site.
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can parse multiple /robots.txt files.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
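A short sketch of the caching behavior described above (the robot names and the inline robots.txt content are made up for illustration):

    use strict;
    use warnings;
    use WWW::RobotRules;

    my $rules = WWW::RobotRules->new('MyBot/1.0');

    # Parse an inline robots.txt (normally fetched with LWP::Simple).
    $rules->parse('http://example.com/robots.txt',
                  "User-agent: *\nDisallow: /private/\n");

    print $rules->allowed('http://example.com/private/x.html')
        ? "allowed\n" : "blocked\n";     # prints "blocked"

    # Switching agents clears the cached rules and expire times, so the
    # parsed robots.txt data above no longer applies afterwards.
    $rules->agent('OtherBot/2.0');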
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://info.webcrawler.com/mak/projects/robots/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present, the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path or a partial path; any URL that
starts with this value will not be retrieved.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
libwww-perl-5.65 2001-04-20 WWW::RobotRules(3)