I've been trying to execute a script which calls some SQL queries for a midnight report. Using crontab, I got the following message:
Message file sp1<lang>.msb not found
Error 6 initializing SQL*Plus
Message file sp1<lang>.msb not found
Error 6 initializing SQL*Plus
If I use the "at" command it... (4 Replies)
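Error 6 with the sp1<lang>.msb message almost always means ORACLE_HOME is not set in cron's minimal environment, so SQL*Plus cannot locate its message files. A minimal sketch of a cron-safe wrapper, assuming a hypothetical Oracle home and SID (adjust both for your site):

#!/bin/sh
# Wrapper for the midnight report: cron does not read your .profile,
# so the Oracle environment must be set here.
ORACLE_HOME=/u01/app/oracle/product/10.2.0   # hypothetical path; adjust
ORACLE_SID=ORCL                              # hypothetical SID; adjust
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME ORACLE_SID PATH

# With ORACLE_HOME set, sqlplus can find $ORACLE_HOME/sqlplus/mesg/sp1*.msb
sqlplus -s user/password @/path/to/midnight_report.sql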
hi all
yes, I am a beginner in Unix (Sun Solaris 8)...
what is the command to run files....
single and batch jobs....any help would be great...
Cheers
ps... I know that to run files in SQL*Plus from the Unix prompt I can use the '@' sign... but how does this work?
Cheers
E (2 Replies)
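On the @ question: SQL*Plus reads and runs a script file when its name is prefixed with @, either on the command line or at the SQL> prompt, and batch jobs are just a driver script that calls the others. A small sketch with placeholder credentials and file names:

# Run one script non-interactively (-s suppresses banners):
sqlplus -s scott/tiger @report.sql

# The same file runs inside a session with:
#   SQL> @report.sql

# For a batch, make a driver script (batch.sql) that just contains
#   @report1.sql
#   @report2.sql
# and run it the same way:
sqlplus -s scott/tiger @batch.sql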
Problem:
I have a crontab entry that kicks off the shell script xxx.sh, which starts a nohup sqlplus session. The problem is that the SQL does not get executed and the text file is not created; only an empty log file is created. Are there any constraints for crontab to open a sql... (6 Replies)
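Cron places no special constraint on SQL, but it starts xxx.sh without the login environment, and an empty log usually means stderr was never captured. One pattern that tends to work, with hypothetical paths and credentials:

#!/bin/sh
# xxx.sh - hypothetical rewrite; set the Oracle environment first
ORACLE_HOME=/u01/app/oracle/product/10.2.0; export ORACLE_HOME   # adjust
ORACLE_SID=ORCL; export ORACLE_SID                               # adjust
PATH=$ORACLE_HOME/bin:$PATH; export PATH

# Redirect stderr too, so failures show up in the log instead of
# leaving it empty; spool writes the expected text file.
sqlplus -s user/password <<EOF > /tmp/xxx.log 2>&1
spool /tmp/report.txt
select sysdate from dual;
spool off
exit
EOF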
Hi,
I wrote a shell script that removes directories depending on the result of a query on an Oracle database.
When I execute the shell script directly from Unix, it works fine. But when I put it in a crontab line, it no longer works, because of the sql... (1 Reply)
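A sketch of one way to make such a script cron-proof, assuming a hypothetical profile, schema, and query; the idea is to source the environment and capture the query output before removing anything:

#!/bin/sh
# Hypothetical: source the Oracle environment, since cron will not.
. /home/oracle/.profile

# Capture a single-column result; the SET line strips headings and noise.
dirs=`sqlplus -s user/password <<EOF
set pagesize 0 feedback off verify off heading off
select dir_name from obsolete_dirs;
exit
EOF`

# Remove only when the query returned something (names split on whitespace).
for d in $dirs; do
    [ -d "/data/$d" ] && rm -rf "/data/$d"
done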
Hi experts,
need your help.
After editing the crontab, while saving the file it says:
"/tmp/crontabRlaauT" 1 line, 77 characters
cron may not be running - call your system administrator
I checked after some time, and the script in cron is not running. I got a mail for the user saying... (1 Reply)
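That warning from crontab usually means the cron daemon itself is not running, so the entry is saved but never fires. Some quick checks (daemon name and start command vary by platform):

# Is the cron daemon running?
ps -ef | grep '[c]ron'

# If not, have root start it (Solaris shown; Linux typically
# uses 'service crond start' or /etc/init.d/crond):
/etc/init.d/cron start

# Confirm the entry was saved:
crontab -l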
Hi All,
Here is a command I've scheduled to run from crontab, but it gives the error message: rah: rahhost executable needed but not in PATH
My command is:
36 10 * * * /opt/IBM/dwe/db2/V9.5/bin/rah "df -m" >> /db2home/bculinux/Files/log/db.out 2>&1
Though I've added path... (5 Replies)
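cron's default PATH is very short, so helpers that rah spawns (such as rahhost) are not found even when rah itself is called by full path. Two common workarounds, assuming the instance owner's db2profile lives in the usual place:

# Variant 1: source the instance owner's profile before calling rah
36 10 * * * . /db2home/bculinux/sqllib/db2profile; /opt/IBM/dwe/db2/V9.5/bin/rah "df -m" >> /db2home/bculinux/Files/log/db.out 2>&1

# Variant 2: set PATH at the top of the crontab
# (Vixie/Linux cron honors this; Solaris cron does not)
PATH=/opt/IBM/dwe/db2/V9.5/bin:/usr/bin:/bin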
Hi guys, today I bring you a new problem that I need to solve.
I need to create a script that:
connects to a database
logs on
runs a .sql script
logs off
and closes the connection
and after that, put the script in the crontab.
Setting up the crontab is OK for me, I think... (3 Replies)
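A sketch of the whole flow, with placeholder connection details, script names, and schedule: one shell script wraps a single sqlplus session (connect, run the .sql, exit), and a crontab line runs it nightly:

#!/bin/sh
# run_report.sh - placeholder names throughout
ORACLE_HOME=/u01/app/oracle/product/11.2.0; export ORACLE_HOME   # hypothetical
PATH=$ORACLE_HOME/bin:$PATH; export PATH

# One session: connect, run the script, disconnect.
sqlplus -s user/password@MYDB <<EOF >> /home/me/report.log 2>&1
@/home/me/scripts/report.sql
exit
EOF

# crontab entry (placeholder schedule): run every night at 00:05
# 5 0 * * * /home/me/run_report.sh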
Hi All,
I have the below script scheduled to run from crontab, but it doesn't run.
1 * * * * /home/cobr_ext/test.sh > /home/cobr_ext/temp.txt
When I run it manually, it runs without any issues.
Could you please help me with why it doesn't run the script? :( (7 Replies)
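Since the entry works by hand, the first step is usually to capture stderr as well, and to double-check the schedule field; a variant of the same line (paths unchanged from the original):

# Note: '1 * * * *' fires at minute 1 of every hour, not once a day.
1 * * * * /home/cobr_ext/test.sh > /home/cobr_ext/temp.txt 2>&1

# If temp.txt then shows 'not found' style errors, set the needed
# environment inside test.sh itself (cron does not read your .profile).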
hi,
I have a crontab where I have put my DB backup scripts, but cron does not run them at the specified time. I have checked the scripts, and when I run one as ./mydbbkp.sh it successfully runs and does the job, but from cron it is not running, which means cron is not running. I have an oracle user under... (3 Replies)
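A useful first check is whether cron even attempted the job, since most systems log every invocation (log locations differ by platform):

# Solaris records each cron run in /var/cron/log; many Linux systems
# use /var/log/cron or syslog.
grep mydbbkp.sh /var/cron/log
grep mydbbkp.sh /var/log/cron

# Confirm the crontab really contains the entry for this user:
crontab -l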
Hi All,
I want to run multiple SQL queries and store their output in variables, but I want to call sqlplus only once. Is there a way to do this without running sqlplus twice? Please advise.
E.g.:
Select 'Query 1 output' from dual;
Select 'Query 2 output' from dual;
I want to... (3 Replies)
Discussion started by: Rokkesh
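One sqlplus call can return both values if each query emits a single line and the shell splits the output afterwards; a minimal sketch with the two sample queries and placeholder credentials:

#!/bin/sh
# Both queries run in one sqlplus session; output is captured once.
out=`sqlplus -s user/password <<EOF
set pagesize 0 feedback off verify off heading off
select 'Query 1 output' from dual;
select 'Query 2 output' from dual;
exit
EOF`

# Split the two result lines into two variables.
var1=`echo "$out" | sed -n 1p`
var2=`echo "$out" | sed -n 2p`
echo "first : $var1"
echo "second: $var2"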
LEARN ABOUT REDHAT
WWW::RobotRules
WWW::RobotRules(3) User Contributed Perl Documentation WWW::RobotRules(3)
NAME
WWW::RobotRules - Parse robots.txt files
SYNOPSIS
require WWW::RobotRules;
my $robotsrules = WWW::RobotRules->new('MOMspider/1.0');
use LWP::Simple qw(get);
my $url = "http://some.place/robots.txt";
my $robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
$url = "http://some.other.place/robots.txt";
$robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
# Now we are able to check if a URL is valid for those servers that
# we have obtained and parsed "robots.txt" files for.
if ($robotsrules->allowed($url)) {
    my $c = get $url;
    ...
}
DESCRIPTION
This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described in
<http://info.webcrawler.com/mak/projects/robots/norobots.html> Webmasters can use the /robots.txt file to disallow conforming robots access
to parts of their web site.
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can parse multiple /robots.txt files.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://info.webcrawler.com/mak/projects/robots/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that
starts with this value will not be retrieved.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
libwww-perl-5.65 2001-04-20 WWW::RobotRules(3)