Hi,
I am writing a script to copy MQ messages from one queue to another. I found the following on a site, but I did not understand it; can anyone explain?
/root/scripts/sap/q -m$Q_MANAGER -i$Q_NAME_SRC_1 -F/logs/mq/MQ_COPYdump_$Q_NAME_SRC_1.$$
/root/scripts/sap/q -m$Q_MANAGER... (0 Replies)
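For context: this looks like IBM's sample "q" program (the MA01 SupportPac). A hedged sketch of the usual dump-and-reload copy pattern follows; the -m (queue manager) and -i (browse this source queue) meanings match the post, while the -F/-f/-o flags and the Q_NAME_TGT variable are assumptions of mine, so check your copy's usage text:

    # Step 1 (as in the post): browse the source queue, dump to a file.
    /root/scripts/sap/q -m"$Q_MANAGER" -i"$Q_NAME_SRC_1" -F"/tmp/mqdump.$$"
    # Step 2 (assumed): load the dump file into a target queue.
    # Q_NAME_TGT is a hypothetical variable for the destination queue.
    /root/scripts/sap/q -m"$Q_MANAGER" -o"$Q_NAME_TGT" -f"/tmp/mqdump.$$"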
I am relatively new to Shell Scripting. I can't understand the following two scripts. Can someone please spare a minute to explain?
1) The contents of file a are:
(021) 654-1234
sed 's/(//g;s/)//g;s/ /-/g' a
021-654-1234
2) cut -d: -f1,3,7 /etc/passwd | sort -t: +1n gives an error (3 Replies)
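On point 1, the sed program chains three substitutions: delete every "(", delete every ")", and replace each space with "-". On point 2, "+1n" is the obsolete pre-POSIX key syntax, which current sort(1) rejects; the equivalent POSIX form uses -k, as in this sketch (sorting numerically on the second colon-separated field):

    # old:  sort -t: +1n      (obsolete, errors on modern sort)
    # new:  sort on field 2 only, numerically
    cut -d: -f1,3,7 /etc/passwd | sort -t: -k2,2n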
Hi All,
Can anybody explain this script, please?
trap 'C_logmsg "F" "CNTL/c OS signal trapped, Script ${G_SCRIPTNAME} terminated"; exit 1' 2
trap 'C_logmsg "F" "Kill Job Event sent from the Console, Script ${G_SCRIPTNAME} terminated"; exit 1' 15 (3 Replies)
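Those two lines install handlers for signal 2 (SIGINT, CTRL-C) and signal 15 (SIGTERM, the default kill signal): when either arrives, the script logs a fatal message through the poster's own C_logmsg function and exits with status 1. A minimal self-contained sketch of the same pattern, with echo standing in for C_logmsg:

    #!/bin/ksh
    trap 'echo "CTRL-C trapped, script terminated" >&2; exit 1' 2
    trap 'echo "kill signal trapped, script terminated" >&2; exit 1' 15
    sleep 60   # press CTRL-C here to see the first handler fire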
Can you please explain what it is doing?
#!/bin/sh
fullyear=`/home/local/bin/datemmdd 1`"."`date +%Y`
uehist=/u05/home/celldba/utility/ue/prod/history
echo $fullyear
cd $uehist
ls -ltr pwroutages.master.$fullyear* | awk '{print $9}' > /u01/home/celldba/tmp/pwroutages_master_all_tmp
while... (2 Replies)
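One note on the ls line: parsing "ls -ltr" output with awk '{print $9}' is fragile (it breaks if a name contains spaces or the long-listing format differs). A sketch of a sturdier equivalent using the same paths from the post: without -l, ls prints just the names, and -tr still orders them oldest first:

    ls -tr pwroutages.master."$fullyear"* \
        > /u01/home/celldba/tmp/pwroutages_master_all_tmp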
Hi All,
I have a ksh script and would like to understand the meaning of the lines below.
The starting lines of the script are as follows:
#!/bin/ksh
#%W% %I% %D% %T% ---- ???
#%W%G --- ???
num_ctrl_files=0
OS=`uname`
if
then
//g' | egrep -v '(.sh:|.ksh:)' | sed 's/^.*://g' | sed 's/^M//g' |... (6 Replies)
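On the two commented lines: %W%, %I%, %D%, %T% and %G% are SCCS keywords, expanded when the file is checked out of SCCS version control with get(1); in a file that never passed through SCCS they stay literal. A sketch of what they mean (the expanded values shown are hypothetical):

    #   %W%  what-string: "@(#)" + file name + tab + SID
    #   %I%  SID, the SCCS version number, e.g. 1.4
    #   %D%  current date (yy/mm/dd)    %T%  current time (hh:mm:ss)
    #   %G%  current date (mm/dd/yy)
    # so after "get s.myscript.ksh" the first line might read:
    #@(#)myscript.ksh 1.4 1.4 04/07/25 10:15:00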
Hi,
I have one script; when I run it I am not getting the correct output, so I want to understand how to feed input to the script. When I run it with help I get the message below.
Thanks
got it (1 Reply)
Hello world! Can someone please explain to me how this code works? It's supposed to find words in a dictionary and show the anagrams of the words.
{
    # for each input word, compute its sorted-letter key
    part = word2key($1)
    data = $1
}
# anagrams yield identical keys; a, i, x and result are local variables
function word2key(word, a, i, x, result)
{
    x = split(word, a, "")   # split the word into single characters (gawk)
    asort(a)                 # sort the characters (gawk extension)
    ... (1 Reply)
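A hedged, self-contained completion of the idea: word2key sorts a word's letters, so every anagram of a word produces the same key (split with "" and asort() are gawk extensions; the full script presumably groups dictionary words by that key):

    gawk 'function word2key(word,   a, i, x, result) {
            x = split(word, a, "")      # split into single characters
            asort(a)                    # sort the characters
            for (i = 1; i <= x; i++)
                result = result a[i]    # rejoin: "listen" -> "eilnst"
            return result
          }
          BEGIN { print word2key("listen"), word2key("silent") }'

Both calls print "eilnst", which is how the anagram match is detected.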
New to Korn shell and having an issue. The following is supposed to read the parameter values from files in a source directory and then pass them on to a log file in a different directory. The ArchiveTracker script is supposed to call the parameterreader script to extract the parameter values and... (3 Replies)
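A minimal ksh sketch of the flow as described; every path, the *.param glob, and the NAME=VALUE file format are hypothetical, since the real parameterreader script is not shown:

    #!/bin/ksh
    SRC_DIR=/data/source          # hypothetical source directory
    LOG=/data/logs/params.log     # hypothetical log in another directory
    for f in "$SRC_DIR"/*.param; do
        while IFS='=' read -r name value; do
            print "${f##*/}: $name=$value" >> "$LOG"
        done < "$f"
    done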
Please help me understand the below 3 lines of code from an "Execute shell" step in Jenkins.
1)APP_IP=$( docker inspect --format '{{ .NetworkSettings.Networks.'"$DOCKER_NETWORK_NAME"'.IPAddress }}' ${PROJECT_NAME_KEY}"-CI" )
2)HOST_WORKSPACE=$(echo ${WORKSPACE} | sed... (1 Reply)
Discussion started by: naresh85 (1 Reply)
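On line 1: docker inspect --format renders a Go template against the container's JSON, so the command extracts the container's IP address on the Docker network named in $DOCKER_NETWORK_NAME; the quote dance splices a shell variable into the single-quoted template. A runnable sketch with hypothetical stand-in values:

    DOCKER_NETWORK_NAME=bridge    # hypothetical network name
    PROJECT_NAME_KEY=web          # hypothetical project key
    APP_IP=$( docker inspect --format \
        '{{ .NetworkSettings.Networks.'"$DOCKER_NETWORK_NAME"'.IPAddress }}' \
        "${PROJECT_NAME_KEY}-CI" )
    echo "$APP_IP"                # e.g. 172.17.0.2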
LEARN ABOUT REDHAT
WWW::RobotRules
WWW::RobotRules(3) User Contributed Perl Documentation WWW::RobotRules(3)
NAME
WWW::RobotRules - Parse robots.txt files
SYNOPSIS
require WWW::RobotRules;
my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';
use LWP::Simple qw(get);
$url = "http://some.place/robots.txt";
my $robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
$url = "http://some.other.place/robots.txt";
my $robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
# Now we are able to check if a URL is valid for those servers that
# we have obtained and parsed "robots.txt" files for.
if($robotsrules->allowed($url)) {
$c = get $url;
...
}
DESCRIPTION
This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described in
<http://info.webcrawler.com/mak/projects/robots/norobots.html> Webmasters can use the /robots.txt file to disallow conforming robots access
to parts of their web site.
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can parse multiple /robots.txt files.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://info.webcrawler.com/mak/projects/robots/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that
starts with this value will not be retrieved.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
libwww-perl-5.65 2001-04-20 WWW::RobotRules(3)