Hi all, I am a newbie to UNIX programming and there is an issue with this code I am hoping you can help me correct.
I have two files (system_files with 8342 records and rules1.txt with 762 records). My understanding of the script below is that it reads rules1.txt and, if it finds a match in system_files, skips that record and does not write it to the temp file, so the temp file should end up with only 7580 records. After I run the script, I am getting over 12 million records in the temp file, with each record duplicated thousands of times.
Any ideas where this script is going wrong? Thanks
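The usual cause of output like this is a nested read loop that writes a line for every non-matching pair of records (8342 x 762 is already about 6.4 million comparisons). Since the script itself is not shown, here is a minimal sketch of the single-pass filter the description implies, assuming one record per line and that the expected count of 7580 (8342 - 762) means the temp file should hold the system_files records with no match in rules1.txt:

    # -F fixed strings, -x whole-line match, -v keep only non-matching
    # lines, -f read the patterns from a file
    grep -Fxvf rules1.txt system_files > temp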
Hi
I need to compare shadow file sizes with their real-file counterparts. If a shadow file's size differs from the real file's size, the script must send a mail. My problem is that our system has over 1600 shadow files in different directories with different names; the only consistency is the .sh file... (4 Replies)
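One hedged way to approach this, assuming each shadow file is named <realfile>.sh and sits next to its real counterpart (the post does not say, so adjust the find pattern and the name mangling to the real layout; /data and the mail address are placeholders):

    #!/bin/sh
    find /data -type f -name '*.sh' | while read -r shadow; do
        real=${shadow%.sh}                 # strip the .sh suffix
        [ -f "$real" ] || continue         # skip shadows with no real file
        s1=$(wc -c < "$shadow")
        s2=$(wc -c < "$real")
        if [ "$s1" -ne "$s2" ]; then
            echo "size mismatch: $shadow ($s1) vs $real ($s2)" |
                mail -s "shadow file size mismatch" admin@example.com
        fi
    done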
Hi Gurus,
I have a hosts file with around 1000 entries, and I have been asked to change around 500 of them. The new entries are in a file called hosts.new.
I tried using the diff command to find the uncommon ones, but the output was very confusing.
I would appreciate it if you could help me out.... (12 Replies)
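For a line-oriented comparison, comm(1) is usually easier to read than raw diff output. It needs sorted input, so sort into temporaries first. A minimal sketch:

    sort hosts     > hosts.sorted
    sort hosts.new > hosts.new.sorted
    comm -23 hosts.sorted hosts.new.sorted   # entries only in hosts
    comm -13 hosts.sorted hosts.new.sorted   # entries only in hosts.new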
Hi All,
I have to compare a set of files, so I created a case statement with the option to give more than one file to compare. The problem I am now facing is: if I compare the files directly from the prompt, or use the script on just one particular file, it says there is no difference, but if I... (4 Replies)
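The post is cut off, but a common cause of a script reporting no difference when the files do differ is testing the wrong thing after diff/cmp (for example, "if [ $? ]" is always true). A hedged sketch of the case-driven pattern, testing the exit status directly; the name filter and the /backup path are placeholders:

    #!/bin/sh
    for f in "$@"; do
        case $f in
            *.conf|*.txt)                        # hypothetical filter
                if cmp -s "$f" "/backup/$f"; then
                    echo "$f: no difference"
                else
                    echo "$f: files differ"
                fi
                ;;
            *)  echo "skipping $f" ;;
        esac
    done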
I have two files with data like the samples below:
file1.txt:
AAA,Apples,123
BBB,Bananas,124
CCC,Carrot,125
file2.txt:
Store1|AAA|123|11
Store2|BBB|124|23
Store3|CCC|125|57
Store4|DDD|126|38
So, the field separator in file1.txt is a comma and in file2.txt it is |.
Now, the output should be... (2 Replies)
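The expected output is elided above, but a typical way to join the two files on their shared key (field 1 of file1.txt, field 2 of file2.txt) is a single awk pass; adjust the print statement to the layout actually required:

    awk -F, 'NR==FNR { name[$1] = $2; next }     # file1: key -> name
             { split($0, f, "|")                 # file2 uses | as separator
               if (f[2] in name)
                   print f[1], f[2], name[f[2]], f[3], f[4]
             }' file1.txt file2.txt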
Hi Guys,
We have one directory into which all the files are placed each day.
Each file must have a header, contents, and a footer.
I want to compare the header, contents, and footer of the files; if they are the same, display an error message such as 'file contents are the same'. (7 Replies)
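A hedged sketch, assuming "header" means the first line, "footer" the last line, and "contents" everything in between (the post does not define them); $1 and $2 are the two files to compare:

    #!/bin/sh
    part() {                     # print the header, body, or footer of a file
        case $1 in
            header) head -n 1 "$2" ;;
            body)   sed '1d;$d' "$2" ;;
            footer) tail -n 1 "$2" ;;
        esac
    }
    same=yes
    for p in header body footer; do
        [ "$(part $p "$1")" = "$(part $p "$2")" ] || same=no
    done
    [ $same = yes ] && echo "error: file contents are the same" >&2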
I hope I can explain this correctly. I am using Bash-4.2 for my shell.
I have a group of file names held in an array. I want to compare the names in this array against the names of the files currently present in a directory. If a file does not exist in the directory, that is not a problem.... (5 Replies)
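The post is truncated, so the action to take is unclear; this Bash 4.2 sketch just shows the membership test in both directions, with the array contents and the directory as hypothetical placeholders:

    #!/bin/bash
    files=(alpha.conf beta.conf gamma.conf)   # hypothetical names
    dir=/some/dir                             # hypothetical directory
    declare -A want
    for f in "${files[@]}"; do want[$f]=1; done
    for path in "$dir"/*; do                  # present but not in the array
        name=${path##*/}
        [[ ${want[$name]} ]] || echo "not in array: $name"
    done
    for f in "${files[@]}"; do                # in the array but not present
        [[ -e $dir/$f ]] || echo "not in directory: $f"
    done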
more prod.properties
# remote connection details
cdr_url=http://myprod.col.net:1890/service
cdr_user=user1
cdr_pswd=pass11
boot_time=ON
more back.properties
cdr_url=http://myback.col.net:1890/service
cdr_user=user1
cdr_pswd=pass11
storage=file
I need to compare the back.properties... (6 Replies)
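A minimal sketch that compares the two .properties files key by key, skipping comment lines; which differences actually matter (cdr_url, for instance, is presumably expected to differ between prod and back) is left to the reader:

    awk -F= '/^[^#]/ && NF >= 2 {
                 if (NR == FNR) { a[$1] = $2; next }
                 if (!($1 in a))       print "only in back:", $1
                 else if (a[$1] != $2) print "differs:", $1, "prod=" a[$1], "back=" $2
                 delete a[$1]
             }
             END { for (k in a) print "only in prod:", k }
            ' prod.properties back.properties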
I have two files, shown below, containing the ACL permissions of a set of files. I need to compare the source file with the target file and list the differences in the required output format specified below. Can someone help me with this?
Source File
*************
# file: /local/test_1
# owner: own
#... (4 Replies)
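The required output format is elided above, but a reasonable first step is to diff the listings stanza by stanza. This sketch uses GNU csplit (the '{*}' repeat count is a GNU extension) to break each listing into one block per "# file:" header and diffs matching blocks; it assumes both listings name the same files in the same order, and source.txt/target.txt are placeholder names:

    csplit -s -z -f src_ source.txt '/^# file:/' '{*}'
    csplit -s -z -f tgt_ target.txt '/^# file:/' '{*}'
    for s in src_*; do
        t=tgt_${s#src_}
        [ -f "$t" ] && diff "$s" "$t"
    done
    rm -f src_* tgt_*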
WWW::RobotRules(3)     User Contributed Perl Documentation     WWW::RobotRules(3)

NAME
WWW::RobotRules - database of robots.txt-derived permissions
SYNOPSIS
        use WWW::RobotRules;
        my $rules = WWW::RobotRules->new('MOMspider/1.0');

        use LWP::Simple qw(get);

        {
            my $url = "http://some.place/robots.txt";
            my $robots_txt = get $url;
            $rules->parse($url, $robots_txt) if defined $robots_txt;
        }

        {
            my $url = "http://some.other.place/robots.txt";
            my $robots_txt = get $url;
            $rules->parse($url, $robots_txt) if defined $robots_txt;
        }

        # Now we can check if a URL is valid for those servers
        # whose "robots.txt" files we've gotten and parsed:
        if ($rules->allowed($url)) {
            $c = get $url;
            ...
        }
DESCRIPTION
This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", at <http://www.robotstxt.org/wc/norobots.html>.
Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site.
The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://www.robotstxt.org/wc/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
The User-Agent fields must occur before the Disallow fields. If a record contains a User-Agent field after a Disallow field, that
constitutes a malformed record. This parser will assume that a blank line should have been placed before that User-Agent field, and
will break the record into two. All the fields before the User-Agent field will constitute a record, and the User-Agent field will be
the first field in a new record.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path or a partial path; any URL that
starts with this value will not be retrieved.
Unrecognized records are ignored.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
This is an example of a malformed robots.txt file.
# robots.txt for ancientcastle.example.com
# I've locked myself away.
User-agent: *
Disallow: /
# The castle is your home now, so you can go anywhere you like.
User-agent: Belle
Disallow: /west-wing/ # except the west wing!
# It's good to be the Prince...
User-agent: Beast
Disallow:
This file is missing the required blank lines between records. However, the intention is clear.
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
perl v5.12.1 2009-10-03 WWW::RobotRules(3)