UNIX and Linux Applications: Load above normal, qmail, bogofilter, increase CPU?
Post 302869697 by jim mcnamara on Wednesday 30th of October 2013, 09:57:52 PM
Bumping up posts or double posting is not permitted in these forums.

If you have not already done so, please read the rules, which you agreed to when you registered.

You may receive an infraction for this. If so, don't worry; just try to follow the rules more carefully. The infraction will expire in the near future.

Thank You.

The UNIX and Linux Forums.
 

8 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

CPU load unit of measure?

If unix says my CPU load is 2.15, exactly what does that mean? --Jason (1 Reply)
Discussion started by: Mac J

2. AIX

Application high CPU load

After running for a long period, the network application's CPU load in our system increases slowly, and then it failed at the end. We used the "truss" tool to trace the process and found that it makes kernel calls such as "semop", "semctl", "thread_waitlock", and "kread". The trace log file looks like the... (0 Replies)
Discussion started by: Frank2004

3. Red Hat

High cpu load average

Hi Buddies, thanks for reading my first post... After googling a lot and searching so many forums I am feeling a bit down... Please don't mind my ignorance and my grammar... :) My server is running RHEL 2.6.9-5.EL. The CPU load is going through the roof, almost 100 sometimes. I am... (2 Replies)
Discussion started by: squid04

4. AIX

increase load for testing

Hi, does anyone know of a good procedure or command that will significantly increase the load on my server without crashing it? I want to run some threshold tests and monitor load, CPU, and memory usage. Thanks, Chris. (1 Reply)
Discussion started by: chlawren

5. Linux

How to find the load on CPU?

Hi ALL, I have to develop a script which checks the load on the CPU at regular intervals. I created a simple script which uses the 'uptime' command to find the average load over the last 5 minutes. I then used grep and put the value of the average load in a variable OUT. It was working fine till... (5 Replies)
Discussion started by: vikings.svnit
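For reference, the 1-, 5- and 15-minute figures that uptime reports on Linux can also be read directly from /proc/loadavg, which avoids grepping the uptime output. Below is a minimal sketch in Perl, assuming a Linux system; the warning threshold of 4.0 is an illustrative value only:

    #!/usr/bin/perl
    # A minimal sketch, assuming Linux: read the 1-, 5- and 15-minute load
    # averages straight from /proc/loadavg instead of grepping `uptime` output.
    # The threshold of 4.0 is an illustrative value, not a recommendation.
    use strict;
    use warnings;

    open my $fh, '<', '/proc/loadavg' or die "cannot read /proc/loadavg: $!";
    my $line = <$fh>;
    close $fh;

    my ($one, $five, $fifteen) = split ' ', $line;

    printf "load averages: 1min=%s 5min=%s 15min=%s\n", $one, $five, $fifteen;
    warn "5-minute load average $five is above threshold\n" if $five > 4.0;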

6. Solaris

CPU load of 12.50 in server.

Friends, I have noticed that the Sun Fire V490 server with Solaris 9 OS in my office is showing a load of 12.50 during peak time, with the CPU showing a max of 75% and an average of 60%. The application running on this machine hung last month (for reasons unknown) and is running fine after... (5 Replies)
Discussion started by: Renjesh

7. Shell Programming and Scripting

awk & CPU Load

Dear All, I'm writing a simple awk script to generate some sort of report. The awk script will check 24 files (one file generated each hour over a whole day) and then print one field to another file for counting purposes. The script is working fine but the problem is that the CPU load is very high... (10 Replies)
Discussion started by: charbel

8. Shell Programming and Scripting

how to increase cpu load

Can someone suggest some code, in any language, that will increase both CPU and memory load? (3 Replies)
Discussion started by: learnbash
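One common approach is to fork a few worker processes that each hold a block of memory and spin in a busy loop for a fixed time. Below is a minimal sketch in Perl; the worker count, allocation size, and duration are illustrative values, and it should only be run on a machine you are allowed to stress:

    #!/usr/bin/perl
    # A minimal sketch for generating artificial CPU and memory load by forking
    # workers that each hold a block of memory and spin in a busy loop.
    # Worker count, allocation size, and duration are illustrative values.
    use strict;
    use warnings;

    my $workers = 4;      # roughly one busy loop per CPU core
    my $mb_each = 256;    # memory held by each worker, in MiB
    my $seconds = 60;     # how long each worker runs

    for (1 .. $workers) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        next if $pid;                                   # parent: keep forking

        my $ballast = 'x' x ($mb_each * 1024 * 1024);   # child: hold some memory
        my $stop    = time + $seconds;
        1 while time < $stop;                           # child: burn CPU
        exit 0;
    }
    wait() for 1 .. $workers;                           # parent: reap all workers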
WWW::RobotRules(3) - User Contributed Perl Documentation

NAME
    WWW::RobotRules - Parse robots.txt files

SYNOPSIS
        require WWW::RobotRules;
        my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

        use LWP::Simple qw(get);

        $url = "http://some.place/robots.txt";
        my $robots_txt = get $url;
        $robotsrules->parse($url, $robots_txt);

        $url = "http://some.other.place/robots.txt";
        my $robots_txt = get $url;
        $robotsrules->parse($url, $robots_txt);

        # Now we are able to check if a URL is valid for those servers that
        # we have obtained and parsed "robots.txt" files for.
        if ($robotsrules->allowed($url)) {
            $c = get $url;
            ...
        }

DESCRIPTION
    This module parses a /robots.txt file as specified in "A Standard for
    Robot Exclusion", described in
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>.
    Webmasters can use the /robots.txt file to disallow conforming robots
    access to parts of their web site.

    The parsed file is kept in the WWW::RobotRules object, and this object
    provides methods to check if access to a given URL is prohibited. The
    same WWW::RobotRules object can parse multiple /robots.txt files.

    The following methods are provided:

    $rules = WWW::RobotRules->new($robot_name)
        This is the constructor for WWW::RobotRules objects. The first
        argument given to new() is the name of the robot.

    $rules->parse($robot_txt_url, $content, $fresh_until)
        The parse() method takes as arguments the URL that was used to
        retrieve the /robots.txt file, and the contents of the file.

    $rules->allowed($uri)
        Returns TRUE if this robot is allowed to retrieve this URL.

    $rules->agent([$name])
        Get/set the agent name. NOTE: Changing the agent name will clear the
        robots.txt rules and expire times out of the cache.

ROBOTS.TXT
    The format and semantics of the "/robots.txt" file are as follows (this
    is an edited abstract of
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>):

    The file consists of one or more records separated by one or more blank
    lines. Each record contains lines of the form

        <field-name>: <value>

    The field name is case insensitive. Text after the '#' character on a
    line is ignored during parsing. This is used for comments. The following
    <field-names> can be used:

    User-Agent
        The value of this field is the name of the robot the record is
        describing access policy for. If more than one User-Agent field is
        present the record describes an identical access policy for more
        than one robot. At least one field needs to be present per record.
        If the value is '*', the record describes the default access policy
        for any robot that has not matched any of the other records.

    Disallow
        The value of this field specifies a partial URL that is not to be
        visited. This can be a full path, or a partial path; any URL that
        starts with this value will not be retrieved.

ROBOTS.TXT EXAMPLES
    The following example "/robots.txt" file specifies that no robots should
    visit any URL starting with "/cyberworld/map/" or "/tmp/":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space
        Disallow: /tmp/            # these will soon disappear

    This example "/robots.txt" file specifies that no robots should visit
    any URL starting with "/cyberworld/map/", except the robot called
    "cybermapper":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space

        # Cybermapper knows where to go.
        User-agent: cybermapper
        Disallow:

    This example indicates that no robots should visit this site further:

        # go away
        User-agent: *
        Disallow: /
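    Putting the pieces together, the following is a minimal, self-contained
    sketch that feeds rules equivalent to the first example above to parse()
    and then tests a few URLs with allowed(). The robot name, host, and URLs
    are illustrative only:

        #!/usr/bin/perl
        # A minimal sketch: build a robots.txt string in memory, parse it,
        # and check a few URLs against it. Robot name, host, and URLs are
        # made up for illustration.
        use strict;
        use warnings;
        use WWW::RobotRules;

        my $rules = WWW::RobotRules->new('MOMspider/1.0');

        # Rules equivalent to the first example above.
        my $robots_txt =
              "User-agent: *\n"
            . "Disallow: /cyberworld/map/\n"
            . "Disallow: /tmp/\n";

        $rules->parse('http://some.place/robots.txt', $robots_txt);

        for my $url ('http://some.place/index.html',
                     'http://some.place/tmp/scratch.txt',
                     'http://some.place/cyberworld/map/town.html') {
            printf "%-10s %s\n",
                $rules->allowed($url) ? 'allowed' : 'disallowed', $url;
        }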
SEE ALSO
    LWP::RobotUA, WWW::RobotRules::AnyDBM_File