Operating Systems > AIX > Script for real time network traffic per process
Post 302780959 by joeyg on 03-15-2013, 10:16 AM
Bumping up posts or double posting is not permitted in these forums.

Please read the rules, which you agreed to when you registered, if you have not already done so.

You may receive an infraction for this. If so, don't worry; just try to follow the rules more carefully. The infraction will expire in the near future.

Thank You.

The UNIX and Linux Forums.
 

8 More Discussions You Might Find Interesting

1. Cybersecurity

How to capture network traffic

Hi, can someone give me a clue on how to capture network traffic at a gateway? Thanks. (2 Replies)
Discussion started by: kayode
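tcpdump is the usual starting point for this; a minimal sketch, where the interface name and the output path are assumptions to adjust:

Code:
# Capture everything crossing the gateway's LAN-facing interface.
# -n: no name resolution, -s 0: full packets, -w: write raw capture.
tcpdump -i eth0 -n -s 0 -w /tmp/gateway.pcap

# Later, read the capture back for analysis:
tcpdump -n -r /tmp/gateway.pcap | head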

2. Shell Programming and Scripting

Perl or Shell script to read a transaction log in real time

Hello, I have an Apache webserver running on RedHat. Its primary function is a proxy server for users accessing the internet. I have a transaction log that records every transaction of every user. For users trying to access certain sites/content, the transactions go into a 302 redirect loop and... (2 Replies)
Discussion started by: bruno406
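For the real-time part, tail -F piped into awk usually suffices; a minimal sketch, assuming a combined-format access log where the client address is field 1 and the status code is field 9 (the log path and field positions are assumptions to adjust):

Code:
#!/bin/bash
# Follow the proxy's access log and flag clients stuck in a 302 loop.
LOG=/var/log/httpd/access_log     # assumed log location
tail -Fn0 "$LOG" | awk '
    $9 == 302 {                   # HTTP status in field 9 (assumed layout)
        hits[$1]++                # count 302s per client address
        if (hits[$1] % 10 == 0)   # raise a note every 10th redirect
            printf "possible redirect loop: %s (%d x 302)\n", $1, hits[$1]
    }'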

3. Infrastructure Monitoring

Network Traffic

Hi all, Got a strange one here, well not so much strange, different :-) I need to work out if a server is particularly chatty, whether it's talking / communicating heavily to a particular server, as I'm planning to physically move the server to a different server, over a link. Hence the... (6 Replies)
Discussion started by: sbk1972
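One quick way to gauge this before the move is to count packets exchanged with the suspected peer over a fixed window; a minimal sketch, where the interface and peer address are placeholders:

Code:
# How chatty is this host toward one particular server?
PEER=192.0.2.10    # placeholder peer address
# Count packets to/from the peer during a 60-second sample.
# timeout is GNU coreutils; without it, use tcpdump -c or Ctrl-C.
timeout 60 tcpdump -i eth0 -n host "$PEER" 2>/dev/null | wc -l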

4. Shell Programming and Scripting

Shell script to replicate the log files from one location to another in real time

Hi, On the server, we have app log files in this location: /app/logs/error.log. On the same server, we would like to replicate that in real time into the /var/ directory. If someone has already done this, please share the script. Thanks in advance. (4 Replies)
Discussion started by: lookinginfo
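A minimal polling sketch with rsync (assumed to be installed); --append-verify transfers only the newly appended bytes, which suits a steadily growing log:

Code:
#!/bin/bash
# Mirror the app log into /var in near real time by polling.
SRC=/app/logs/error.log
DEST=/var/error.log
while true; do
    rsync --append-verify "$SRC" "$DEST"   # copy only the new tail bytes
    sleep 2                                # polling interval, tune to taste
done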

5. Shell Programming and Scripting

Shell script to convert epoch time to real time

Dear experts, I have an epoch time input file such as: 1302451209564 1302483698948 1302485231072 1302490805383 1302519244700 1302492787481 1302505299145 1302506557022 1302532112140 1302501033105 1302511536485 1302512669550 I need the epoch time above to be converted into real... (4 Replies)
Discussion started by: aismann
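Those values are 13 digits, i.e. epoch milliseconds, so they need dividing by 1000 before conversion; a minimal sketch assuming GNU date (for date -d @SECONDS) and an input file named epoch.txt:

Code:
#!/bin/bash
# Convert epoch-millisecond values to human-readable timestamps.
while read -r ms; do
    [ -n "$ms" ] || continue                  # skip blank lines
    date -d "@$((ms / 1000))" '+%Y-%m-%d %H:%M:%S'
done < epoch.txt                              # assumed input file name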

6. Shell Programming and Scripting

Script for real time network traffic per process

Hi All Gurus, I want to write a script (bash/ksh/csh) which will show real-time network traffic (TCP or UDP) generated per process/PID, for both Linux and AIX systems, the way nethogs (a Linux package) does. Any suggestion is most welcome. Thanks in Advance, Amritendu Das (3 Replies)
Discussion started by: linux.amrit
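Neither stock Linux nor AIX exposes per-process byte rates to a plain script, so any portable answer is an approximation. A minimal sketch that polls lsof (assumed installed on both systems) and shows open-socket counts per process, a rough stand-in for what nethogs reports:

Code:
#!/bin/bash
# Rough stand-in for nethogs: poll open network sockets per process.
# Shows socket counts per PID, not byte rates; true per-process byte
# accounting needs nethogs (Linux) or kernel-level instrumentation.
INTERVAL=${1:-5}                  # sampling interval in seconds
while true; do
    clear; date
    # -i: network files only; -n -P: skip DNS and port-name lookups
    lsof -i -n -P 2>/dev/null |
        awk 'NR > 1 { n[$1 " (PID " $2 ")"]++ }
             END    { for (p in n) printf "%6d sockets  %s\n", n[p], p }' |
        sort -rn
    sleep "$INTERVAL"
done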

7. Infrastructure Monitoring

How do I know what traffic is in network port?

If I would like to know what connections, data, and traffic are on a network port (eth0), what can I do? P.S. I always find the network is very slow, so I would like to know what the network port is doing. Thanks. Login ID ust3 is currently in read-only mode for multiple infractions. Creating... (0 Replies)
Discussion started by: ust03
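On Linux, the interface byte counters in /proc/net/dev give a quick throughput reading without extra packages; a minimal sketch, assuming the usual "eth0: <rx bytes> ..." layout of that file:

Code:
#!/bin/bash
# Sample eth0 byte counters twice and print rough B/s figures.
IF=eth0
read -r rx1 tx1 < <(awk -v i="$IF:" '$1 == i {print $2, $10}' /proc/net/dev)
sleep 5
read -r rx2 tx2 < <(awk -v i="$IF:" '$1 == i {print $2, $10}' /proc/net/dev)
echo "rx: $(( (rx2 - rx1) / 5 )) B/s   tx: $(( (tx2 - tx1) / 5 )) B/s"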

8. IP Networking

I would like to monitor network traffic for a computer on my network

My son does homework on a school laptop. I was thinking about setting up a gateway on my home network so that I can monitor web traffic and know if he is doing his homework without standing over his shoulder. Ideally I would like to use the Raspberry Pi Model B that I already have. However, I... (15 Replies)
Discussion started by: gandolf989
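Once the Pi is routing the laptop's traffic, one lightweight option is to log its DNS lookups, which reveal the sites visited without deep packet inspection; a minimal sketch, where the interface and the laptop's address are placeholders:

Code:
# Log the laptop's DNS queries as they pass through the Pi gateway.
CLIENT=192.168.1.50    # placeholder laptop IP
tcpdump -i eth0 -l -n "src host $CLIENT and udp port 53" |
    tee -a /var/log/dns-queries.log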
WWW::RobotRules(3)        User Contributed Perl Documentation        WWW::RobotRules(3)

NAME
    WWW::RobotRules - Parse robots.txt files

SYNOPSIS
    require WWW::RobotRules;
    my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

    use LWP::Simple qw(get);

    $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    $url = "http://some.other.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    # Now we are able to check if a URL is valid for those servers that
    # we have obtained and parsed "robots.txt" files for.
    if ($robotsrules->allowed($url)) {
        $c = get $url;
        ...
    }

DESCRIPTION
    This module parses a /robots.txt file as specified in "A Standard for
    Robot Exclusion", described in
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>.
    Webmasters can use the /robots.txt file to disallow conforming robots
    access to parts of their web site.

    The parsed file is kept in the WWW::RobotRules object, and this object
    provides methods to check if access to a given URL is prohibited. The
    same WWW::RobotRules object can parse multiple /robots.txt files.

    The following methods are provided:

    $rules = WWW::RobotRules->new($robot_name)
        This is the constructor for WWW::RobotRules objects. The first
        argument given to new() is the name of the robot.

    $rules->parse($robot_txt_url, $content, $fresh_until)
        The parse() method takes as arguments the URL that was used to
        retrieve the /robots.txt file, and the contents of the file.

    $rules->allowed($uri)
        Returns TRUE if this robot is allowed to retrieve this URL.

    $rules->agent([$name])
        Get/set the agent name. NOTE: Changing the agent name will clear
        the robots.txt rules and expire times out of the cache.

ROBOTS.TXT
    The format and semantics of the "/robots.txt" file are as follows
    (this is an edited abstract of
    <http://info.webcrawler.com/mak/projects/robots/norobots.html>):

    The file consists of one or more records separated by one or more
    blank lines. Each record contains lines of the form

        <field-name>: <value>

    The field name is case insensitive. Text after the '#' character on a
    line is ignored during parsing; this is used for comments. The
    following <field-names> can be used:

    User-Agent
        The value of this field is the name of the robot the record is
        describing access policy for. If more than one User-Agent field is
        present, the record describes an identical access policy for more
        than one robot. At least one field needs to be present per record.
        If the value is '*', the record describes the default access
        policy for any robot that has not matched any of the other
        records.

    Disallow
        The value of this field specifies a partial URL that is not to be
        visited. This can be a full path or a partial path; any URL that
        starts with this value will not be retrieved.

ROBOTS.TXT EXAMPLES
    The following example "/robots.txt" file specifies that no robots
    should visit any URL starting with "/cyberworld/map/" or "/tmp/":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space
        Disallow: /tmp/            # these will soon disappear

    This example "/robots.txt" file specifies that no robots should visit
    any URL starting with "/cyberworld/map/", except the robot called
    "cybermapper":

        User-agent: *
        Disallow: /cyberworld/map/ # This is an infinite virtual URL space

        # Cybermapper knows where to go.
        User-agent: cybermapper
        Disallow:

    This example indicates that no robots should visit this site further:

        # go away
        User-agent: *
        Disallow: /
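For a quick look at a site's rules without the Perl module, the same file can be fetched and filtered from the shell (a minimal sketch; curl is assumed to be installed, and the URL is the man page's placeholder):

Code:
# Fetch a robots.txt and show only the fields a crawler cares about.
curl -s http://some.place/robots.txt | grep -iE '^(User-agent|Disallow):'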
SEE ALSO
    LWP::RobotUA, WWW::RobotRules::AnyDBM_File

libwww-perl-5.65                    2001-04-20                    WWW::RobotRules(3)