APACHE rewrite / redirect URL
Post 302246176 by radoulov on Monday 13th of October 2008 04:27:33 AM
No duplicate or cross posting! Please read the rules!
Continue here.
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Apache Rewrite help!

I am trying to write a RewriteRule on Apache 1.3.26 to fetch a user's web page from another server. For example, if a user requests the page at www.somedomain.com/~username, the content should be served from www.testdomain.com/~username without a redirect, so the user is not aware of any redirection... (1 Reply)
Discussion started by: hassan2
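A minimal sketch of one way to do this, assuming mod_rewrite and mod_proxy are both loaded on the Apache 1.3 frontend; the [P] flag proxies the request server-side instead of sending a redirect, so the visitor's address bar never changes:

    RewriteEngine On
    # Proxy ~username requests to the other host transparently;
    # $1 is the username, $2 any trailing path.
    RewriteRule ^/~([^/]+)(/.*)?$ http://www.testdomain.com/~$1$2 [P,L]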

2. Web Development

APACHE rewrite / redirect URL

Hello. I have a scenario where a client sends a request to Server1, and Server1 sends a request to Server2. The requests are XMLHttpRequests; I need to get data (XML) from Server2 back to the client. I am trying to use an Apache proxy... Can anyone help? What do I need to download / configure / ...? Thank you for your... (2 Replies)
Discussion started by: ampo
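A minimal sketch of a reverse proxy on Server1, assuming mod_proxy and mod_proxy_http are available; /data and server2.example.com are hypothetical placeholders for the real path and Server2 hostname:

    # Forward matching requests to Server2 and rewrite response
    # headers so the client only ever talks to Server1.
    ProxyRequests Off
    ProxyPass        /data http://server2.example.com/data
    ProxyPassReverse /data http://server2.example.com/data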

3. Web Development

Apache rewrite rules.

Hi, I am new to Apache but I have the following requirement: if the URL is http://images/data1/templates/ it should redirect to http://172.20.224.23/templates/; if the URL doesn't contain "data1/templates" (meaning http://images/) it should redirect to http://images:8080/. I tried as below ... (3 Replies)
Discussion started by: sambadamerla
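A sketch of the two rules described above, reusing the hostnames and IP from the post; rule order matters here, since the first matching rule wins with [L]:

    RewriteEngine On
    # URLs under /data1/templates/ go to the other host.
    RewriteRule ^/data1/templates/(.*)$ http://172.20.224.23/templates/$1 [R,L]
    # Everything else is sent to port 8080 on the same host.
    RewriteCond %{REQUEST_URI} !^/data1/templates/
    RewriteRule ^/(.*)$ http://images:8080/$1 [R,L]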

4. Web Development

Regex to rewrite URL to another URL based on HTTP_HOST?

I am trying to find a way to test some code, but I need to rewrite a specific URL only from a specific HTTP_HOST. The call goes out to http://SUB.DOMAIN.COM/showAssignment/7bde10b45efdd7a97629ef2fe01f7303/jsmodule/Nevow.Athena. The ID in the middle is always random due to the cookie. I... (5 Replies)
Discussion started by: EXT3FSCK
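One hedged way to match this, assuming the random ID is always a 32-character hex string: restrict the rule to the one host with a RewriteCond and wildcard the ID in the pattern. The /test target below is hypothetical:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^sub\.domain\.com$ [NC]
    # [0-9a-f]{32} stands in for the random cookie-derived ID.
    RewriteRule ^/showAssignment/[0-9a-f]{32}/jsmodule/Nevow\.Athena$ /test/jsmodule/Nevow.Athena [PT,L]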

5. Red Hat

Need some help on tomcat URL rewrite or mod_jk

I am trying to remove the context name from the URL of my server. Current URL - http://www.domainname.com/MyApp/ What I need is to make it available at - http://www.domainname.com/ I have already tried a couple of things, like the below - RewriteEngine On RewriteCond... (0 Replies)
Discussion started by: rockf1bull
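A common recipe for this, assuming Apache fronts Tomcat through mod_jk with a worker named worker1 (the worker name is illustrative, not from the post); requests arriving at / are internally mapped onto the /MyApp context:

    # Hand the application context to Tomcat via mod_jk.
    JkMount /MyApp/* worker1
    RewriteEngine On
    # Internally prepend the context so users can browse at /.
    RewriteCond %{REQUEST_URI} !^/MyApp
    RewriteRule ^/(.*)$ /MyApp/$1 [PT,L]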

6. UNIX for Advanced & Expert Users

Apache rewrite rule help needed

Hi All, I want to redirect from http://localhost/abc/xyz/def?cc=dk&lc=da to http://localhost/abc/mnc/pdf?cc=dk&lc=da. Please suggest a rewrite rule. (0 Replies)
Discussion started by: jagnikam
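A minimal sketch; mod_rewrite passes the original query string through unchanged unless the substitution introduces its own, so only the path needs rewriting:

    RewriteEngine On
    RewriteRule ^/abc/xyz/def$ /abc/mnc/pdf [R,L]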

7. Web Development

Append query string via URL rewrite in apache

Hi, I Googled around but couldn't find anything similar. I'm looking to do the following in Apache: if a user comes in via the following URL, http://www.example.com/some/thing.cid?wholebunch_of_stuff_here, keep the URL exactly as is BUT add &something at the very end of it. Thanks in... (1 Reply)
Discussion started by: kmaq7621
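A hedged sketch using the [QSA] flag, which merges the substitution's query string with the original one; note that QSA places the original parameters after the new one, which is functionally equivalent for most applications. The RewriteCond guards against appending the parameter twice:

    RewriteEngine On
    # Skip requests that already carry the parameter.
    RewriteCond %{QUERY_STRING} !(^|&)something(=|&|$)
    RewriteRule ^/some/thing\.cid$ /some/thing.cid?something [QSA,L]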

8. Web Development

Mod_rewrite - URL rewrite based upon HTTP_REFERER

Hello, I have added the following RewriteCond and RewriteRule, but they do not work. RewriteCond %{HTTP_REFERER} ^http://192\.168\.1\.150/categories/.*$ RewriteRule ^(.*)$ http://www.blahblah.com/ When I hit the URL http://192.168.1.150/categories/881-Goes?page=7 in my browser, it... (2 Replies)
Discussion started by: ashokvpp
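A likely explanation: HTTP_REFERER is only populated when the request follows a link from another page, so typing the URL straight into the browser leaves it empty and the condition never matches. A sketch of the rule pair as it would fire for a link-driven request:

    RewriteEngine On
    # Matches only when the previous page was under /categories/.
    RewriteCond %{HTTP_REFERER} ^http://192\.168\.1\.150/categories/ [NC]
    RewriteRule ^ http://www.blahblah.com/ [R,L]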

9. Web Development

Apache rewrite/redirect parameters explanation

Hi, what is the meaning of the code below? Please explain in detail. RewriteRule '^/(.*)$1$2/?' (1 Reply)
Discussion started by: raghur77
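The quoted fragment looks truncated: $1 and $2 are substitution backreferences and belong in a rule's second argument, not in its pattern, so the line as shown is not a valid rule. A complete, hypothetical rule of the shape it seems to describe:

    # Capture two path segments and pass them as query parameters;
    # $1 and $2 refer back to the parenthesised groups.
    RewriteRule ^/([^/]+)/([^/]+)/?$ /index.php?a=$1&b=$2 [L]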

10. Web Development

Redirect URL containing #!

I have a RewriteRule that redirects a page without hindrance. I am rewriting mydomain.com/best to mydomain.com/#!/ using RewriteRule ^\/best\/? /#!/ Now I want to rewrite mydomain.com/#!/best to (0 Replies)
Discussion started by: Junaid Subhani
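Worth noting: the fragment part of a URL (everything from # on, including #!) is never sent to the server, so mod_rewrite cannot match mydomain.com/#!/best at all; that routing has to happen in client-side JavaScript. The server can only emit a fragment in a redirect target, using [NE] so the # is not escaped:

    RewriteEngine On
    # NE keeps the # literal instead of encoding it as %23.
    RewriteRule ^/best/?$ /#!/ [R,NE,L]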
WWW::RobotRules(3)					User Contributed Perl Documentation					WWW::RobotRules(3)

NAME
WWW::RobotRules - Parse robots.txt files

SYNOPSIS
    require WWW::RobotRules;
    my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

    use LWP::Simple qw(get);

    $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    $url = "http://some.other.place/robots.txt";
    $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt);

    # Now we are able to check if a URL is valid for those servers that
    # we have obtained and parsed "robots.txt" files for.
    if ($robotsrules->allowed($url)) {
        $c = get $url;
        ...
    }

DESCRIPTION
This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described in <http://info.webcrawler.com/mak/projects/robots/norobots.html>. Webmasters can use the /robots.txt file to disallow conforming robots access to parts of their web site.

The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can parse multiple /robots.txt files.

The following methods are provided:

$rules = WWW::RobotRules->new($robot_name)
    This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.

$rules->parse($robot_txt_url, $content, $fresh_until)
    The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.

$rules->allowed($uri)
    Returns TRUE if this robot is allowed to retrieve this URL.

$rules->agent([$name])
    Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.

ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of <http://info.webcrawler.com/mak/projects/robots/norobots.html>):

The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form

    <field-name>: <value>

The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The following <field-names> can be used:

User-Agent
    The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is present, the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.

Disallow
    The value of this field specifies a partial URL that is not to be visited. This can be a full path or a partial path; any URL that starts with this value will not be retrieved.

ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":

    User-agent: *
    Disallow: /cyberworld/map/ # This is an infinite virtual URL space
    Disallow: /tmp/            # these will soon disappear

This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called "cybermapper":

    User-agent: *
    Disallow: /cyberworld/map/ # This is an infinite virtual URL space

    # Cybermapper knows where to go.
    User-agent: cybermapper
    Disallow:

This example indicates that no robots should visit this site further:

    # go away
    User-agent: *
    Disallow: /

SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File

libwww-perl-5.65					2001-04-20					WWW::RobotRules(3)