02-03-2011
Apache Virtual URL
Hi All,
I'm facing a problem with URLs that don't have a file structure under DocumentRoot.
A URL like
http://mydomain.com/applicationrew/Structure1/Structure2/some?parameter=key&parameter1=key1
should be rewritten to something else.
Now I have defined a Location like:
<Location ~ "/applicationrew/(.*)">
Order deny,allow
Allow from all
</Location>
But every request ends in a 404 Not Found. When I create a directory under DocumentRoot (applicationrew), Apache complains about the next part of the URL (Structure1).
Is there a way to stop Apache from mapping the URL onto the file system? I thought Location would do that job.
Regards
wuschelz
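For what it's worth, <Location> only attaches configuration (here, access control) to a URL space; it does not stop the core from mapping the URL onto the file system, which is why the requests still 404. One common approach is an internal rewrite to a real entry point. A minimal sketch, assuming mod_rewrite is enabled and a hypothetical /handler.php script dispatches on the path (both the script name and the path= parameter are placeholders):

```apache
# Hand every /applicationrew/... request to one real script,
# carrying the virtual path and the original query string along.
RewriteEngine On
RewriteRule ^/applicationrew/(.*)$ /handler.php?path=$1 [QSA,L]
```

With [QSA], the original parameter=key&parameter1=key1 query string is appended to the rewritten one.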
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hello, how do I hide the full URL path in the Apache web server? E.g. for www.example.org/www/pub/index.html, the address shown in the browser should be only www.example.org.
Thank You. (2 Replies)
Discussion started by: blesets
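A sketch of one way to do this, assuming mod_rewrite is available: an internal rewrite (no [R] flag) serves the longer path while the browser address bar keeps the short URL:

```apache
# Serve /www/pub/index.html for the site root without redirecting,
# so the browser still shows only www.example.org.
RewriteEngine On
RewriteRule ^/$ /www/pub/index.html [L]
```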
2. UNIX for Advanced & Expert Users
Hi all,
How can I enable encoding of special characters present in a URL?
e.g.
If the URL is
http://127.0.0.1/test.cgi?param1=test & test co
it should be encoded to
http://127.0.0.1/test.cgi?param1=test%20%26%20test%20co
Thanks and Regards,
uttam hoode (3 Replies)
Discussion started by: uttamhoode
3. Cybersecurity
Hello.
I have a scenario where a Client sends a request to Server1.
Server1 sends the request on to Server2.
The requests are XMLHttpRequest calls - I need to get data (XML) from Server2 back to the client.
Trying to use APACHE proxy...
Anyone can help?
What to download / configure / ...?
Thank you for your help. (1 Reply)
Discussion started by: ampo
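A minimal reverse-proxy sketch for this setup, assuming mod_proxy and mod_proxy_http are loaded on Server1 (the /api path and the server2.example.com host are placeholders):

```apache
# Forward matching requests from Server1 to Server2 and fix up
# redirect headers in the responses on the way back.
ProxyPass        /api http://server2.example.com/api
ProxyPassReverse /api http://server2.example.com/api
```

Routing the XMLHttpRequest through Server1 this way also keeps it same-origin from the client's point of view.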
4. Web Development
Hello.
I have a scenario where a Client sends a request to Server1.
Server1 sends the request on to Server2.
The requests are XMLHttpRequest calls - I need to get data (XML) from Server2 back to the client.
Trying to use APACHE proxy...
Anyone can help?
What to download / configure / ...?
Thank you for your... (2 Replies)
Discussion started by: ampo
5. Web Development
I'd like to translate a friendly url such as:
http://www.xxxyyyzzz.com/page/12345678/
to:
http://www.xxxyyyzzz.com/page/12/34/56/78/
Seems simple enough, but I cannot figure out how. Has anyone done this before? (2 Replies)
Discussion started by: markericksen
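One way to express that translation, as a sketch assuming mod_rewrite in server context and an id of exactly eight digits:

```apache
# Split an eight-digit id into four two-digit path segments:
# /page/12345678/ -> /page/12/34/56/78/
RewriteEngine On
RewriteRule ^/page/(\d{2})(\d{2})(\d{2})(\d{2})/$ /page/$1/$2/$3/$4/ [L]
```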
6. Shell Programming and Scripting
I need help with a redirect in an Apache conf file.
I want to redirect everything to www.example.com/home.
For example, if I type a URL like
www.example.com/home/text1
I need that redirected to www.example.com/home (0 Replies)
Discussion started by: shehzad_m
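A sketch of one way to do this with mod_alias (RedirectMatch uses PCRE, so a negative lookahead can keep /home itself from redirecting in a loop):

```apache
# Send every path that is not already under /home to /home.
RedirectMatch ^/(?!home)(.*)$ http://www.example.com/home
```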
7. Web Development
Hello,
I have a situation where I am trying to use Apache's RedirectMatch directive to redirect all users to an HTTPS URL, except a single (Linux) user accessing their own webspace. I have found a piece of regular expression code that negates the username:
^((?!andy).)*$
but when I try using it... (0 Replies)
Discussion started by: LostInTheWoods
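An alternative sketch with mod_rewrite, which avoids the awkward lookahead; it assumes the user's webspace lives under /~andy/ (adjust to the actual URL layout):

```apache
# Force HTTPS for everyone except requests into /~andy/.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/~andy/
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
```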
8. UNIX for Advanced & Expert Users
I need help in apache url redirection:
I have added the below command in httpd.conf and it is working fine.
Redirect http://xyz.com/site/home http://abc.com/site/home
Can we set a rule such that http://xyz.com/site/* -> http://abc.com/site/* is applied
For... (0 Replies)
Discussion started by: raghur77
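No wildcard is needed here: mod_alias Redirect matches by URL-path prefix and appends the remainder of the path automatically. A sketch:

```apache
# /site/anything on this host is forwarded to
# http://abc.com/site/anything; the suffix is carried over.
Redirect /site http://abc.com/site
```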
9. Web Development
I need help in apache url redirection:
I have added the below command in httpd.conf and it is working fine.
Redirect http://xyz.com/site/home http://abc.com/site/home
Can we set a rule such that http://xyz.com/site/* -> http://abc.com/site/* is applied
For... (0 Replies)
Discussion started by: raghur77
10. Red Hat
Hi Folks,
I am running a website that needs tighter security against hacking. In my URL, when I click on certain links, the link contains words like the following:
/control_panel
/controlpanel
/admin
/cms
I need to block those words in Apache... (1 Reply)
Discussion started by: gsiva
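A sketch using LocationMatch, with the Apache 2.2 Order/Deny syntax that matches the configuration style used elsewhere in this thread (the word list is taken from the post; extend as needed):

```apache
# Refuse any URL whose path contains one of these segments.
<LocationMatch "/(control_panel|controlpanel|admin|cms)">
    Order allow,deny
    Deny from all
</LocationMatch>
```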
LEARN ABOUT DEBIAN
HTML::LinkExtor(3pm) User Contributed Perl Documentation HTML::LinkExtor(3pm)
NAME
HTML::LinkExtor - Extract links from an HTML document
SYNOPSIS
require HTML::LinkExtor;
$p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
sub cb {
my($tag, %links) = @_;
print "$tag @{[%links]}\n";
}
$p->parse_file("index.html");
DESCRIPTION
HTML::LinkExtor is an HTML parser that extracts links from an HTML document. The HTML::LinkExtor is a subclass of HTML::Parser. This means
that the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.
$p = HTML::LinkExtor->new
$p = HTML::LinkExtor->new( $callback )
$p = HTML::LinkExtor->new( $callback, $base )
The constructor takes two optional arguments. The first is a reference to a callback routine. It will be called as links are found. If
a callback is not provided, then links are just accumulated internally and can be retrieved by calling the $p->links() method.
The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module installed if you provide
$base.
The callback is called with the lowercase tag name as first argument, and then all link attributes as separate key/value pairs. All
non-link attributes are removed.
$p->links
Returns a list of all links found in the document. The returned values will be anonymous arrays with the following elements:
[$tag, $attr => $url1, $attr2 => $url2,...]
The $p->links method will also truncate the internal link list. This means that if the method is called twice without any parsing
between them the second call will return an empty list.
Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor was created.
EXAMPLE
This is an example showing how you can extract links from a document received using LWP:
use LWP::UserAgent;
use HTML::LinkExtor;
use URI::URL;
$url = "http://www.perl.org/"; # for instance
$ua = LWP::UserAgent->new;
# Set up a callback that collects image links
my @imgs = ();
sub callback {
my($tag, %attr) = @_;
return if $tag ne 'img'; # we only look closer at <img ...>
push(@imgs, values %attr);
}
# Make the parser. Unfortunately, we don't know the base yet
# (it might be different from $url)
$p = HTML::LinkExtor->new(\&callback);
# Request document and parse it as it arrives
$res = $ua->request(HTTP::Request->new(GET => $url),
sub {$p->parse($_[0])});
# Expand all image URLs to absolute ones
my $base = $res->base;
@imgs = map { $_ = url($_, $base)->abs; } @imgs;
# Print them out
print join("\n", @imgs), "\n";
SEE ALSO
HTML::Parser, HTML::Tagset, LWP, URI::URL
COPYRIGHT
Copyright 1996-2001 Gisle Aas.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl v5.14.2 2011-10-15 HTML::LinkExtor(3pm)