Special Forums > IP Networking > The requested URL was rejected. Please consult with your administrator

Post 303000826 by Corona688 on Thursday 20th of July 2017 12:10:26 PM
Quote:
Originally Posted by drl
Hi.

Yes, as Corona688 wrote, links, although different from links2, worked for me as well. I usually chose links2 because it seemed to render HTML tables in text very well.
Whether it's links or links2 depends on your distribution; that actually changed for me after an upgrade.
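Either way, the invocation is the same; a minimal sketch, assuming the binary on your system is named links2 (substitute links if that is what your distribution ships) and using example.com as a stand-in URL:

    # Render the page, tables included, as formatted plain text on stdout
    links2 -dump http://example.com/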
 

HTML::LinkExtor(3)          User Contributed Perl Documentation          HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
       require HTML::LinkExtor;
       $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
       sub cb {
           my($tag, %links) = @_;
           print "$tag @{[%links]}\n";
       }
       $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML
       document.  HTML::LinkExtor is a subclass of HTML::Parser.  This means
       that the document should be given to the parser by calling the
       $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new
       $p = HTML::LinkExtor->new( $callback )
       $p = HTML::LinkExtor->new( $callback, $base )
           The constructor takes two optional arguments.  The first is a
           reference to a callback routine.  It will be called as links are
           found.  If a callback is not provided, then links are just
           accumulated internally and can be retrieved by calling the
           $p->links() method.

           The $base argument is an optional base URL used to absolutize all
           URLs found.  You need to have the URI module installed if you
           provide $base.

           The callback is called with the lowercase tag name as first
           argument, and then all link attributes as separate key/value
           pairs.  All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document.  The returned
           values will be anonymous arrays with the following elements:

               [$tag, $attr => $url1, $attr2 => $url2, ...]

           The $p->links method will also truncate the internal link list.
           This means that if the method is called twice without any parsing
           between the calls, the second call will return an empty list.
           Also note that $p->links will always be empty if a callback
           routine was provided when the HTML::LinkExtor was created.
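       A minimal sketch of the callback-free form described above, assuming a
       hypothetical local file page.html and example.com as a stand-in base
       URL (neither appears in the original manual):

           use strict;
           use warnings;
           use HTML::LinkExtor;

           # No callback given, so links accumulate internally until
           # $p->links is called (which also empties the internal list).
           my $p = HTML::LinkExtor->new(undef, "http://example.com/");
           $p->parse_file("page.html");    # hypothetical input file

           # Each entry is an array ref: [$tag, $attr => $url, ...]
           for my $link ($p->links) {
               my ($tag, %attrs) = @$link;
               print "$tag: $_\n" for values %attrs;
           }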
EXAMPLE
       This is an example showing how you can extract links from a document
       received using LWP:

           use LWP::UserAgent;
           use HTML::LinkExtor;
           use URI::URL;

           $url = "http://www.perl.org/";  # for instance
           $ua = LWP::UserAgent->new;

           # Set up a callback that collects image links
           my @imgs = ();
           sub callback {
               my($tag, %attr) = @_;
               return if $tag ne 'img';  # we only look closer at <img ...>
               push(@imgs, values %attr);
           }

           # Make the parser.  Unfortunately, we don't know the base yet
           # (it might be different from $url)
           $p = HTML::LinkExtor->new(\&callback);

           # Request document and parse it as it arrives
           $res = $ua->request(HTTP::Request->new(GET => $url),
                               sub { $p->parse($_[0]) });

           # Expand all image URLs to absolute ones
           my $base = $res->base;
           @imgs = map { $_ = url($_, $base)->abs; } @imgs;

           # Print them out
           print join("\n", @imgs), "\n";
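       If streaming the response during the request is not needed, a simpler
       variant fetches the whole document first and lets the constructor's
       $base argument absolutize the links.  This is a sketch, not part of
       the original manual, keeping perl.org as the sample URL:

           use strict;
           use warnings;
           use LWP::UserAgent;
           use HTML::LinkExtor;

           my $ua  = LWP::UserAgent->new;
           my $res = $ua->get("http://www.perl.org/");
           die $res->status_line unless $res->is_success;

           # Pass the response base so every extracted URL comes back absolute.
           my $p = HTML::LinkExtor->new(undef, $res->base);
           $p->parse($res->decoded_content);

           # Keep only <img> links; each entry is [$tag, $attr => $url, ...]
           for my $link ($p->links) {
               my ($tag, %attr) = @$link;
               next if $tag ne 'img';
               print "$_\n" for values %attr;
           }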
SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify
       it under the same terms as Perl itself.

perl v5.18.2                         2013-03-25                  HTML::LinkExtor(3)