Page Not Found error while parsing url: Post 302881757 by Neo on Tuesday 31st of December 2013 07:29:47 PM
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

How to get the page size (of a url) using wget

Hi, I am trying to get the page size of a URL (e.g., www.example.com) using the wget command. Any thoughts on which parameters I need to pass to wget to get the size alone? Regards, Raj (1 Reply)
Discussion started by: rajbal
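A minimal sketch of one way to do this: wget's --spider and --server-response options print the HTTP headers without downloading the page, and the Content-Length can be picked out of them. The URL is a placeholder, and servers that use chunked encoding send no Content-Length, hence the download fallback:

    #!/bin/sh
    # Report the size of a page in bytes. tail -1 keeps the value from
    # the final response if redirects occurred.
    url="http://www.example.com/"
    size=$(wget --spider --server-response "$url" 2>&1 |
           awk '/Content-Length/ {print $2}' | tail -1)
    if [ -z "$size" ]; then
        # No Content-Length header (e.g. chunked): download and count.
        size=$(wget -qO- "$url" | wc -c)
    fi
    echo "$url is $size bytes"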

2. Shell Programming and Scripting

parsing url string

Hi, I have a URL like this: http://resource.ibab.ac.in/cgi-bin/pubmed_abstract/y.cgi?pmid=+1.10529272+&pmid=+3.8379586 I want to parse the URL string to get the values 1.10529272 and 3.8379586. How do I do that? (5 Replies)
Discussion started by: vanitham
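One way to pull the pmid values out with standard tools is to split the query string on '?' and '&' and then strip the key and the '+' padding; a sketch reusing the URL from the post:

    #!/bin/sh
    # Extract every pmid=... value from the query string.
    url='http://resource.ibab.ac.in/cgi-bin/pubmed_abstract/y.cgi?pmid=+1.10529272+&pmid=+3.8379586'
    echo "$url" |
        tr '?&' '\n\n' |          # one key=value pair per line
        sed -n 's/^pmid=//p' |    # keep only the pmid values
        tr -d '+'                 # drop the '+' (encoded spaces)
    # Prints:
    # 1.10529272
    # 3.8379586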

3. Shell Programming and Scripting

How to extract url from html page?

for example, I have an HTML file containing <a href="http://awebsite" id="awebsite" class="first">website</a>, and sometimes a line contains more than one link, for example <a href="http://awebsite" id="awebsite" class="first">website</a><a href="http://bwebsite" id="bwebsite"... (36 Replies)
Discussion started by: 14th
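A quick sketch for the simple case: grep -o puts each match on its own line, so several anchors on one source line are handled naturally. page.html is a placeholder, and for anything beyond quick extraction a real parser such as HTML::LinkExtor (documented below) is more reliable than regular expressions:

    #!/bin/sh
    # Print the target of every href attribute, one per line.
    grep -o 'href="[^"]*"' page.html |
        sed 's/^href="//; s/"$//'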

4. UNIX for Dummies Questions & Answers

Awk: print all URL addresses between iframe tags without repeating an already printed URL

Here is what I have so far: find . -name "*php*" -or -name "*htm*" | xargs grep -i iframe | awk -F'"' '/<iframe*/{gsub(/.\*iframe>/,"\"");print $2}' Here is example content of a PHP or HTM (HTML) file: <iframe src="http://ADDRESS_1/?click=5BBB08\" width=1 height=1... (18 Replies)
Discussion started by: striker4o
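For the no-repeats requirement, the usual awk idiom is a 'seen' array. A sketch building on the command from the post, with grep -o isolating each iframe match so multiple matches per line are handled:

    #!/bin/sh
    # Print each distinct iframe src exactly once.
    find . -name "*php*" -o -name "*htm*" |
        xargs grep -ho 'iframe src="[^"]*"' |
        awk -F'"' '!seen[$2]++ {print $2}'   # print first occurrence only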

5. UNIX for Dummies Questions & Answers

man page in MANPATH not found

Dear UNIX experts, the 'man' command on my system isn't finding a man page that is in a MANPATH directory, even when I specify the path directly: 12:56pm ilya@node1390 /idi/sabetilab/ilya/usr/share/man $ man -M . xemacs No manual entry for xemacs 12:56pm ilya@node1390... (4 Replies)
Discussion started by: notestaff
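The usual cause here is layout: man expects section subdirectories (man1, man2, ...) under each MANPATH entry, with pages named page.section, so a file sitting directly in the directory will not be found. A hedged checklist, with the paths taken from the post:

    # The page should live in a section subdirectory, e.g.:
    ls /idi/sabetilab/ilya/usr/share/man/man1/xemacs.1*
    # Then either point man at the tree explicitly...
    man -M /idi/sabetilab/ilya/usr/share/man xemacs
    # ...or extend MANPATH for the session:
    MANPATH=/idi/sabetilab/ilya/usr/share/man:$MANPATH man xemacs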

6. Web Development

CGI not working with httpd server on busybox 1.15.0 on ltib Linux 2.6.34 (404 page not found)

I have an industrial ARM Linux board running Linux 2.6.34 with BusyBox v1.15.0. The httpd.conf is located in /etc/ and contains: H:/root/web In the www directory I also have a 'cgi-bin' folder with chmod 777, and in that folder a file called 'testcgi'. Now I start the server with... (1 Reply)
Discussion started by: Roboserg
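A sketch of the layout BusyBox httpd expects, assuming the home directory H:/root/web from the post: CGI programs must sit in a cgi-bin/ directory directly under that home, be executable, and emit a header block followed by a blank line before any output. A missing shebang or missing Content-Type header is a common cause of CGI failures:

    # /root/web/cgi-bin/testcgi -- a minimal test CGI
    cat > /root/web/cgi-bin/testcgi <<'EOF'
    #!/bin/sh
    echo "Content-Type: text/plain"
    echo ""
    echo "hello from busybox CGI"
    EOF
    chmod +x /root/web/cgi-bin/testcgi

    # Run the server in the foreground while debugging:
    httpd -f -c /etc/httpd.conf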

7. Shell Programming and Scripting

API Based URL Parsing

Hi friends, we have a situation where we need to call an API-based URL that returns an XML response, and I need to convert that XML response into a delimited flat file. url.txt will contain http://xyz.com/beta/xxx.xml?isDRR=True&city=London&province=England... (1 Reply)
Discussion started by: rakesh5300
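One possible shape for this, hedged heavily since the response schema isn't shown: fetch each URL from url.txt with curl and flatten the XML with xmlstarlet (which must be installed; the //record, city, and province element names are placeholders to be replaced with the real ones):

    #!/bin/sh
    # Convert XML API responses to a pipe-delimited flat file.
    while read -r url; do
        curl -s "$url" |
            xmlstarlet sel -t -m '//record' \
                -v 'city' -o '|' -v 'province' -n
    done < url.txt > output.dat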

8. Shell Programming and Scripting

Use curl to send a static xml file using url encoding to a web page using post

Hi, I am trying to use curl to send a static XML file, URL-encoded, to a web page using POST. This has to go through a particular port on our firewall as well. This is my first exposure to curl and I am not having much success, so any help you can supply, or a pointer in the right direction, would be... (1 Reply)
Discussion started by: Paul Walker
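A hedged starting point: curl's --data-urlencode option can read a file and URL-encode its contents for a POST, and -x routes the request through a proxy if that is what "a particular port on our firewall" means here. The field name xml, the file name, the proxy, and the target URL are all placeholders:

    #!/bin/sh
    # POST a static XML file, URL-encoded, as form field 'xml'.
    curl --data-urlencode "xml@payload.xml" \
         -x firewall.example.com:8080 \
         "http://target.example.com/receiver"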

9. Shell Programming and Scripting

Reading URL using Mechanize and dump all the contents of the URL to a file

Hello, I am very new to Perl, please help me here! I need help reading a URL from the command line using WWW::Mechanize and getting all the contents of the URL into a file. Below is the script I have written so far: #!/usr/bin/perl use LWP::UserAgent; use... (2 Replies)
Discussion started by: scott_cog
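The thread itself is about Perl's WWW::Mechanize, but the underlying step, fetch a URL given on the command line and dump its contents to a file, can be sanity-checked from the shell while the Perl script is being debugged (a deliberate swap of tool, not the poster's method):

    #!/bin/sh
    # Usage: ./dump.sh http://example.com/page
    curl -sL -o dump.html "$1"   # -L follows redirects; save to dump.html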

10. Web Development

New "Page Not Found" (404) Page

Made some changes to the forum, so when a page is not found and generates a 404 error, the site redirects to the "Today's Posts" page and adds a "Not Found" message: <?php header('HTTP/1.0 404 Not Found', true, 404); header("Location: https://www.unix.com/search.php?do=getdaily&redirect=404");... (0 Replies)
Discussion started by: Neo
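The change can be checked from the command line: a request for a missing page should show the status line and a Location header pointing at the getdaily search page. The path below is a made-up example of a nonexistent page:

    curl -sI "https://www.unix.com/no-such-page" | head -5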
HTML::LinkExtor(3)					User Contributed Perl Documentation					HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
       require HTML::LinkExtor;
       $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
       sub cb {
           my($tag, %links) = @_;
           print "$tag @{[%links]}\n";
       }
       $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML
       document.  The HTML::LinkExtor is a subclass of HTML::Parser.  This
       means that the document should be given to the parser by calling the
       $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new
       $p = HTML::LinkExtor->new( $callback )
       $p = HTML::LinkExtor->new( $callback, $base )
           The constructor takes two optional arguments.  The first is a
           reference to a callback routine.  It will be called as links are
           found.  If a callback is not provided, then links are just
           accumulated internally and can be retrieved by calling the
           $p->links() method.

           The $base argument is an optional base URL used to absolutize all
           URLs found.  You need to have the URI module installed if you
           provide $base.

           The callback is called with the lowercase tag name as first
           argument, and then all link attributes as separate key/value
           pairs.  All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document.  The returned
           values will be anonymous arrays with the following elements:

               [$tag, $attr => $url1, $attr2 => $url2,...]

           The $p->links method will also truncate the internal link list.
           This means that if the method is called twice without any parsing
           between them the second call will return an empty list.

           Also note that $p->links will always be empty if a callback
           routine was provided when the HTML::LinkExtor was created.

EXAMPLE
       This is an example showing how you can extract links from a document
       received using LWP:

           use LWP::UserAgent;
           use HTML::LinkExtor;
           use URI::URL;

           $url = "http://www.perl.org/";  # for instance
           $ua = LWP::UserAgent->new;

           # Set up a callback that collect image links
           my @imgs = ();
           sub callback {
               my($tag, %attr) = @_;
               return if $tag ne 'img';  # we only look closer at <img ...>
               push(@imgs, values %attr);
           }

           # Make the parser.  Unfortunately, we don't know the base yet
           # (it might be different from $url)
           $p = HTML::LinkExtor->new(\&callback);

           # Request document and parse it as it arrives
           $res = $ua->request(HTTP::Request->new(GET => $url),
                               sub {$p->parse($_[0])});

           # Expand all image URLs to absolute ones
           my $base = $res->base;
           @imgs = map { $_ = url($_, $base)->abs; } @imgs;

           # Print them out
           print join("\n", @imgs), "\n";

SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify
       it under the same terms as Perl itself.

perl v5.18.2                      2013-03-25               HTML::LinkExtor(3)