Learning scrapers, webcrawlers, search engines and CURL
Post 303019102 by Neo, Friday, June 22, 2018, 11:16 PM
Quote:
Originally Posted by TBotNik
  • Text-only vs regular browser: which is best?
  • wget vs php fileopen vs CURL: Which is best?
  • HTML tag find/parse: Are there libraries that effectively do this?
  • HTML tag find/parse: Is REGEX the best way to parse these? Where are examples?
  • Checking for the new meta-tags of:
I think you are better off getting the web page content with a PHP script and parsing the files with REGEX.

If you Google around, I am sure you can find many sample PHP scripts that do most of what you want. This is very old technology and there is no need to reinvent the wheel parsing HTML data.
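
As a rough illustration of that approach, here is a minimal PHP sketch (the URL, user-agent string, and regex patterns are placeholders, not anything specific to this thread): it fetches a page with the cURL extension and then uses preg_match_all() to pull out the title, meta tags, and link targets.

    <?php
    // Minimal sketch only: fetch a page with cURL, then parse a few tags with regex.
    // The URL and user-agent below are placeholders -- substitute your own.
    $url = 'https://example.com/';

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
    curl_setopt($ch, CURLOPT_USERAGENT, 'MyCrawler/0.1');
    $html = curl_exec($ch);
    if ($html === false) {
        die('cURL error: ' . curl_error($ch) . "\n");
    }
    curl_close($ch);

    // <title> text
    if (preg_match('/<title[^>]*>(.*?)<\/title>/is', $html, $m)) {
        echo "Title: {$m[1]}\n";
    }

    // name/content pairs from <meta> tags
    preg_match_all('/<meta\s+name=["\']([^"\']+)["\']\s+content=["\']([^"\']*)["\']/i',
                   $html, $metas, PREG_SET_ORDER);
    foreach ($metas as $meta) {
        echo "Meta {$meta[1]}: {$meta[2]}\n";
    }

    // href targets from <a> tags
    preg_match_all('/<a\s[^>]*href=["\']([^"\']+)["\']/i', $html, $links);
    foreach ($links[1] as $href) {
        echo "Link: $href\n";
    }

Keep in mind that regexes like these only cope with reasonably well-formed tags; if the pages get messy, a real HTML parser (PHP's DOMDocument, or a module such as the HTML::LinkExtor shown below) is usually the safer bet.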
 

3 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

I don't want to know any search engines

I just want to know where I can download it on this website plz (1 Reply)
Discussion started by: memattmyself

2. UNIX for Dummies Questions & Answers

Using cURL to save online search results

Hi, I'm attacking this from ignorance because I am not sure how to even ask the question. Here is the mission: I have a list of about 4,000 telephone numbers for past customers. I need to determine how many of these customers are still in business. Obviously, I could call all the numbers.... (0 Replies)
Discussion started by: jccbin

3. Shell Programming and Scripting

Checking status of engines using C-shell

I am relatively new to scripting. I am trying to develop a script that will 1. Source an executable file as an argument to the script that sets up the environment 2. Run a command "stat" that gives the status of 5 Engines running on the system 3. Check the status of the 5 Engines as either... (0 Replies)
Discussion started by: paslas
HTML::LinkExtor(3)              User Contributed Perl Documentation              HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
        require HTML::LinkExtor;
        $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
        sub cb {
            my($tag, %links) = @_;
            print "$tag @{[%links]}\n";
        }
        $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML document. HTML::LinkExtor is a subclass of
       HTML::Parser. This means that the document should be given to the parser by calling the $p->parse() or
       $p->parse_file() methods.

       $p = HTML::LinkExtor->new
       $p = HTML::LinkExtor->new( $callback )
       $p = HTML::LinkExtor->new( $callback, $base )
           The constructor takes two optional arguments. The first is a reference to a callback routine. It will be
           called as links are found. If a callback is not provided, then links are just accumulated internally and
           can be retrieved by calling the $p->links() method.

           The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI
           module installed if you provide $base.

           The callback is called with the lowercase tag name as first argument, and then all link attributes as
           separate key/value pairs. All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document. The returned values will be anonymous arrays with the
           following elements:

             [$tag, $attr => $url1, $attr2 => $url2, ...]

           The $p->links method will also truncate the internal link list. This means that if the method is called
           twice without any parsing between them, the second call will return an empty list.

           Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor
           was created.

EXAMPLE
       This is an example showing how you can extract links from a document received using LWP:

        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI::URL;

        $url = "http://www.perl.org/";  # for instance
        $ua = LWP::UserAgent->new;

        # Set up a callback that collects image links
        my @imgs = ();
        sub callback {
            my($tag, %attr) = @_;
            return if $tag ne 'img';  # we only look closer at <img ...>
            push(@imgs, values %attr);
        }

        # Make the parser. Unfortunately, we don't know the base yet
        # (it might be different from $url)
        $p = HTML::LinkExtor->new(\&callback);

        # Request the document and parse it as it arrives
        $res = $ua->request(HTTP::Request->new(GET => $url),
                            sub { $p->parse($_[0]) });

        # Expand all image URLs to absolute ones
        my $base = $res->base;
        @imgs = map { $_ = url($_, $base)->abs; } @imgs;

        # Print them out
        print join("\n", @imgs), "\n";

SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

perl v5.18.2                                   2013-03-25                                   HTML::LinkExtor(3)