HPC Related Links
Posted by Neo in High Performance Computing on Friday, 23 January 2009, 03:00 AM

Our directory of HPC-related links is growing:

Virtualization, Grid and Cloud Computing - Links

Please contribute!

6 More Discussions You Might Find Interesting

1. What is on Your Mind?

Post Your Favorite UNIX/Linux Related RSS Feed Links

Hello, I am planning to revise the RSS News subforum areas here: News, Links, Events and Announcements - The UNIX Forums ... maybe with OS-specific RSS subforums (HP-UX, Solaris, Red Hat, OS X, etc.). Please post your favorite OS-specific RSS (RSS2) link... (0 Replies)
Discussion started by: Neo

2. High Performance Computing

Guides for new HPC admins

In my company, it's fallen on me to serve as the admin of our new HPC cluster, a task that's very new to me. It's very important to me to lay a solid foundation and avoid any unnecessary pitfalls. So, can anyone recommend a succinct guide or list of dos and don'ts for administering an HPC cluster?... (0 Replies)
Discussion started by: DBryan

3. Solaris

Hard Links and Soft (Symbolic) Links

When looking at files in a directory using ls, how can I tell whether I have a hard link or a soft link? (11 Replies)
Discussion started by: Harleyrci
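
For reference: ls -l answers this directly for symlinks (the first character of the mode field is l, and the name points at its target with ->), while a hard link is just another name for the same inode, visible as a link count greater than 1 in the second column of ls -l, or by matching inode numbers in ls -li. A minimal Perl sketch of the same two checks (/tmp/somefile is only a placeholder path):

    use strict;
    use warnings;

    my $path = "/tmp/somefile";        # placeholder path

    if (-l $path) {                    # -l uses lstat: tests the entry itself
        printf "%s is a symlink -> %s\n", $path, readlink($path);
    } else {
        my $nlink = (stat $path)[3];   # field 3 = the hard link count
        print "$path has $nlink name(s) pointing at its inode\n";
    }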

4. AIX

List all the soft links and hard links

Hi, I'm logged in as root on an AIX box. Which command will list all the soft links and hard links present on the server? (2 Replies)
Discussion started by: newtoaixos
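
No single command reports both, but find covers it: find / -type l lists symbolic links, and find / -type f -links +1 lists regular files with more than one hard link (both are standard POSIX find options and work on AIX). The same walk can be sketched in Perl with File::Find; starting at / is only illustrative:

    use strict;
    use warnings;
    use File::Find;

    # Report every symlink, plus every regular file whose inode carries
    # more than one hard link.
    find(sub {
        if (-l $_) {
            print "symlink: $File::Find::name -> ", readlink($_), "\n";
        } elsif (-f _) {                 # reuse the lstat buffer from -l
            my $nlink = (stat _)[3];     # hard link count
            print "hard links ($nlink): $File::Find::name\n" if $nlink > 1;
        }
    }, "/");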

5. Shell Programming and Scripting

Need download-related help... but it's not related to UNIX

Hi All, I am trying to download the zip file "zkManageCustomers.zip" but I don't have access. Can anyone help me download this file? See the link below: http://www.ibm.com/developerworks/opensource/library/wa-aj-open/index.html?ca=drs- Please help me as early as... (1 Reply)
Discussion started by: aish11

6. Homework & Coursework Questions

Class HPC project

My high school started a tech lab where students like myself can take apart computers, build circuit boards, learn to program, and lots more. I got the job of building a cluster with 4 old workstations we have. This is just a trial; if it works well, we can get more workstations. We have one... (3 Replies)
Discussion started by: PC-2011
HTML::LinkExtor(3) - User Contributed Perl Documentation

NAME
    HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
    require HTML::LinkExtor;
    $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
    sub cb {
        my($tag, %links) = @_;
        print "$tag @{[%links]}\n";
    }
    $p->parse_file("index.html");

DESCRIPTION
    HTML::LinkExtor is an HTML parser that extracts links from an HTML
    document. HTML::LinkExtor is a subclass of HTML::Parser. This means
    that the document should be given to the parser by calling the
    $p->parse() or $p->parse_file() methods.

    $p = HTML::LinkExtor->new([$callback[, $base]])
        The constructor takes two optional arguments. The first is a
        reference to a callback routine. It will be called as links are
        found. If a callback is not provided, then links are just
        accumulated internally and can be retrieved by calling the
        $p->links() method.

        The $base argument is an optional base URL used to absolutize
        all URLs found. You need to have the URI module installed if
        you provide $base.

        The callback is called with the lowercase tag name as first
        argument, followed by all link attributes as separate key/value
        pairs. All non-link attributes are removed.

    $p->links
        Returns a list of all links found in the document. The returned
        values are anonymous arrays with the following elements:

            [$tag, $attr1 => $url1, $attr2 => $url2, ...]

        The $p->links method also truncates the internal link list, so
        if the method is called twice without any parsing in between,
        the second call returns an empty list. Also note that $p->links
        will always be empty if a callback routine was provided when
        the HTML::LinkExtor was created.
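
A minimal sketch of the accumulation mode described above (no callback passed to the constructor, links retrieved with $p->links); the input file name page.html is only a placeholder:

    use strict;
    use warnings;
    use HTML::LinkExtor;

    # No callback: links accumulate internally until links() is called.
    my $p = HTML::LinkExtor->new;
    $p->parse_file("page.html");      # placeholder input file

    for my $link ($p->links) {
        my ($tag, %attrs) = @$link;   # [$tag, $attr1 => $url1, ...]
        print "$tag: $_\n" for values %attrs;
    }

    # A second $p->links call here would return an empty list, because
    # links() truncates the internal list as it returns it.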
EXAMPLE
    This is an example showing how you can extract links from a document
    received using LWP:

        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI::URL;

        $url = "http://www.perl.org/";  # for instance
        $ua = LWP::UserAgent->new;

        # Set up a callback that collects image links
        my @imgs = ();
        sub callback {
            my($tag, %attr) = @_;
            return if $tag ne 'img';  # we only look closer at <img ...>
            push(@imgs, values %attr);
        }

        # Make the parser. Unfortunately, we don't know the base yet
        # (it might be different from $url)
        $p = HTML::LinkExtor->new(\&callback);

        # Request document and parse it as it arrives
        $res = $ua->request(HTTP::Request->new(GET => $url),
                            sub { $p->parse($_[0]) });

        # Expand all image URLs to absolute ones
        my $base = $res->base;
        @imgs = map { $_ = url($_, $base)->abs; } @imgs;

        # Print them out
        print join("\n", @imgs), "\n";
SEE ALSO
    HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
    Copyright 1996-2001 Gisle Aas.

    This library is free software; you can redistribute it and/or modify
    it under the same terms as Perl itself.