01-15-2006
more AIX links
I saw the great interest in the links that Perderabo posted, and I know I appreciated them; sooner or later they prove useful. Here are some more links. Sorry that some are specific to my environment.
= = = = = = = = = = = = = = = = = = = =
IBM pSeries firmware and microcode website.
IBM Support
AIX Maintenance level update website.
Quick links for AIX fixes
IBM Publib documentation link.
pSeries and AIX Information Center
edit by bakunin: thanks for the input, i have added the links to the pinned posting by perderabo
8 More Discussions You Might Find Interesting
1. HP-UX
Manufacturer Links
Homepage: Hewlett Packard Enterprise (HPE) (www.hp.com is for the consumer market)
Documentation: HPE QuickSpecs
Support: https://support.hpe.com
Community: https://community.hpe.com
FreeWare
The HP-UX Porting and Archive Centre (UK)
FTP Servers... (0 Replies)
Discussion started by: Perderabo
2. AIX
Manufacturer Links
General Information
Home Page: IBM United States
Documentation/Information: IBM System p - UNIX servers: Support and services
pSeries and AIX Information Center
Developerworks AIX Wiki: AIX Wiki
AIX for System Administrators
In-depth information from IBM:
IBM... (0 Replies)
Discussion started by: Perderabo
3. UNIX for Dummies Questions & Answers
How can we find the different links (hard and soft) to a file?
I have a file in my directory and I need to know the different links attached to this file.
Can anyone please help?
Thanks... (1 Reply)
Discussion started by: uniqmaniak
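For the question above, a minimal sketch of an answer (assuming a GNU userland; the paths are hypothetical scratch files): `ls -l` shows the hard-link count, GNU `find -samefile` lists the other names sharing the file's inode, and `find -type l` turns up symbolic links.

```shell
# Sketch: enumerate the hard links and symlinks attached to a file
# (hypothetical scratch files, GNU find assumed).
dir=$(mktemp -d)
touch "$dir/original"
ln "$dir/original" "$dir/hardcopy"   # second hard link (same inode)
ln -s original "$dir/softcopy"       # symbolic link

# Field 2 of ls -l is the hard-link count (2 here)
ls -l "$dir/original"

# Every name under $dir sharing the file's inode (GNU extension)
find "$dir" -samefile "$dir/original"

# Every symlink under $dir; ls -l shows their targets
find "$dir" -type l
```

Note that `-samefile` compares inodes, so it finds hard links; symlinks pointing at the file keep their own inode and are found separately with `-type l`.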
4. UNIX for Advanced & Expert Users
Why does a directory have two links by default?
What's the purpose? (1 Reply)
Discussion started by: nagalenoj
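The short answer: a new directory is reachable under two names, its entry in the parent and its own "." entry, so the link count starts at 2; each subdirectory's ".." adds one more. A quick sketch (counts are typical of classic UNIX filesystems such as ext4 or JFS; some newer filesystems report directory link counts differently):

```shell
# A freshly created directory has link count 2: its name in the parent
# plus its own "." entry; each subdirectory's ".." adds one more.
parent=$(mktemp -d)
mkdir "$parent/demo"
set -- $(ls -ld "$parent/demo"); c1=$2
echo "links after mkdir: $c1"       # typically 2

mkdir "$parent/demo/sub"            # sub/.. now also names demo
set -- $(ls -ld "$parent/demo"); c2=$2
echo "links with one subdir: $c2"   # typically 3
```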
5. AIX
I need to copy a directory from a production system to a test system on the same AIX server. However, I need to ensure that the soft links are preserved as part of the copy (therefore I guess the cp command is not the way to go).
What command can I use in AIX to achieve this copy?
Thanks in... (2 Replies)
Discussion started by: jimthompson
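One classic, portable answer to the question above is to pipe tar, which recreates symbolic links rather than following them. A sketch with hypothetical scratch directories standing in for the production and test paths:

```shell
# Copy a tree while preserving symbolic links by piping tar
# (hypothetical paths standing in for the real source and destination).
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/file"
ln -s file "$src/slink"              # a soft link inside the tree

( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - )

ls -l "$dst"                         # slink arrives as a symlink, not a copy of file
```

The same pipeline works across hosts by inserting ssh/rsh between the two tar processes.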
6. AIX
Hello,
I have IHS 6.1 installed and want to publish a directory with an index of files, directories and symlinks / symbolic links / soft links, the last being created with the usual UNIX command "ln -s .... ....".
In httpd.conf I've tried the following for that directory:
Options Indexes... (1 Reply)
Discussion started by: zaxxon
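For reference, a directory block of the kind the poster describes might look like the sketch below (written for the Apache 2.0 base that IHS 6.1 builds on; the path is hypothetical). `FollowSymLinks` is what lets the index traverse symbolic links:

```apache
# Hypothetical published directory; FollowSymLinks lets the
# auto-generated index serve entries through symbolic links.
<Directory "/opt/IBM/HTTPServer/htdocs/pub">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```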
7. Solaris
When looking at files in a directory using ls, how can I tell whether I have a hard link or a soft link? (11 Replies)
Discussion started by: Harleyrci
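A minimal sketch of an answer: in `ls -l` output a soft link's mode string starts with `l` and shows an `-> target` arrow, while a hard link looks like an ordinary file, so you spot it by a link count greater than 1 or by matching inodes with `ls -li`. (File names here are hypothetical.)

```shell
d=$(mktemp -d)
touch "$d/plain"
ln "$d/plain" "$d/hard"       # hard link: mode starts with '-', link count is 2
ln -s plain "$d/soft"         # soft link: mode starts with 'l', shows '-> plain'

ls -li "$d"                   # -i prints inodes: plain and hard share one

# Scriptable checks
[ -L "$d/soft" ]  && echo "soft is a symbolic link"
[ ! -L "$d/hard" ] && echo "hard is not a symbolic link"
```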
8. AIX
Hi,
I'm logged in as root on an AIX box.
Which command will list all the soft links and hard links present on the server? (2 Replies)
Discussion started by: newtoaixos
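A sketch of the usual approach: `find -type l` lists symbolic links, and regular files with more than one hard link match `-links +1`. On a real server you would start at `/` (which can take a while); the example uses a scratch directory instead.

```shell
top=$(mktemp -d)              # stand-in for scanning / on a real server
touch "$top/f"
ln "$top/f" "$top/f2"         # extra hard link
ln -s f "$top/s"              # soft link

# All symbolic links under $top
find "$top" -type l

# All regular files with more than one hard link
find "$top" -type f -links +1
```

Note there is no single "the hard link" to list: every name of a multiply linked file is equally a hard link, so the second find prints all of them.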
LEARN ABOUT SUSE
html::linkextor
HTML::LinkExtor(3) User Contributed Perl Documentation HTML::LinkExtor(3)
NAME
HTML::LinkExtor - Extract links from an HTML document
SYNOPSIS
        require HTML::LinkExtor;
        $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
        sub cb {
            my($tag, %links) = @_;
            print "$tag @{[%links]}\n";
        }
        $p->parse_file("index.html");
DESCRIPTION
HTML::LinkExtor is an HTML parser that extracts links from an HTML document. It is a subclass of HTML::Parser, which means that the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.
$p = HTML::LinkExtor->new
$p = HTML::LinkExtor->new( $callback )
$p = HTML::LinkExtor->new( $callback, $base )
The constructor takes two optional arguments. The first is a reference to a callback routine. It will be called as links are found. If
a callback is not provided, then links are just accumulated internally and can be retrieved by calling the $p->links() method.
The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module installed if you provide
$base.
The callback is called with the lowercase tag name as first argument, and then all link attributes as separate key/value pairs. All
non-link attributes are removed.
$p->links
Returns a list of all links found in the document. The returned values will be anonymous arrays with the following elements:
        [$tag, $attr1 => $url1, $attr2 => $url2, ...]
The $p->links method also truncates the internal link list. This means that if the method is called twice without any parsing in between, the second call will return an empty list.
Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor was created.
EXAMPLE
This is an example showing how you can extract links from a document received using LWP:
        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI::URL;

        $url = "http://www.perl.org/";  # for instance
        $ua = LWP::UserAgent->new;

        # Set up a callback that collects image links
        my @imgs = ();
        sub callback {
            my($tag, %attr) = @_;
            return if $tag ne 'img';  # we only look closer at <img ...>
            push(@imgs, values %attr);
        }

        # Make the parser.  Unfortunately, we don't know the base yet
        # (it might be different from $url)
        $p = HTML::LinkExtor->new(\&callback);

        # Request the document and parse it as it arrives
        $res = $ua->request(HTTP::Request->new(GET => $url),
                            sub {$p->parse($_[0])});

        # Expand all image URLs to absolute ones
        my $base = $res->base;
        @imgs = map { $_ = url($_, $base)->abs; } @imgs;

        # Print them out
        print join("\n", @imgs), "\n";
SEE ALSO
HTML::Parser, HTML::Tagset, LWP, URI::URL
COPYRIGHT
Copyright 1996-2001 Gisle Aas.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl v5.12.1 2009-02-09 HTML::LinkExtor(3)