UNIX for Advanced & Expert Users -- Post 302176925 by robertmcol on Wednesday 19th of March 2008 03:23:30 PM

End of Life / Life Cycle Information

Hello everyone, I was searching for locations where I can get End of Life information on multiple versions of Unix.

I have found some information, which I list below. What I have not found or confirmed is where I can get this information for the following systems:

DEC Unix/OSF1 V4.0D
NCR Unix SVR4 MP-RAS Rel 3.02.
HP-UX B.11.11 U 9000/785
HP-UX 11-i v 11.23
HP-UX B.11.00 A 9000/782


Any assistance that you can provide is much appreciated.
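
For reference, a rough sketch of how the release string above can be pulled consistently on each box so it can be matched against whatever EOL table the vendor publishes (Perl; the AIX "oslevel -r" fallback is the only platform-specific assumption here):

        #!/usr/bin/perl
        # Rough sketch: print a normalized OS/release string for the local box
        # so it can be matched against a vendor end-of-life table.
        use strict;
        use warnings;

        chomp(my $os = `uname -s`);
        my $cmd = ($os eq 'AIX') ? 'oslevel -r' : 'uname -srvm';
        chomp(my $release = `$cmd`);
        print "$os: $release\n";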


Quote:
    SUN Solaris
    Release      LS Date      Phase 1 End   Phase 2 End
    Solaris 10   TBD          TBD           TBD
    Solaris 9    TBD          TBD           TBD
    Solaris 8    2/16/2007    3/31/2009     3/31/2012
    Solaris 7    8/15/2003    8/15/2005     8/15/2008

    IBM AIX
    Version      Service Discontinued
    5.01.0       4/01/2006
    5.02.0       4/30/2009
    5.1          4/01/2006
    5.2          9/30/2008
    4.3.3        12/31/2006

What I am not clear on here is why version 5.02.0 has a later service-discontinued date than versions 5.1 and 5.2; help here would also be appreciated.
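
As an aside on working with tables like the one quoted above: once the dates are confirmed, it is easy enough to keep them in a small lookup structure and flag anything past its end-of-support date. A minimal sketch (the hash just re-keys the quoted Solaris Phase 2 dates, which should be re-verified with the vendor before acting on them):

        #!/usr/bin/perl
        # Minimal sketch: warn about releases whose quoted "Phase 2 End" date
        # has already passed.  Dates copied from the table above; re-verify
        # them with the vendor before relying on this.
        use strict;
        use warnings;
        use Time::Local qw(timelocal);

        my %phase2_end = (                    # release => [ month, day, year ]
            'Solaris 8' => [ 3, 31, 2012 ],
            'Solaris 7' => [ 8, 15, 2008 ],
        );

        for my $release (sort keys %phase2_end) {
            my ($m, $d, $y) = @{ $phase2_end{$release} };
            my $end = timelocal(0, 0, 0, $d, $m - 1, $y);
            printf "%-10s Phase 2 ends %02d/%02d/%04d %s\n",
                   $release, $m, $d, $y,
                   ($end < time ? '-- past end of support' : '-- still in Phase 2');
        }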
Thank you
Robertmcol
 

3 More Discussions You Might Find Interesting

1. News, Links, Events and Announcements

End of life date for errata support for our final Red Hat Linux distribution

Dear Red Hat Linux user, We are approaching the published end of life date for errata support for our final Red Hat Linux distribution. We'd like to remind you of this date and the options available to you for migrating your Red Hat Linux implementations: Red Hat Enterprise Linux and the... (0 Replies)
Discussion started by: google

2. AIX

AIX 5.2 End of Life

Hello, has anyone heard of plans to end of life AIX 5.2? We are currently a 5.2 shop and are bringing in some new servers and are debating using 5.3 on the new boxes. Also, does anyone know of a site or document that shows all the advantages of 5.3 over 5.2? Such as better jfs2 support etc. ... (3 Replies)
Discussion started by: zuessh

3. HP-UX

The life cycle of System logfiles

Hi, the log files of the system are located in /var/admin/syslog. I want to know how the files are generated. To be more clear, for example, old log files are deleted automatically from the system (is that configured? If yes, what is the criteria: is it file volume or file date or ..)... (5 Replies)
Discussion started by: enabbou
Web::Scraper(3pm)					User Contributed Perl Documentation					 Web::Scraper(3pm)

NAME
    Web::Scraper - Web Scraping Toolkit using HTML and CSS Selectors or XPath expressions
SYNOPSIS
        use URI;
        use Web::Scraper;

        # First, create your scraper block
        my $tweets = scraper {
            # Parse all LIs with the class "status", store them into a resulting
            # array 'tweets'.  We embed another scraper for each tweet.
            process "li.status", "tweets[]" => scraper {
                # And, in that array, pull in the elements with the class
                # "entry-content", "entry-date" and the link
                process ".entry-content", body => 'TEXT';
                process ".entry-date", when => 'TEXT';
                process 'a[rel="bookmark"]', link => '@href';
            };
        };

        my $res = $tweets->scrape( URI->new("http://twitter.com/miyagawa") );

        # The result has the populated tweets array
        for my $tweet (@{$res->{tweets}}) {
            print "$tweet->{body} $tweet->{when} (link: $tweet->{link})\n";
        }

    The structure would resemble this (visually):

        {
            tweets => [
                { body => $body, when => $date, link => $uri },
                { body => $body, when => $date, link => $uri },
            ]
        }
DESCRIPTION
    Web::Scraper is a web scraper toolkit, inspired by Ruby's equivalent Scrapi. It provides a DSL-ish
    interface for traversing HTML documents and returning a neatly arranged Perl data structure.

    The scraper and process blocks provide a method to define what segments of a document to extract. It
    understands HTML and CSS Selectors as well as XPath expressions.
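    As a small illustration of that CSS-selector/XPath equivalence, the two rules below extract the same
    text (the HTML snippet and the "heading" key are invented for this example):

        use Web::Scraper;

        my $html = '<html><body><h1 class="title">End of Life notices</h1></body></html>';

        # The same <h1 class="title"> element matched two ways.
        my $css   = scraper { process 'h1.title',             heading => 'TEXT' };
        my $xpath = scraper { process '//h1[@class="title"]', heading => 'TEXT' };

        print $css->scrape($html)->{heading},   "\n";    # End of Life notices
        print $xpath->scrape($html)->{heading}, "\n";    # End of Life notices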
METHODS
    scraper
        $scraper = scraper { ... };

    Creates a new Web::Scraper object by wrapping the DSL code that will be fired when the scrape method
    is called.

    scrape
        $res = $scraper->scrape(URI->new($uri));
        $res = $scraper->scrape($html_content);
        $res = $scraper->scrape($http_response);
        $res = $scraper->scrape($html_element);

    Retrieves the HTML from a URI, HTTP::Response, HTML::Tree or text string and creates a DOM object,
    then fires the callback scraper code to retrieve the data structure.

    If you pass a URI or HTTP::Response object, Web::Scraper will automatically guess the encoding of the
    content by looking at Content-Type headers and META tags. Otherwise you need to decode the HTML to
    Unicode before passing it to the scrape method.

    You can optionally pass the base URL when you pass the HTML content as a string instead of a URI or
    HTTP::Response:

        $res = $scraper->scrape($html_content, "http://example.com/foo");

    This way Web::Scraper can resolve the relative links found in the document.

    process
        scraper {
            process "tag.class", key => 'TEXT';
            process '//tag[contains(@foo, "bar")]', key2 => '@attr';
            process '//comment()', 'comments[]' => 'TEXT';
        };

    process is the method to find matching elements from HTML with a CSS selector or XPath expression,
    then extract text or attributes into the result stash.

    If the first argument begins with "//" or "id(" it is treated as an XPath expression, otherwise as a
    CSS selector.

        # <span class="date">2008/12/21</span>
        # date => "2008/12/21"
        process ".date", date => 'TEXT';

        # <div class="body"><a href="http://example.com/">foo</a></div>
        # link => URI->new("http://example.com/")
        process ".body > a", link => '@href';

        # <div class="body"><!-- HTML Comment here --><a href="http://example.com/">foo</a></div>
        # comment => " HTML Comment here "
        #
        # NOTES: Comment nodes are accessible when HTML::TreeBuilder::XPath
        # (version >= 0.14) and/or HTML::TreeBuilder::LibXML (version >= 0.13)
        # are installed.
        process "//div[contains(@class, 'body')]/comment()", comment => 'TEXT';

        # <div class="body"><a href="http://example.com/">foo</a></div>
        # link => URI->new("http://example.com/"), text => "foo"
        process ".body > a", link => '@href', text => 'TEXT';

        # <ul><li>foo</li><li>bar</li></ul>
        # list => [ "foo", "bar" ]
        process "li", "list[]" => "TEXT";

        # <ul><li id="1">foo</li><li id="2">bar</li></ul>
        # list => [ { id => "1", text => "foo" }, { id => "2", text => "bar" } ];
        process "li", "list[]" => { id => '@id', text => "TEXT" };
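    Pulling the pieces above together, a short self-contained sketch of scraping an HTML string with a
    base URL so that relative '@href' values come back resolved (the markup and result keys are invented
    for illustration):

        use Web::Scraper;
        use URI;

        my $html = join "\n",
            '<ul class="notices">',
            '  <li><a href="/eol/solaris8.html">Solaris 8</a></li>',
            '  <li><a href="/eol/solaris9.html">Solaris 9</a></li>',
            '</ul>';

        # One entry per matching <a>: link text plus the href attribute.
        my $notices = scraper {
            process "ul.notices li a", "links[]" => { name => 'TEXT', url => '@href' };
        };

        # The second argument is the base URL used to resolve relative links.
        my $res = $notices->scrape($html, "http://example.com/support/");

        for my $link (@{ $res->{links} }) {
            print "$link->{name}: $link->{url}\n";
        }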
EXAMPLES
    There are many examples in the "eg/" dir packaged in this distribution. It is recommended to look
    through these.
NESTED SCRAPERS
    TBD
FILTERS
    TBD
AUTHOR
    Tatsuhiko Miyagawa <miyagawa@bulknews.net>
LICENSE
    This library is free software; you can redistribute it and/or modify it under the same terms as Perl
    itself.
SEE ALSO
    <http://blog.labnotes.org/category/scrapi/>

    HTML::TreeBuilder::XPath

perl v5.14.2                                  2011-11-19                                  Web::Scraper(3pm)