Selecting information from several web pages...
Post 31356 by Vishnu in UNIX for Dummies Questions & Answers, 11-06-2002 03:36 PM

Hi All!

Is this possible?

I know of several hundred URLs linking to similar-looking HP-UX man pages, like the ones below. In these URLs only the last path components (separated by /) change, following a numbering pattern, so the full list can be generated...

http://docs.hp.com/hpux/onlinedocs/B...00/31-con.html
http://docs.hp.com/hpux/onlinedocs/B...00/34-con.html
http://docs.hp.com/hpux/onlinedocs/B...3/331-con.html

I know that all these pages follow a certain pattern in their layout. I want to build a small consolidated report of all the HP-UX commands listed in these pages, keeping only, say, their descriptions, examples, etc...

If I have a command that loops over such URLs and on each pass returns the page contents, I can filter out the sections I want...

Is this possible? Any hint is highly appreciated...

Also, is there a UNIX utility that converts HTML to simple readable text?

Cheers!
Vishnu.
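A minimal sketch of the loop-and-filter idea, assuming lynx(1) is installed; BASEURL below is a placeholder for the real document path (elided as "B..." in the links above), and the section headings are illustrative guesses:

    #!/bin/sh
    # Sketch: fetch each numbered page as plain text and keep selected sections.
    BASEURL="http://docs.hp.com/hpux/onlinedocs/BASEURL"   # placeholder prefix

    for n in 31 34 331; do
        url="$BASEURL/${n}-con.html"
        # lynx -dump renders the HTML as readable text; -nolist drops the
        # numbered link footnotes.
        lynx -dump -nolist "$url" |
            awk '/^DESCRIPTION/,/^EXAMPLES/' >> report.txt   # crude section filter
    done

lynx -dump (or html2text, where installed) also answers the HTML-to-text question; wget -q -O - "$url" would work equally well as the fetch step.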
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Dynamic web pages for Unix Web Server

Hi, my company is considering a new development of our web site, which used to run on Apache over Solaris. The company that is going to do this for us only knows about developing it in ASP. I guess this means we'll have to have another IIS server on NT for these dynamic pages :( What are... (5 Replies)
Discussion started by: me2unix

2. Shell Programming and Scripting

Count links in all of my web pages

Count the number of hyperlinks in all web pages in the current directory and all of its sub-directories, i.e. in all files of type "*htm" and "*html". I want the output to look something like this: Total number of web pages: (number) Total number of links: (number) Average number of links... (1 Reply)
Discussion started by: phillip
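A rough sketch of one approach, assuming a "link" means an occurrence of <a href and that grep supports -o:

    #!/bin/sh
    # Sketch: count pages and <a href links under the current directory.
    pages=$(find . \( -name '*.htm' -o -name '*.html' \) | wc -l)
    links=$(find . \( -name '*.htm' -o -name '*.html' \) \
                -exec grep -io '<a href' {} + | wc -l)
    echo "Total number of web pages: $pages"
    echo "Total number of links:     $links"
    echo "Average number of links:   $(expr $links / $pages)"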

3. UNIX for Dummies Questions & Answers

Browse Web pages through command line

Is there any way to browse web pages while on the command line? I know wget can download pages, but I was wondering if there was an option other than that. (2 Replies)
Discussion started by: vroomlicious
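For what it's worth, text-mode browsers cover this, assuming one of them is installed:

    lynx http://www.example.com/     # full-screen text browser
    links http://www.example.com/    # similar; w3m is another option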

4. Shell Programming and Scripting

Investigating web pages in awk

Hello. I want to make an awk script to search an HTML file and output all the links (e.g. .html, .htm, .jpg, .doc, .pdf, etc.) inside it. Also, I want the links that are output to be split into 3 groups (separated by an empty line), the first group with links to other webpages (.html, .htm, etc.),... (1 Reply)
Discussion started by: adpe
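A naive awk sketch along these lines; it assumes double-quoted or bare href values, will miss unusual markup, and page.html is a placeholder:

    # Split input on "<" so each tag starts a record, then harvest hrefs.
    awk 'BEGIN { RS = "<" }
         /^[aA] [^>]*href=/ {
             sub(/.*href="?/, ""); sub(/["> ].*/, "")
             if ($0 ~ /\.(html?|php)$/)          pages[++np] = $0
             else if ($0 ~ /\.(jpe?g|gif|png)$/) imgs[++ni]  = $0
             else                                other[++no] = $0
         }
         END {
             for (n = 1; n <= np; n++) print pages[n]
             print ""
             for (n = 1; n <= ni; n++) print imgs[n]
             print ""
             for (n = 1; n <= no; n++) print other[n]
         }' page.html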

5. UNIX for Dummies Questions & Answers

curl command with web pages

I can't quite seem to understand what the curl command does with a web address. I tried this: curl O'Reilly Media: Tech Books, Conferences, Courses, News but I just got the first few lines of a web page, and it's nowhere on my machine. Can someone elaborate? (2 Replies)
Discussion started by: Straitsfan
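For reference: curl writes the fetched page to standard output unless told otherwise, so nothing lands on disk by default. A hypothetical example:

    curl http://www.example.com/                # page scrolls past on stdout
    curl -o page.html http://www.example.com/   # -o saves it to a named file
    curl -O http://www.example.com/index.html   # -O keeps the remote file name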

6. UNIX for Dummies Questions & Answers

Forcing web pages to anti-alias

Here is an observation that has started to puzzle me; perhaps someone can enlighten me. When a web page (or desktop page, for that matter) uses the standard font, it is not anti-aliased unless the user opts in via the desktop settings. It appears, however, that fonts are not... (0 Replies)
Discussion started by: figaro

7. Shell Programming and Scripting

Checking Web Pages?

Hey guys, Unfortunately, I cannot use wget on our systems... I am looking for another way for a UNIX script to test web pages and let me know if they are up or down for some of our applications. Has anyone seen this before? Thanks, Ryan (2 Replies)
Discussion started by: rwcolb90
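A minimal sketch using curl instead of wget; the URLs are placeholders, and it assumes curl is permitted where wget is not:

    #!/bin/sh
    # Report UP/DOWN for each URL based on the HTTP status code.
    for url in http://host1/app/index.html http://host2/app/index.html; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
        if [ "$code" = "200" ]; then
            echo "$url is UP"
        else
            echo "$url is DOWN (HTTP $code)"
        fi
    done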

8. Shell Programming and Scripting

Get web pages and compare

Hello, I'm writing a shell script to wget the content of web pages from multiple servers into a variable and compare them; if they match, return 0, otherwise return 2. #!/bin/bash # Cluster 1 CLUSTER1_SERVERS="srv1 srv2 srv3 srv4" CLUSTER1_APPLIS="test/version.html test2.version.jsp" # List of... (4 Replies)
Discussion started by: gtam

9. Shell Programming and Scripting

Get web pages and compare

Hello, I'm writing a script to get the content of web pages on different machines and compare them using their md5 hashes. Here is my code: #!/bin/bash # Cluster 1 CLUSTER1_SERVERS="srv01:7051 srv02:7052 srv03:7053 srv04:7054" CLUSTER1_APPLIS="test/version.html test2/version.html... (2 Replies)
Discussion started by: gtam
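A minimal sketch of the md5-comparison idea from the last two threads, assuming curl and md5sum are installed; the server and page names are placeholders:

    #!/bin/sh
    # Exit 0 if every server returns an identical copy of the page, 2 otherwise.
    SERVERS="srv01:7051 srv02:7052 srv03:7053 srv04:7054"
    PAGE="test/version.html"

    ref=""
    for s in $SERVERS; do
        sum=$(curl -s "http://$s/$PAGE" | md5sum | cut -d' ' -f1)
        [ -z "$ref" ] && ref="$sum"
        if [ "$sum" != "$ref" ]; then
            echo "content mismatch on $s"
            exit 2
        fi
    done
    echo "all copies identical"
    exit 0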
uri(n)                    Tcl Uniform Resource Identifier Management                    uri(n)

NAME
       uri - URI utilities

SYNOPSIS
       package require Tcl 8.2
       package require uri ?1.1.1?

       uri::split url
       uri::join ?key value?...
       uri::resolve base url
       uri::isrelative url
       uri::geturl url ?options...?
       uri::canonicalize uri
       uri::register schemeList script

DESCRIPTION
       This package contains two parts. First, it provides regular expressions for a number of url/uri schemes. Second, it provides a number of commands for manipulating urls/uris and fetching data specified by them. For the latter this package analyses the requested url/uri and then dispatches it to the appropriate package (http, ftp, ...) for actual fetching.

COMMANDS
       uri::split url
              uri::split takes a single url, decodes it and then returns a list of key/value pairs suitable for array set containing the constituents of the url. If the scheme is missing from the url it defaults to http. Currently only the schemes http, ftp, mailto, urn and file are supported. See section EXTENDING on how to expand that range.

       uri::join ?key value?...
              uri::join takes a list of key/value pairs (generated by uri::split, for example) and returns the canonical url they represent. Currently only the schemes http, ftp, mailto, urn and file are supported. See section EXTENDING on how to expand that range.

       uri::resolve base url
              uri::resolve resolves the specified url relative to base. In other words: a non-relative url is returned unchanged, whereas for a relative url the missing parts are taken from base and prepended to it. The result of this operation is returned. For an empty url the result is base.

       uri::isrelative url
              uri::isrelative determines whether the specified url is absolute or relative.

       uri::geturl url ?options...?
              uri::geturl decodes the specified url and then dispatches the request to the package appropriate for the scheme found in the url. The command assumes that the package to handle the given scheme either has the same name as the scheme itself (including possible capitalization) followed by ::geturl, or, in case of this failing, has the same name as the scheme itself (including possible capitalization). It further assumes that whatever package was loaded provides a geturl-command in the namespace of the same name as the package itself. This command is called with the given url and all given options. Currently geturl does not handle any options itself. Note: file-urls are an exception to the rule described above. They are handled internally. It is not possible to specify results of the command. They depend on the geturl-command for the scheme the request was dispatched to.

       uri::canonicalize uri
              uri::canonicalize returns the canonical form of a URI. The canonical form of a URI is one where relative path specifications, i.e. . and .., have been resolved.

       uri::register schemeList script
              uri::register registers the first element of schemeList as a new scheme and the remaining elements as aliases for this scheme. It creates the namespace for the scheme and executes the script in the new namespace. The script has to declare variables containing the regular expressions relevant to the scheme. At least the variable schemepart has to be declared, as that one is used to extend the variables keeping track of the registered schemes.

SCHEMES
       In addition to the commands mentioned above this package provides regular expressions to recognize urls for a number of url schemes. For each supported scheme a namespace of the same name as the scheme itself is provided inside of the namespace uri, containing the variable url whose contents are a regular expression to recognize urls of that scheme. Additional variables may contain regular expressions for parts of urls for that scheme. The variable uri::schemes contains a list of all supported schemes. Currently these are ftp, file, http, gopher, mailto, news, wais and prospero.

EXTENDING
       Extending the range of schemes supported by uri::split and uri::join is easy because both commands do not handle the request by themselves but dispatch it to another command in the uri namespace using the scheme of the url as criterion. uri::split and uri::join call Split[string totitle <scheme>] and Join[string totitle <scheme>] respectively.

CREDITS
       Original code by Andreas Kupries. Modularisation by Steve Ball.

KEYWORDS
       uri, url, fetching information, www, http, ftp, mailto, gopher, wais, prospero, file

uri                                            1.1.1                                            uri(n)
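A brief Tcl illustration of the split/join/resolve commands described above; a sketch only, where the example URL is arbitrary and the exact set of keys returned may differ by package version:

    package require uri

    ;# Split a URL into key/value pairs suitable for array set.
    array set parts [uri::split http://docs.hp.com/hpux/onlinedocs/index.html]
    puts $parts(host)                          ;# docs.hp.com

    ;# Rebuild the canonical URL from those pairs (eval keeps this 8.2-compatible).
    puts [eval uri::join [array get parts]]

    ;# Resolve a relative link against a base URL.
    puts [uri::resolve http://docs.hp.com/hpux/onlinedocs/ 31-con.html]
    ;# -> http://docs.hp.com/hpux/onlinedocs/31-con.html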