11-06-2002
Selecting information from several web pages...
Hi All!
Is this possible?
I know of several hundred URLs pointing to similar-looking HP-UX man pages, like the ones below. In these URLs only the last words separated by / change in numbering, so we can generate them...
http://docs.hp.com/hpux/onlinedocs/B...00/31-con.html
http://docs.hp.com/hpux/onlinedocs/B...00/34-con.html
http://docs.hp.com/hpux/onlinedocs/B...3/331-con.html
I know that all these pages follow a certain pattern in their layout. I want to build a small consolidated report of all the HP-UX commands listed in these pages, keeping only, say, their descriptions, examples, etc...
If I have a command that loops over such URLs and returns the page contents on each pass, I can filter out the sections I want...
Is this possible? Any hint is highly appreciated...
Also, is there a UNIX utility that converts HTML to plain, readable text?
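Something along these lines is what I have in mind (a rough sketch only; it assumes lynx is installed, and the directory name and the DESCRIPTION/EXAMPLES patterns are just placeholders):

#!/bin/sh
# Rough sketch: loop over generated page names, dump each page as plain
# text with lynx, and keep only the section between two headings.

for page in 31-con 34-con 331-con        # placeholder page names
do
    # SOMEDIR stands in for the real path component in the URLs above
    url="http://docs.hp.com/hpux/onlinedocs/SOMEDIR/$page.html"

    # lynx -dump renders the HTML as readable text; sed keeps the lines
    # between the DESCRIPTION and EXAMPLES headings (placeholder patterns)
    lynx -dump "$url" | sed -n '/DESCRIPTION/,/EXAMPLES/p' >> report.txt
done

(lynx -dump, or the html2text utility where it is installed, also covers the HTML-to-text question above.)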
Cheers!
Vishnu.
9 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi,
my company is considering a new development of our web site, which used to run on Apache over Solaris.
The company that is going to do this for us only knows how to develop it in ASP.
I guess this means we'll have to have another IIS server on NT for these dynamic pages :(
What are... (5 Replies)
Discussion started by: me2unix
2. Shell Programming and Scripting
Count the number of hyperlinks in all web pages in the current directory and all of its sub-directories. Count in all files of type "*htm" and "*html".
I want the output to look something like this:
Total number of web pages: (number)
Total number of links: (number)
Average number of links... (1 Reply)
Discussion started by: phillip
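A rough counting sketch for a task like this (it assumes GNU grep for -o and treats every href= occurrence as a link, which is a simplification):

#!/bin/sh
# Count web pages and href links under the current directory.
pages=$(find . -type f \( -name '*.html' -o -name '*.htm' \) | wc -l)
links=$(find . -type f \( -name '*.html' -o -name '*.htm' \) \
            -exec grep -o 'href=' {} \; | wc -l)

echo "Total number of web pages: $pages"
echo "Total number of links: $links"
if [ "$pages" -gt 0 ]; then
    echo "Average number of links: $((links / pages))"   # integer average
fi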
3. UNIX for Dummies Questions & Answers
Is there any way to browse web pages while on the command line?
I know wget can download pages, but I was wondering if there was an option other than that. (2 Replies)
Discussion started by: vroomlicious
4. Shell Programming and Scripting
Hello. I want to make an awk script that searches an HTML file and outputs all the links (e.g. .html, .htm, .jpg, .doc, .pdf, etc.) inside it. I also want the output links to be split into 3 groups (separated by an empty line), the first group with links to other web pages (.html, .htm, etc.),... (1 Reply)
Discussion started by: adpe
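One possible shape for such a script (a sketch only; the file name, the grouping rules and the href-only matching are assumptions, and it is not a real HTML parser):

#!/bin/sh
# Pull href targets out of an HTML file and print them in three groups
# (web pages, other files, images) separated by blank lines.
awk '
{
    line = $0
    while (match(line, /href="[^"]*"/)) {
        link = substr(line, RSTART + 6, RLENGTH - 7)   # strip href=" and closing quote
        if      (link ~ /\.(html|htm)$/)          pages[++np]  = link
        else if (link ~ /\.(jpg|jpeg|gif|png)$/)  images[++ni] = link
        else                                      others[++no] = link
        line = substr(line, RSTART + RLENGTH)
    }
}
END {
    for (i = 1; i <= np; i++) print pages[i]
    print ""
    for (i = 1; i <= no; i++) print others[i]
    print ""
    for (i = 1; i <= ni; i++) print images[i]
}' page.html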
5. UNIX for Dummies Questions & Answers
I can't quite seem to understand what the curl command does with a web address. I tried this:
curl O'Reilly Media: Tech Books, Conferences, Courses, News
but I just got the first few lines of a web page, and it's nowhere on my machine. Can someone elaborate? (2 Replies)
Discussion started by: Straitsfan
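For reference, curl prints the fetched page to standard output by default; it only writes a file if you redirect the output or pass -o or -O (the URL here is just an example):

# print the page to the terminal (default behaviour)
curl http://www.example.com/

# save it to a chosen file name
curl -o page.html http://www.example.com/

# save it under the remote file name (capital -O)
curl -O http://www.example.com/index.html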
6. UNIX for Dummies Questions & Answers
Here is an observation that has started to riddle me and perhaps someone can enlighten me. When a web page (or desktop page for that matter) uses the standard font, it is not anti-aliased, unless the user opts in to do so via the desktop settings.
It appears however that fonts are not... (0 Replies)
Discussion started by: figaro
7. Shell Programming and Scripting
Hey guys,
Unfortunately, I cannot use wget on our systems...
I am looking for another way for a UNIX script to test web pages and let me know if they are up or down for some of our applications.
Has anyone seen this before?
Thanks,
Ryan (2 Replies)
Discussion started by: rwcolb90
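If curl is available even though wget is not, a check along these lines would work (the URL list is a placeholder):

#!/bin/sh
# Report each URL as UP or DOWN based on the HTTP status code.
for url in http://server1/app/index.html http://server2/app/index.html
do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$code" = "200" ]; then
        echo "UP   $url"
    else
        echo "DOWN $url (HTTP $code)"
    fi
done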
8. Shell Programming and Scripting
Hello,
I'm writing a shell script to wget the content of web pages from multiple servers into a variable and compare them:
if they match, return 0, otherwise return 2
#!/bin/bash
# Cluster 1
CLUSTER1_SERVERS="srv1 srv2 srv3 srv4"
CLUSTER1_APPLIS="test/version.html test2.version.jsp"
# List of... (4 Replies)
Discussion started by: gtam
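The comparison step could look roughly like this (a sketch only; the server and page names are placeholders modelled on the variables above):

#!/bin/bash
# Fetch the same page from two servers into variables and compare them.
content1=$(wget -q -O - "http://srv1/test/version.html")
content2=$(wget -q -O - "http://srv2/test/version.html")

if [ "$content1" = "$content2" ]; then
    exit 0      # contents match
else
    exit 2      # contents differ
fi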
9. Shell Programming and Scripting
Hello
I'm writing a script to get the content of web pages on different machines and compare them using their md5 hashes.
Here is my code:
#!/bin/bash
# Cluster 1
CLUSTER1_SERVERS="srv01:7051 srv02:7052 srv03:7053 srv04:7054"
CLUSTER1_APPLIS="test/version.html test2/version.html... (2 Replies)
Discussion started by: gtam
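A minimal version of the hash comparison might look like this (a sketch; it assumes wget and md5sum are available, and the server list and page are placeholders):

#!/bin/bash
# Hash the same page on every server and check that all hashes match.
SERVERS="srv01:7051 srv02:7052"
PAGE="test/version.html"

first=""
for srv in $SERVERS
do
    hash=$(wget -q -O - "http://$srv/$PAGE" | md5sum | awk '{print $1}')
    [ -z "$first" ] && first=$hash
    if [ "$hash" != "$first" ]; then
        echo "MISMATCH on $srv"
        exit 2
    fi
done
echo "all servers serve identical content"
exit 0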
LEARN ABOUT DEBIAN
urlwatch
URLWATCH(1) User Commands URLWATCH(1)
NAME
urlwatch - Watch web pages and arbitrary URLs for changes
SYNOPSIS
urlwatch [options]
DESCRIPTION
urlwatch watches a list of URLs for changes and prints out unified diffs of the changes. You can filter always-changing parts of websites
by providing a "hooks.py" script.
OPTIONS
--version
show program's version number and exit
-h, --help
show the help message and exit
-v, --verbose
Show debug/log output
--urls=FILE
Read URLs from the specified file
--hooks=FILE
Use specified file as hooks.py module
-e, --display-errors
Include HTTP errors (404, etc..) in the output
ADVANCED FEATURES
urlwatch includes some advanced features that you have to activate by creating a hooks.py file that specifies for which URLs to use a specific feature. You can also use the hooks.py file to filter trivially-varying elements of a web page.
ICALENDAR FILE PARSING
This module allows you to parse .ics files that are in iCalendar format and provide a very simplified text-based format for the diffs. Use
it like this in your hooks.py file:
from urlwatch import ical2txt

def filter(url, data):
    if url.endswith('.ics'):
        return ical2txt.ical2text(data).encode('utf-8') + data
    # ...you can add more hooks here...
HTML TO TEXT CONVERSION
There are three methods of converting HTML to text in the current version of urlwatch: "lynx" (default), "html2text" and "re". The former
two use command-line utilities of the same name to convert HTML to text, and the last one uses a simple regex-based tag stripping method
(needs no extra tools). Here is an example of using it in your hooks.py file:
from urlwatch import html2txt

def filter(url, data):
    if url.endswith('.html') or url.endswith('.htm'):
        return html2txt.html2text(data, method='lynx')
    # ...you can add more hooks here...
FILES
~/.urlwatch/urls.txt
A list of HTTP/FTP URLs to watch (one URL per line)
~/.urlwatch/lib/hooks.py
A Python module that can be used to filter contents
~/.urlwatch/cache/
The state of web pages is saved in this folder
AUTHOR
Thomas Perl <thp@thpinfo.com>
WEBSITE
http://thpinfo.com/2008/urlwatch/
urlwatch 1.11 July 2010 URLWATCH(1)