...
...
but I want it in a dictionary with this format:
As you can see, there will be several URLs inside an HTML doc, so there will be keys that can contain many values (URLs).
...
...
Is each value of the dictionary:
(a) a list (or array) of URLs? or
(b) a comma-delimited string of URLs?
If you want (a), then try something like the following:
Disclaimer: Completely untested; I don't have the module at the moment.
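Since the suggested snippet itself is not shown here, this is a minimal sketch of option (a) using only the standard library's html.parser (the original answer likely relied on a dedicated HTML-parsing module); the sample HTML and the 'page.html' key are made-up placeholders:

```python
# Sketch: collect every <a href="..."> in a document into a dict whose
# value is a LIST of URLs (option (a)). Standard library only.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag into self.links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

# Stand-in for the real HTML document:
html_doc = '<a href="http://a.example">A</a><a href="http://b.example">B</a>'

collector = LinkCollector()
collector.feed(html_doc)

# One key per document; its value is the list of all URLs found in it.
urls = {'page.html': collector.links}
print(urls)  # {'page.html': ['http://a.example', 'http://b.example']}
```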
Hey, I'm new to Unix!
I am trying to create a web page in Unix and have done it all, but when I try to load it, it says "permission denied."
I have done chmod a+rx on the folder and file to make sure, but the permissions still won't let me.
Any ideas? Can anyone do a quick run-through of how to make a web page... (4 Replies)
Hello,
I am new to Unix, but wanted to know how we can fetch data from a web page (i.e., an HTML page); my requirement is to read an HTML page and create a flat file (text file) based on the contents of that HTML page.
Thanks
Imtiaz (3 Replies)
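One rough way to approach the requirement above in Python, using only the standard library: strip the HTML tags with html.parser and write the remaining text to a flat file. In practice the page would first be fetched (e.g. with urllib.request.urlopen(url).read().decode()); the sample document and output filename below are placeholders.

```python
# Sketch: convert an HTML document to plain text for a flat file.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Accumulate the visible text content of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def html_to_text(html_doc):
    """Return the text of html_doc, one chunk per line."""
    extractor = TextExtractor()
    extractor.feed(html_doc)
    return '\n'.join(extractor.chunks)

sample = '<html><body><h1>Title</h1><p>Some text.</p></body></html>'
text = html_to_text(sample)
print(text)  # Title
             # Some text.

# Writing the flat file would then be:
# with open('page.txt', 'w') as f:
#     f.write(text)
```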
Hey, uhh, this is my first post, and I was wondering:
how do I make a web page for, like, a small business or something?
Anything will help.
Thanks (3 Replies)
I'm 13 years of age and I am into computers. I am trying to learn how to make a webpage.
I could use the help and I would greatly appreciate it. (1 Reply)
Hello,
I'm a total newbie to HTTP commands, so I'm not sure how to do this. What I'd like is to write a C program to fetch the contents of an HTML page at a given address.
Could someone help with this?
Thanks in advance! (4 Replies)
Hello,
I have created a dictionary which has the following structure:
DOMAINWORD=(equivalent in English) gloss(es) in Hindi, each separated by a comma; (equivalent in English) gloss(es) in Hindi, each separated by a comma or a semicolon
An example will make this clear
... (13 Replies)
Hi,
I have written the following Python snippet to store words starting with a capital letter in a dictionary as keys, with the number of appearances as the value against each key.
#!/usr/bin/env python
import sys
import re
hash = {} # initialize an empty dictionary
for line in... (1 Reply)
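The snippet above is cut off at the loop; here is a guess at how a complete version might look, counting each capital-letter word with the re module (a hard-coded sample list stands in for whatever input the original actually read):

```python
# Sketch: count words that start with a capital letter.
# 'hash' keeps the original poster's variable name, though it shadows
# the built-in of the same name.
import re

hash = {}  # initialize an empty dictionary

lines = ["The Quick brown Fox", "The lazy Dog"]  # stand-in for the real input
for line in lines:
    for word in re.findall(r'\b[A-Z][A-Za-z]*\b', line):
        hash[word] = hash.get(word, 0) + 1

print(hash)  # {'The': 2, 'Quick': 1, 'Fox': 1, 'Dog': 1}
```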
My main aim here is to create a database of verbs in a language to Hindi. The output if it works well will be put up on a University site for researchers to use for Machine Translation. This because one of the main weaknesses of MT is in the area of verbs.
Sorry for the long post but the problem... (4 Replies)
Discussion started by: gimley
LEARN ABOUT DEBIAN
urlwatch
URLWATCH(1) User Commands URLWATCH(1)
NAME
urlwatch - Watch web pages and arbitrary URLs for changes
SYNOPSIS
urlwatch [options]
DESCRIPTION
urlwatch watches a list of URLs for changes and prints out unified diffs of the changes. You can filter always-changing parts of websites
by providing a "hooks.py" script.
OPTIONS
--version
show program's version number and exit
-h, --help
show the help message and exit
-v, --verbose
Show debug/log output
--urls=FILE
Read URLs from the specified file
--hooks=FILE
Use specified file as hooks.py module
-e, --display-errors
Include HTTP errors (404, etc.) in the output
ADVANCED FEATURES
urlwatch includes some advanced features that you have to activate by creating a hooks.py file that specifies for which URLs to use a specific feature. You can also use the hooks.py file to filter trivially-varying elements of a web page.
ICALENDAR FILE PARSING
This module allows you to parse .ics files that are in iCalendar format and provide a very simplified text-based format for the diffs. Use
it like this in your hooks.py file:
from urlwatch import ical2txt
def filter(url, data):
    if url.endswith('.ics'):
        return ical2txt.ical2text(data).encode('utf-8') + data
    # ...you can add more hooks here...
HTML TO TEXT CONVERSION
There are three methods of converting HTML to text in the current version of urlwatch: "lynx" (default), "html2text" and "re". The former
two use command-line utilities of the same name to convert HTML to text, and the last one uses a simple regex-based tag stripping method
(needs no extra tools). Here is an example of using it in your hooks.py file:
from urlwatch import html2txt
def filter(url, data):
    if url.endswith('.html') or url.endswith('.htm'):
        return html2txt.html2text(data, method='lynx')
    # ...you can add more hooks here...
FILES
~/.urlwatch/urls.txt
A list of HTTP/FTP URLs to watch (one URL per line)
~/.urlwatch/lib/hooks.py
A Python module that can be used to filter contents
~/.urlwatch/cache/
The state of web pages is saved in this folder
AUTHOR
Thomas Perl <thp@thpinfo.com>
WEBSITE
http://thpinfo.com/2008/urlwatch/
urlwatch 1.11 July 2010 URLWATCH(1)