I have an HTML file called myfile.html. If I simply run "cat myfile.html" in UNIX, it shows all the HTML tags, like <a href=r/26><img src="http://www>. But I want to extract only the text part.
The same problem happens with the "type" command in MS-DOS.
I know you can do it by opening it in Internet Explorer,... (4 Replies)
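A crude but common trick for this is to strip anything between < and > with sed. This is only a sketch with a sample line inlined in place of myfile.html; it assumes no tag spans a line break, and a real renderer like lynx -dump handles messy pages better:

```shell
# Remove everything between < and > on each line; naive, but fine for simple pages.
printf '<p>Hello <b>world</b></p>\n' |
sed -e 's/<[^>]*>//g'
# prints: Hello world
```

For a file, the same substitution runs as sed -e 's/<[^>]*>//g' myfile.html.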
Hiya,
I am trying to extract a news article from a web page. The sed I have written brings back a lot of JavaScript code and sometimes advertisements too. Can anyone please help with this one??? I need to fix this sed so it picks up the article ONLY (don't worry about the title or date .. i got... (2 Replies)
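For the JavaScript problem specifically, one approach is to delete whole <script>…</script> blocks before stripping the remaining tags. This is a sketch with made-up input standing in for the article page; it assumes the script tags sit on lines of their own:

```shell
# Drop everything from a <script line to the matching </script> line,
# then strip the remaining markup.
printf '<p>Story text.</p>\n<script>\nvar ad=1;\n</script>\n<p>More text.</p>\n' |
sed '/<script/,/<\/script>/d' |
sed 's/<[^>]*>//g'
# prints: Story text.
#         More text.
```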
Hi All,
I'm trying to extract some floating-point numbers from within some HTML code like this:
<TR><TD class='awrc'>Parse CPU to Parse Elapsd %:</TD><TD ALIGN='right' class='awrc'> 64.50</TD><TD class='awrc'>% Non-Parse CPU:</TD><TD ALIGN='right' class='awrc'> ... (2 Replies)
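For cells like the ones above, grep -o can pull out just the numbers. A sketch with one of the sample cells inlined; the pattern assumes plain digits.digits values:

```shell
# -o prints only the part of each line that matches the pattern.
printf "<TD ALIGN='right' class='awrc'> 64.50</TD>\n" |
grep -o '[0-9][0-9]*\.[0-9][0-9]*'
# prints: 64.50
```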
I am attempting to extract weather data from the following website, but for the Victoria area only:
Text Forecasts - Environment Canada
I use this:
sed -n "/Greater Victoria./,/Fraser Valley./p"
But that phrasing sometimes does not get it all, and I think perhaps the website has more... (2 Replies)
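An awk flag can be less fragile than the sed range here, since it clears the flag the moment the end marker appears and never prints that marker. A sketch with made-up forecast lines standing in for the real Environment Canada page:

```shell
# p=1 from the Victoria header onward, p=0 at the Fraser Valley header;
# the bare "p" pattern prints only while the flag is set.
printf 'Intro\nGreater Victoria.\nSunny.\nFraser Valley.\nRain.\n' |
awk '/Fraser Valley\./{p=0} /Greater Victoria\./{p=1} p'
# prints: Greater Victoria.
#         Sunny.
```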
Hi,
I need to basically get a list of all the tarballs located at uri
I am currently doing a wget on uri to get the index.html page.
Now this index page contains the list of uris that I want to use in my bash script.
Can someone please guide me?
I am new to Linux and shell scripting.
... (5 Replies)
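To turn the fetched index.html into a plain list of tarball names, grep -o plus sed is one route. A sketch with two made-up anchor lines in place of the real index page; it assumes double-quoted hrefs ending in .tar.gz:

```shell
# Pull out href="...tar.gz" attributes, then peel off the href=" wrapper.
printf '<a href="foo-1.0.tar.gz">foo</a>\n<a href="notes.html">notes</a>\n' |
grep -o 'href="[^"]*\.tar\.gz"' |
sed 's/^href="//; s/"$//'
# prints: foo-1.0.tar.gz
```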
Hi everyone. I have an html file with lines like so:
link href="localFolder/...">
link href="htp://...">
img src="localFolder/...">
img src="htp://...">
I want to remove the links with http in the href and imgs with http in their src. I'm having trouble removing them because there... (4 Replies)
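If the goal is simply to drop those lines entirely, sed's d command on two addresses keeps the local ones. A sketch using http in place of the htp shown in the post:

```shell
# Delete any line whose href or src starts with http; local paths survive.
printf 'link href="localFolder/a.css">\nlink href="http://x/a.css">\nimg src="localFolder/b.png">\nimg src="http://x/b.png">\n' |
sed '/href="http/d; /src="http/d'
# prints: link href="localFolder/a.css">
#         img src="localFolder/b.png">
```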
Hi
I've searched for it for a few hours now and I can't seem to find anything that works the way I want. I've got a web page, saved in a file called par, with a form like this:
<html><body><form name='sendme' action='http://example.com/' method='POST'>
<textarea name='1st'>abc123def678</textarea>
<textarea... (9 Replies)
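When the opening and closing textarea tags sit on the same line, a sed capture group can pull out the value between them. A sketch using the first textarea from the post; multi-line textareas would need a different approach:

```shell
# \(.*\) captures whatever sits between the tags; -n plus /p prints only matches.
printf "<textarea name='1st'>abc123def678</textarea>\n" |
sed -n 's/.*<textarea[^>]*>\(.*\)<\/textarea>.*/\1/p'
# prints: abc123def678
```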
Does anybody know how to remove all URLs from HTML files?
All the URLs are links with anchor text in the form of
<a href="http://www.anydomain.com">ANCHOR</a>
They may or may not start with www.
The goal is to delete all URLs and keep the ANCHOR text, and if possible to change the tags around the anchor to... (2 Replies)
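A sed capture group can replace each anchor element with just its anchor text. A sketch using the example link above:

```shell
# Match <a ...>TEXT</a> and keep only the captured TEXT.
printf 'See <a href="http://www.anydomain.com">ANCHOR</a> here.\n' |
sed 's/<a [^>]*>\([^<]*\)<\/a>/\1/g'
# prints: See ANCHOR here.
```

To wrap the anchor text in other tags instead of deleting everything, put those tags around \1 in the replacement.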
I have done a fair amount of searching the threads, but I have not been able to cobble together a solution to my challenge. What I am trying to do is line-edit a file so that only the domain and TLD of each URL in a long list remain. The list looks something like this:
www.google.com... (3 Replies)
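If "domain and tld" means the last two dot-separated labels, awk does the line editing in one pass. A sketch with made-up hostnames; note it is naive for two-part TLDs like .co.uk:

```shell
# Split on dots; NF is the field count, so $(NF-1)"."$NF is the last two labels.
printf 'www.google.com\nmail.example.org\n' |
awk -F. '{print $(NF-1)"."$NF}'
# prints: google.com
#         example.org
```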
I'm extracting text between table tags in HTML
<th><a href="/wiki/Buick_LeSabre" title="Buick LeSabre">Buick LeSabre</a></th>
using this:
awk -F "</*th>" '/<\/*th>/ {print $2}' auto2 > auto3
then this (text between a href):
sed -e 's/\(<*>\)//g' auto3 > auto4
How to shorten this into one... (8 Replies)
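The two passes can collapse into a single sed that anchors on the th tags and captures the anchor text. A sketch using the Buick LeSabre line from the post:

```shell
# One substitution: match the whole <th><a ...>TEXT</a></th> cell, keep TEXT.
printf '<th><a href="/wiki/Buick_LeSabre" title="Buick LeSabre">Buick LeSabre</a></th>\n' |
sed -n 's/.*<th><a [^>]*>\([^<]*\)<\/a><\/th>.*/\1/p'
# prints: Buick LeSabre
```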
Discussion started by: p1ne
LEARN ABOUT DEBIAN
urlwatch
URLWATCH(1) User Commands URLWATCH(1)
NAME
urlwatch - Watch web pages and arbitrary URLs for changes
SYNOPSIS
urlwatch [options]
DESCRIPTION
urlwatch watches a list of URLs for changes and prints out unified diffs of the changes. You can filter always-changing parts of websites
by providing a "hooks.py" script.
OPTIONS
--version
show program's version number and exit
-h, --help
show the help message and exit
-v, --verbose
Show debug/log output
--urls=FILE
Read URLs from the specified file
--hooks=FILE
Use specified file as hooks.py module
-e, --display-errors
Include HTTP errors (404, etc.) in the output
ADVANCED FEATURES
urlwatch includes some advanced features that you have to activate by creating a hooks.py file that specifies for which URLs to use a specific feature. You can also use the hooks.py file to filter trivially-varying elements of a web page.
ICALENDAR FILE PARSING
This module allows you to parse .ics files that are in iCalendar format and provide a very simplified text-based format for the diffs. Use
it like this in your hooks.py file:
from urlwatch import ical2txt

def filter(url, data):
    if url.endswith('.ics'):
        return ical2txt.ical2text(data).encode('utf-8') + data
    # ...you can add more hooks here...
HTML TO TEXT CONVERSION
There are three methods of converting HTML to text in the current version of urlwatch: "lynx" (default), "html2text" and "re". The first two use command-line utilities of the same name to convert HTML to text, and the last one uses a simple regex-based tag-stripping method (needs no extra tools). Here is an example of using it in your hooks.py file:
from urlwatch import html2txt

def filter(url, data):
    if url.endswith('.html') or url.endswith('.htm'):
        return html2txt.html2text(data, method='lynx')
    # ...you can add more hooks here...
FILES
~/.urlwatch/urls.txt
A list of HTTP/FTP URLs to watch (one URL per line)
~/.urlwatch/lib/hooks.py
A Python module that can be used to filter contents
~/.urlwatch/cache/
The state of web pages is saved in this folder
AUTHOR
Thomas Perl <thp@thpinfo.com>
WEBSITE
http://thpinfo.com/2008/urlwatch/
urlwatch 1.11 July 2010 URLWATCH(1)