12-01-2009
Quote:
Originally Posted by
L0rd
KenJackson's post brings just 5 lines of code, sorry, where are the links?
I didn't give a complete solution, that's why I called it
a starter and referenced the looping.
I am awed by the power of sed. I routinely use its regular expression capability, but I rarely use the hold buffer or the looping commands. It has been my goal for some time to become skilful at using these. Your question would have been the perfect opportunity for me to dig in and come up with a solution that demonstrates that power. But I flat-out did not have the time then, and it looks like you have a solution now that you find satisfying. I'll work on it offline.
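For reference, the two features mentioned above can each be shown in a short one-liner. These are minimal sketches assuming GNU sed:

```shell
# Hold space: print input lines in reverse order (a "tac" in sed).
# 1!G appends the hold space on every line but the first, h saves the
# pattern space to the hold space, and $p prints only at the last line.
printf '1\n2\n3\n' | sed -n '1!G;h;$p'
# prints:
# 3
# 2
# 1

# Looping: a label (:a), N to append the next input line, and a branch
# ($!ba) slurp the whole input so embedded newlines can be replaced.
printf 'a\nb\nc\n' | sed ':a;N;$!ba;s/\n/,/g'
# prints: a,b,c
```

The second form is the usual way to make sed operate across line boundaries, since a plain `s///` never sees the newlines between input lines.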
Quote:
Originally Posted by
L0rd
and Scrutinizer's solution is the best presented here. It's simple and it works.
Yeah, I've noticed Scrutinizer writes good, straightforward code. Stick with him.
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I have an html file called myfile. If I simply run "cat myfile.html" in UNIX, it shows all the html tags like <a href=r/26><img src="http://www>. But I want to extract only the text part.
The same problem happens with the "type" command in MS-DOS.
I know you can do it by opening it in Internet Explorer,... (4 Replies)
Discussion started by: los111
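For quick-and-dirty tag stripping, a single sed substitution is the usual sketch. It assumes tags do not span lines and leaves HTML entities alone:

```shell
# Delete everything between < and > on each line, keeping the text.
printf '<p>Hello <a href="r/26">world</a></p>\n' | sed 's/<[^>]*>//g'
# prints: Hello world
```

A real HTML-to-text conversion (handling multi-line tags, scripts, entities) is better done with a dedicated tool such as lynx -dump or w3m -dump.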
2. UNIX for Advanced & Expert Users
Hiya,
I am trying to extract a news article from a web page. The sed I have written brings back a lot of Javascript code and sometimes advertisements too. Can anyone please help with this one? I need to fix this sed so it picks up the article ONLY (don't worry about the title or date .. i got... (2 Replies)
Discussion started by: stargazerr
3. Shell Programming and Scripting
Hi All,
I'm trying to extract some floating point numbers from within some HTML code like this:
<TR><TD class='awrc'>Parse CPU to Parse Elapsd %:</TD><TD ALIGN='right' class='awrc'> 64.50</TD><TD class='awrc'>% Non-Parse CPU:</TD><TD ALIGN='right' class='awrc'> ... (2 Replies)
Discussion started by: pondlife
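Pulling the numbers out of cells like these is a natural fit for grep rather than sed. A sketch, using a fragment modeled on the cells quoted above:

```shell
# grep -o prints only the matched part, one match per line; -E enables
# extended regular expressions.
echo "<TD class='awrc'> 64.50</TD><TD class='awrc'> 12.25</TD>" |
    grep -oE '[0-9]+\.[0-9]+'
# prints:
# 64.50
# 12.25
```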
4. Shell Programming and Scripting
I am attempting to extract weather data from the following website, but for the Victoria area only:
Text Forecasts - Environment Canada
I use this:
sed -n "/Greater Victoria./,/Fraser Valley./p"
But that phrasing sometimes does not get it all, and I think perhaps the website has more... (2 Replies)
Discussion started by: lagagnon
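That address-range form prints everything from the first line matching the start pattern through the next line matching the end pattern, inclusive. If either heading changes on the site, the range silently prints nothing or runs to end of input, which would explain the intermittently missing output. A minimal sketch with stand-in forecast lines:

```shell
# sed -n '/START/,/END/p' prints the inclusive range between two matches.
printf 'other\nGreater Victoria.\nSunny today.\nFraser Valley.\nrest\n' |
    sed -n '/Greater Victoria./,/Fraser Valley./p'
# prints:
# Greater Victoria.
# Sunny today.
# Fraser Valley.
```

Note that the unescaped "." in those patterns matches any character, so the match is slightly looser than it looks.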
5. Shell Programming and Scripting
Hi,
I basically need to get a list of all the tarballs located at a URI.
I am currently doing a wget on the URI to get the index.html page.
Now this index page contains the list of URIs that I want to use in my bash script.
Can someone please guide me?
I am new to Linux and shell scripting.
... (5 Replies)
Discussion started by: mnanavati
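One way to sketch the list-extraction step, assuming the index page's links look like <a href="name.tar.gz"> (in a real script the html variable would come from something like wget -qO- "$uri"):

```shell
# grep -o keeps only the matching tarball names from the index page.
html='<a href="foo-1.0.tar.gz">foo</a> <a href="notes.txt">n</a> <a href="bar-2.1.tar.gz">bar</a>'
echo "$html" | grep -oE '[^"/]+\.tar\.gz'
# prints:
# foo-1.0.tar.gz
# bar-2.1.tar.gz
```

The [^"/]+ part stops the match at the surrounding quote or any path separator, so only the bare filename is captured.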
6. Shell Programming and Scripting
Hi everyone. I have an html file with lines like so:
link href="localFolder/...">
link href="htp://...">
img src="localFolder/...">
img src="htp://...">
I want to remove the links with http in the href and imgs with http in their src. I'm having trouble removing them because there... (4 Replies)
Discussion started by: CowCow339
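The pasted lines show htp://, presumably defanged; assuming real http URLs and one element per line as above, a sketch that keeps the local references and deletes the remote ones:

```shell
# Delete any line whose href or src value starts with http.
printf 'link href="localFolder/a.css">\nlink href="http://x/a.css">\nimg src="localFolder/b.png">\nimg src="http://x/b.png">\n' |
    sed -E '/(href|src)="http/d'
# prints:
# link href="localFolder/a.css">
# img src="localFolder/b.png">
```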
7. Shell Programming and Scripting
Hi
I've searched for it for a few hours now and I can't seem to find anything working like I want. I've got a webpage, saved in a file named par, with a form like this:
<html><body><form name='sendme' action='http://example.com/' method='POST'>
<textarea name='1st'>abc123def678</textarea>
<textarea... (9 Replies)
Discussion started by: seb001
8. Shell Programming and Scripting
Does anybody know how to remove all urls from html files?
all urls are links with anchor texts in the form of
<a href="http://www.anydomain.com">ANCHOR</a>
they may start with www or not.
The goal is to delete all urls and keep the ANCHOR text, and if possible to change the tags around the anchor to... (2 Replies)
Discussion started by: georgi58
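A sketch of the unwrap step: replace each anchor element with its text, which works whether or not the URL starts with www, since the pattern never looks at the URL itself:

```shell
# Capture the anchor text and substitute it for the whole <a>...</a>.
echo 'See <a href="http://www.anydomain.com">ANCHOR</a> here' |
    sed -E 's/<a [^>]*>([^<]*)<\/a>/\1/g'
# prints: See ANCHOR here
```

Wrapping the anchor text in different tags is then just a matter of changing the replacement, e.g. `<b>\1<\/b>`.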
9. Shell Programming and Scripting
I have done a fair amount of searching the threads, but I have not been able to cobble together a solution to my challenge. What I am trying to do is to line edit a file that will leave behind only the domain and tld of a long list of urls. The list looks something like this:
www.google.com... (3 Replies)
Discussion started by: chamb1
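If "domain and tld" means the last two dot-separated labels, awk makes this a one-liner. Note this sketch gets two-level public suffixes like .co.uk wrong, so it is only a starting point:

```shell
# With "." as the field separator, the domain is the second-to-last
# field and the tld is the last one.
printf 'www.google.com\nmail.sub.example.org\n' | awk -F. '{print $(NF-1)"."$NF}'
# prints:
# google.com
# example.org
```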
10. Shell Programming and Scripting
I'm extracting text between table tags in HTML
<th><a href="/wiki/Buick_LeSabre" title="Buick LeSabre">Buick LeSabre</a></th>
using this:
awk -F "</*th>" '/<\/*th>/ {print $2}' auto2 > auto3
then this (text between a href):
sed -e 's/\(<*>\)//g' auto3 > auto4
How to shorten this into one... (8 Replies)
Discussion started by: p1ne
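The two passes can be collapsed into one sed that matches the whole table row and prints only the captured link text. A sketch, assuming each row sits on one line as in the sample:

```shell
# One substitution does both jobs: anchor on <th><a ...> and </a></th>,
# capture the text between them, print only when the match succeeds.
echo '<th><a href="/wiki/Buick_LeSabre" title="Buick LeSabre">Buick LeSabre</a></th>' |
    sed -nE 's/.*<th><a [^>]*>([^<]*)<\/a><\/th>.*/\1/p'
# prints: Buick LeSabre
```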