How to extract url from html page?


 
# 1  
Old 10-16-2010
How to extract url from html page?

For example, I have an HTML file containing:
Code:
<a href="http://awebsite"  id="awebsite" class="first">website</a>

and sometimes a line contains more than one link, for example:
Code:
<a href="http://awebsite"  id="awebsite" class="first">website</a><a href="http://bwebsite"  id="bwebsite" class="first">websiteb</a>

How can I extract the links so the output looks like this?
Code:
http://awebsite website
http://bwebsite websiteb

I only know how to get the text between <a> and </a> using
Code:
sed -e 's/<[^<]*>//g'

but I don't know how to get the link.
Thanks.
# 2  
Old 10-16-2010
http://hpricot.com/
Code:
#!/usr/bin/env ruby -Ku

require 'hpricot'
doc = open("file"){|f|Hpricot(f)}
(doc/"a").each do |x|
  print "-->#{x.get_attribute('href')}, #{x.inner_text}\n"
end

Code:
$ ruby geturl.rb
-->http://awebsite, website
-->http://bwebsite, websiteb

# 3  
Old 10-16-2010
Quote:
Originally Posted by kurumi
http://hpricot.com/
Code:
#!/usr/bin/env ruby -Ku

require 'hpricot'
doc = open("file"){|f|Hpricot(f)}
(doc/"a").each do |x|
  print "-->#{x.get_attribute('href')}, #{x.inner_text}\n"
end

Code:
$ ruby geturl.rb
-->http://awebsite, website
-->http://bwebsite, websiteb

Thanks for answering, but I don't really know what hpricot is or how to use it.
I'm new to shell programming and I don't know Ruby at all.

Do you know of a solution using sed, awk, or grep, maybe?
Thanks.
# 4  
Old 10-16-2010
Try...
Code:
awk -F'href="|"  |">|</' '{for(i=2;i<=NF;i=i+4) print $i,$(i+2)}' infile

# 5  
Old 10-16-2010
Quote:
Originally Posted by malcomex999
Code:
awk -F'href="|"  |">|</' '{for(i=2;i<=NF;i=i+4) print $i,$(i+2)}' infile

Code:
$ cat file
<a href="http://awebsite"  id="awebsite" class="first" someattribute="last" > website</a>
<a href="http://bwebsite"  id="bwebsite" class="first">websiteb</a>

$ awk -F'href="|"  |">|</' '{for(i=2;i<=NF;i=i+4) print $i,$(i+2)}' file
http://awebsite a>
http://bwebsite websiteb

$ ruby test.rb
-->http://awebsite,  website
-->http://bwebsite, websiteb
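For what it's worth, the field-separator trick depends on the exact attribute layout (note the two-space `"  "` separator), which is why the extra attribute above throws it off. A sed sketch that tolerates arbitrary attributes, assuming hrefs are double-quoted and anchors aren't nested (the input file name here is just an example):

```shell
# Sample input, including a second anchor on the same line
cat > infile <<'EOF'
<a href="http://awebsite"  id="awebsite" class="first">website</a><a href="http://bwebsite"  id="bwebsite" class="first">websiteb</a>
EOF

# 1) break the line so each anchor sits on its own line (GNU sed: \n in replacement)
# 2) pull out the href value and the inner text, ignoring any other attributes
sed 's/<a /\n<a /g' infile |
sed -n 's/.*<a [^>]*href="\([^"]*\)"[^>]*>\([^<]*\)<\/a>.*/\1 \2/p'
```

On the sample above this prints `http://awebsite website` and `http://bwebsite websiteb`. Note that `\n` in the replacement is a GNU sed feature; on BSD sed you'd need a literal newline.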

# 6  
Old 10-16-2010
Code:
grep -o 'http://[^"]*'

# 7  
Old 10-16-2010
Quote:
Originally Posted by Scrutinizer
Code:
grep -o 'http://[^"]*'

The OP also needs the inner text, not just the URLs.
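That said, `grep -o` can be combined with sed to keep the inner text as well: match each complete anchor first, then split it into URL and text. A sketch assuming double-quoted hrefs and no nested tags (the input file name is just an example):

```shell
# Sample input with two anchors on one line
cat > infile <<'EOF'
<a href="http://awebsite"  id="awebsite" class="first">website</a><a href="http://bwebsite"  id="bwebsite" class="first">websiteb</a>
EOF

# grep -o prints each complete <a ...>...</a> match on its own line;
# sed then reduces each anchor to "url inner-text"
grep -o '<a [^>]*>[^<]*</a>' infile |
sed 's/.*href="\([^"]*\)"[^>]*>\([^<]*\)<\/a>/\1 \2/'
```

On the sample above this prints `http://awebsite website` and `http://bwebsite websiteb`.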