Python Web Page Scraping Urls Creating A Dictionary


 
# 1  
Old 06-06-2017

I have thrown in the towel and can't figure out how to do this. I have a directory of HTML files that contain URLs I need to scrape (loop through) and add to a dictionary. An example of the output I would like is:
Code:
bigbadwolf.html: https://www.blah.com, http://www.blahblah.com, http://www.blahblahblah.com
maryhadalittlelamb.html: http://www.red.com, https://www.redyellow.com, http://www.zigzag.com
time.html: https://www.est.com, http://www.pst.com, https://www.cst.com

My code that I have so far is:
Code:
import os
from bs4 import BeautifulSoup

for subdir, dirs, files in os.walk('./html/tutorials/blah'):
    for tut in files:
        if tut.endswith(".html"):
            # join against subdir so files in nested directories resolve correctly
            fpath = os.path.join(subdir, tut)
            content = open(fpath, "r").read()
            file = BeautifulSoup(content, 'lxml')
            for links in file.find_all('a'):
                urls = links.get('href')
                print "HTML Files: {}\nUrls: {}\n".format(tut, urls)

produces the correct output for the most part:
Code:
HTML Files: bigbadwolf.html
Urls: https://www.blah.com

HTML Files: bigbadwolf.html
Urls: https://www.blahblah.com

HTML Files: bigbadwolf.html
Urls: https://www.blahblahblah.com

HTML Files: maryhadalittlelamb.html
Urls: http://www.red.com

HTML Files: maryhadalittlelamb.html
Urls: https://www.redyellow.com

HTML Files: maryhadalittlelamb.html
Urls: http://www.zigzag.com

but I want it in a dictionary with this format:
Code:
bigbadwolf.html: https://www.blah.com, http://www.blahblah.com, http://www.blahblahblah.com
maryhadalittlelamb.html: http://www.red.com, https://www.redyellow.com, http://www.zigzag.com
time.html: https://www.est.com, http://www.pst.com, https://www.cst.com

As you can see, there will be several URLs inside an HTML doc, so a single key may contain many values (URLs). I tried many variants of the code below but can't get a single key to hold multiple URLs.
Code:
import os
from bs4 import BeautifulSoup

tut_links = {}
for subdir, dirs, files in os.walk('./html/tutorials/blah'):
    for tut in files:
        if tut.endswith(".html"):
            fpath = os.path.join(subdir, tut)
            content = open(fpath, "r").read()
            file = BeautifulSoup(content, 'lxml')
            for links in file.find_all('a'):
                urls = links.get('href')
                tut_links[tut] = urls

produces:
Code:
bigbadwolf.html: https://www.blah.com
maryhadalittlelamb.html: http://www.red.com
time.html: https://www.est.com
...
...
...

Can someone please shed some light on what I am trying to do?

Last edited by metallica1973; 06-06-2017 at 01:42 PM..
# 2  
Old 06-06-2017
We don't get many Python questions here, sorry.

PHP, Perl, and all the standard UNIX and Linux shell programming languages, as well as C and C++ questions; but not many Python questions.

I'm not a Python programmer, but perhaps someone else here is and can help you?
# 3  
Old 06-06-2017
Quote:
Originally Posted by metallica1973
...
...
but I want it in a dictionary with this format:
Code:
bigbadwolf.html: https://www.blah.com, http://www.blahblah.com, http://www.blahblahblah.com
maryhadalittlelamb.html: http://www.red.com, https://www.redyellow.com, http://www.zigzag.com
time.html: https://www.est.com, http://www.pst.com, https://www.cst.com

As you can see, there will be several URLs inside an HTML doc, so a single key may contain many values (URLs).
...
...
Is each value of the dictionary:
(a) a list (or array) of URLs? or
(b) a comma-delimited string of URLs?

If you want (a), then try something like the following:

Code:
import os
from bs4 import BeautifulSoup

tut_links = {}
for subdir, dirs, files in os.walk('./html/tutorials/blah'):
    for tut in files:
        if tut.endswith(".html"):
            tut_links[tut] = []
            fpath = os.path.join("./html/tutorials/blah", tut)
            content = open(fpath, "r").read()
            file = BeautifulSoup(content, 'lxml')
            for links in file.find_all('a'):
                urls = links.get('href')
                tut_links[tut].append(urls)

Disclaimer: Completely untested; I don't have the module at the moment.
# 4  
Old 06-06-2017
Thanks for all the replies

Quote:
Is each value of the dictionary:
(a) a list (or array) of URLs? or
(b) a comma-delimited string of URLs?
It is a comma-delimited string of URLs.
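Since a comma-delimited string is the end goal, the list values built by the approach above can simply be joined afterwards. An untested sketch; the contents of tut_links here are a hypothetical example of the structure produced by the loop:

```python
# Sketch: converting a dict of URL lists into comma-delimited strings.
# tut_links is a hypothetical stand-in for the structure built above.
tut_links = {
    "bigbadwolf.html": ["https://www.blah.com", "http://www.blahblah.com"],
}

joined = {name: ", ".join(urls) for name, urls in tut_links.items()}
print(joined["bigbadwolf.html"])
# https://www.blah.com, http://www.blahblah.com
```

Keeping the values as lists until the very end makes de-duplication and filtering easier; the join is a one-liner when the string form is finally needed.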

---------- Post updated at 02:30 PM ---------- Previous update was at 02:05 PM ----------

Thanks durden_tyler,

I tested your list approach, which I had looked at before but hadn't pursued, and it worked. Awesome, many thanks.

---------- Post updated at 02:51 PM ---------- Previous update was at 02:30 PM ----------

Would you happen to know how to delete duplicate entries inside of this embedded list?
Code:
'bigbadwolf.html':

['https://www.blah.com',
 'https://www.blah.com',
 'https://www.blah.com',
 'http://www.blahblah.com',
 'http://www.blahblah.com']

# 5  
Old 06-06-2017
Quote:
Originally Posted by metallica1973
...
...
Would you happen to know how to delete duplicate entries inside of this embedded list?
...
...
Do not add a duplicate entry in the first place:

Code:
import os
from bs4 import BeautifulSoup

tut_links = {}
for subdir, dirs, files in os.walk('./html/tutorials/blah'):
    for tut in files:
        if tut.endswith(".html"):
            tut_links[tut] = []
            fpath = os.path.join("./html/tutorials/blah", tut)
            content = open(fpath, "r").read()
            file = BeautifulSoup(content, 'lxml')
            for links in file.find_all('a'):
                urls = links.get('href')
                if urls not in tut_links[tut]:
                    tut_links[tut].append(urls)
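The `urls not in tut_links[tut]` membership test scans the whole list on every link, which is fine for small pages. If the files are large, a set can back the membership test while the list preserves first-appearance order. An untested sketch with hypothetical URLs:

```python
# Sketch: order-preserving de-duplication using a set for O(1) membership tests.
# urls_found is a hypothetical list standing in for the hrefs collected above.
urls_found = [
    "https://www.blah.com",
    "https://www.blah.com",
    "http://www.blahblah.com",
    "https://www.blah.com",
]

seen = set()
unique = []
for u in urls_found:
    if u not in seen:
        seen.add(u)
        unique.append(u)

print(unique)
# ['https://www.blah.com', 'http://www.blahblah.com']
```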

# 6  
Old 06-07-2017
I cross-referenced the HTML files with the output URLs and it's correct. Most of the HTML files do contain multiple duplicate URLs, as in:

http://www.blah.org
http://www.blah.org
http://www.blah.org

So I would need to remove the duplicates. I have done this before using:
Code:
tut_links=list(set(tut_links))

Let me give this a shot and see what happens. Thanks for all the help. I will let you know how it goes.

---------- Post updated 06-07-17 at 02:54 PM ---------- Previous update was 06-06-17 at 04:03 PM ----------

Thanks for all the help. Here is the finished code:
Code:
import os
from bs4 import BeautifulSoup

tut_links = {}

for subdir, dirs, files in os.walk('./html/tutorials/blah'):
    for tut in files:
        if tut.endswith(".html"):
            tut_links[tut] = []
            fpath = os.path.join(subdir, tut)
            content = open(fpath, "r").read()
            file = BeautifulSoup(content, 'lxml')
            for links in file.find_all('a', href=True):
                urls = links.get('href')
                # startswith() with a tuple matches either scheme;
                # 'http' or 'https' would evaluate to just 'http'
                if urls.startswith(('http://', 'https://')):
                    tut_links[tut].append(urls)

# remove duplicate urls from each dictionary value list
for dup in tut_links.values():
    dup[:] = list(set(dup))

Worked like a champ
Code:
'bigbadwolf.html' : ['https://www.blah.com', 'http://www.blahblah.com', 'http://www.blahblahblah.com']
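For what it's worth, collections.defaultdict(list) can remove the need to initialize each key with an empty list by hand. A small untested sketch; the hrefs list is a hypothetical stand-in for the BeautifulSoup loop:

```python
from collections import defaultdict

# Sketch: defaultdict(list) creates the empty list on first access,
# so an explicit tut_links[tut] = [] line is no longer needed.
# hrefs is a hypothetical stand-in for the parsed <a href> values.
tut_links = defaultdict(list)
hrefs = ["https://www.blah.com", "http://www.blahblah.com", "https://www.blah.com"]

for url in hrefs:
    if url not in tut_links["bigbadwolf.html"]:
        tut_links["bigbadwolf.html"].append(url)

print(dict(tut_links))
# {'bigbadwolf.html': ['https://www.blah.com', 'http://www.blahblah.com']}
```

Checking membership before appending, as above, also avoids the separate de-duplication pass and keeps the URLs in first-appearance order, whereas list(set(...)) does not guarantee any ordering.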


Last edited by metallica1973; 06-07-2017 at 04:37 PM..