Learning scrapers, webcrawlers, search engines and CURL


 
# 1  
Old 06-22-2018
Learning scrapers, webcrawlers, search engines and CURL

All,

I'm trying to learn scrapers, web crawlers, search engines, and cURL. I've chosen to interrogate the
following sites:

  • Manta,
  • SuperPages,
  • Yellow Book,
  • Yellow Pages.


These sites list organizations/businesses by search type/category, so they are effective for
finding potential clients.

Since I only run Linux, I have the following questions as I consider my approaches:

  • Text-only vs. regular browser: which is best?
  • wget vs. PHP fopen vs. cURL: which is best?
  • HTML tag find/parse: are there libraries that effectively do this?
  • HTML tag find/parse: is regex the best way to parse these? Where are examples?
  • Checking for the new meta tags of:

Code:
<meta name="category" content="">
<meta name="subcategory" content="">

where the content attribute holds the category or subcategory name.

EX:
Code:
<meta name="category" content="technology">
<meta name="subcategory" content="retail">
<meta name="subcategory" content="electronics">
<meta name="subcategory" content="computers">
<meta name="subcategory" content="laptops">
<meta name="subcategory" content="20+ inch screen">

or

Code:
<meta name="category" content="technology">
<meta name="subcategory" content="manufacturer">
<meta name="subcategory" content="electronics">
<meta name="subcategory" content="computers">
<meta name="subcategory" content="laptops">
<meta name="subcategory" content="20+ inch screen">

I'm processing these into a temp DB, but the final repository for the data will be
both SugarCRM CE and PHPList.

The information available differs from site to site, which gets very confusing: many businesses
show up on all of these, but some listings have a website and email while others do not.
So I'll be updating existing records with blank fields whenever the missing info is found on
one of the other sites.
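That fill-in-the-blanks update can be sketched as a single SQL statement: keep the existing value when it is non-empty, otherwise take the newly scraped one. The table and column names here (`businesses`, `website`, `email`) are hypothetical, not the actual SugarCRM or PHPList schema.

```php
<?php
// Merge a newly scraped listing into an existing record, filling only
// fields that are currently blank. COALESCE(NULLIF(col, ''), :new) keeps
// the stored value unless it is empty, in which case the new value wins.
function merge_listing(PDO $db, int $id, array $scraped): void
{
    $stmt = $db->prepare(
        "UPDATE businesses
            SET website = COALESCE(NULLIF(website, ''), :website),
                email   = COALESCE(NULLIF(email, ''), :email)
          WHERE id = :id"
    );
    $stmt->execute([
        ':website' => $scraped['website'] ?? '',
        ':email'   => $scraped['email'] ?? '',
        ':id'      => $id,
    ]);
}
```

With this pattern, a listing scraped from SuperPages can fill in a website that the Manta record lacked, without ever overwriting data that is already present.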

I'm trying to avoid the time-consuming process of calling each business, though the level of
detail I need for the app I'm writing may still require the calls. I'll see where
I'm at once the initial automation is finished.

I've looked at all the HOWTOs and they're like Greek to me, so I'm not comprehending what I'm
reading. I'm going to need some help with the comprehension before it all registers and
I come to a basic understanding of both the terms and techniques, so sorry to bother you
with a NEWBIE-level understanding of this!

Cheers!

OMR/TBNK

Last edited by TBotNik; 06-22-2018 at 08:18 PM..
# 2  
Old 06-23-2018
Quote:
Originally Posted by TBotNik
  • Text-only vs. regular browser: which is best?
  • wget vs. PHP fopen vs. cURL: which is best?
  • HTML tag find/parse: are there libraries that effectively do this?
  • HTML tag find/parse: is regex the best way to parse these? Where are examples?
  • Checking for the new meta tags of:
I think you are better off getting the web page content using PHP scripts and parsing the files with regex.

If you Google around, I am sure you can find many sample PHP scripts that do most of what you want. This is very old technology and there is no need to reinvent the wheel parsing HTML data.
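A minimal sketch of that regex approach, matching the category/subcategory tags shown in the first post (the pattern assumes double-quoted attributes with name before content, as in those examples; a DOM parser is safer for arbitrary real-world HTML, and `meta_pairs` is an illustrative name):

```php
<?php
// Extract [name, content] pairs from category/subcategory meta tags.
// Assumes double-quoted attributes in name-then-content order, as in the
// examples in the first post.
function meta_pairs(string $html): array
{
    preg_match_all(
        '/<meta\s+name="(category|subcategory)"\s+content="([^"]*)"\s*\/?>/i',
        $html, $matches, PREG_SET_ORDER
    );
    $pairs = [];
    foreach ($matches as $m) {
        $pairs[] = [strtolower($m[1]), $m[2]];   // e.g. ['subcategory', 'laptops']
    }
    return $pairs;
}
```

So `meta_pairs('<meta name="subcategory" content="laptops">')` returns `[['subcategory', 'laptops']]`, which can then be written into the temp DB.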
# 3  
Old 06-25-2018
Thanks Neo!

Quote:
Originally Posted by Neo
I think you are better off getting the web page content using PHP scripts and parsing the files with regex.

If you Google around, I am sure you can find many sample PHP scripts that do most of what you want. This is very old technology and there is no need to reinvent the wheel parsing HTML data.
Neo, as I stated, I'm still struggling with the terminology and concepts, so patience please; I'm a total newbie at this technology. That's why I'm asking questions: I don't even know where to focus, right now, to accomplish this.
Cheers!
OMR/TBNK
# 4  
Old 09-05-2018
Neo,


Think you're right about not reinventing the wheel! Looking for code now!


Cheers!


PS


Did you get my email?