Performing extractions on web source page


 
# 1  
Old 08-28-2011

I downloaded a web page's source to a file. I then egrep a single word to pull the line containing it into a second file. Next I cat the second file through sed to remove everything before one word and everything after a second word, so as to capture the phrase in between.

This did not work. I used vi to confirm that the two search words really do exist in the extracted (second) file. As a test, I inserted XXXX and YYYY into the second file and used those as the sed search terms instead of the words I originally searched on. That works.

Can someone explain or suggest what might be in the extracted file (#2) that is interfering with my search? Here is how I did it:
Code:
egrep "seconds" /tmp/wget_saved_file > /tmp/tstfil

[used vi to insert XXXX & YYYY into tstfil]
Code:
cat /tmp/tstfil | sed  's/.*XXXX//;s/YYYY.*$//'   #<<--works

Is there perhaps something about text contained in a source file that prevents it from being used as search terms?

Last edited by pludi; 08-28-2011 at 06:51 PM..
# 2  
Old 08-28-2011
It would be helpful to see the original line that you extract with grep, as well as your original sed command.

I can only guess, but it sounds like there might be two instances of one of the words. Consider this line of text:

Code:
This text is a sample for the use of pattern matching in text

If the words you want to capture text between are "text" and "pattern", and you use the pair of sed substitutions shown below, you'll get nothing on the output:

Code:
s/.*text//; s/pattern.*//

The reason this produces nothing is that .* is greedy: .*text matches from the start of the line up to the last occurrence of "text", which here is the final word of the line, so the first substitution deletes the entire line.

If you are certain that the second word appears only once on the line, then reversing the order of the sed substitutions will help, but it's not foolproof.
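
To see both behaviours quickly (this is just a demonstration with the sample line above, not your data):
Code:
# greedy .* runs to the LAST "text", which is the final word of the line, so everything is deleted
echo 'This text is a sample for the use of pattern matching in text' | sed 's/.*text//; s/pattern.*//'
# prints an empty line

# with the substitutions reversed, "pattern" (which occurs only once) is trimmed first, then the leading part
echo 'This text is a sample for the use of pattern matching in text' | sed 's/pattern.*//; s/.*text//'
# prints: " is a sample for the use of "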

Also, you don't need to cat a file into sed. Sed can read the file itself, which saves a process:

Code:
sed 's/foo/bar/' input-file >new-file
