Shell Programming and Scripting: Extract URLs from HTML code using sed
Post 302375750 by Scrutinizer on Sunday 29th of November 2009 08:58:22 AM
That sed statement will not work here: the HTML contains no line breaks, so the whole document sits on a single line, and the substitution therefore returns only the last href.
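A rough sketch of two ways around that (page.html is just a placeholder name, and both patterns assume double-quoted href attributes): either let grep -o print each match on its own line, or insert the missing line breaks first and then let sed pick out the value.

    # Print every double-quoted href value, one per line (grep -o emits each match separately).
    grep -o 'href="[^"]*"' page.html | sed 's/^href="//; s/"$//'

    # Alternative with GNU sed: put a newline in front of every href=, then strip the markup.
    sed 's/href=/\n&/g' page.html | sed -n 's/^href="\([^"]*\)".*/\1/p'

The second form relies on GNU sed understanding \n in the replacement text; other sed implementations would need a literal embedded newline instead.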

Last edited by Scrutinizer; 11-30-2009 at 04:05 PM..
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

How do I extract text only from html file without HTML tag

I have an html file called myfile. If I simply run "cat myfile.html" in UNIX, it shows all the html tags like <a href=r/26><img src="http://www>. But I want to extract only the text part. The same problem happens with the "type" command in MS-DOS. I know you can do it by opening it in Internet Explorer,... (4 Replies)
Discussion started by: los111
4 Replies
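A rough sketch of one way to approach the question above, assuming every tag opens and closes on the same line and ignoring comments and script blocks:

    # Delete anything that looks like a tag; what remains is the plain text.
    sed 's/<[^>]*>//g' myfile.html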

2. UNIX for Advanced & Expert Users

sed to extract HTML content

Hiya, I am trying to extract a news article from a web page. The sed I have written brings back a lot of Javascript code and sometimes advertisements too. Can anyone please help with this one? I need to fix this sed so it picks up the article ONLY (don't worry about the title or date .. i got... (2 Replies)
Discussion started by: stargazerr
2 Replies
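Without seeing the page in question only a generic sketch is possible; the div class below is purely hypothetical and would have to be replaced by whatever actually delimits the article on that site:

    # Keep only the lines between the (hypothetical) article markers, then drop the tags.
    sed -n '/<div class="article">/,/<\/div>/p' page.html | sed 's/<[^>]*>//g'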

3. Shell Programming and Scripting

sed to extract only floating point numbers from HTML

Hi All, I'm trying to extract some floating point numbers from within some HTML code like this: <TR><TD class='awrc'>Parse CPU to Parse Elapsd %:</TD><TD ALIGN='right' class='awrc'> 64.50</TD><TD class='awrc'>% Non-Parse CPU:</TD><TD ALIGN='right' class='awrc'> ... (2 Replies)
Discussion started by: pondlife
2 Replies
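A sketch for pulling the floating point numbers out of HTML like the snippet above (awr.html is a hypothetical file name; grep -E is used so that + repetition works):

    # Print every digits.digits run on its own line.
    grep -Eo '[0-9]+\.[0-9]+' awr.html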

4. Shell Programming and Scripting

SED to extract HTML text data, not quite right!

I am attempting to extract weather data from the following website, but for the Victoria area only: Text Forecasts - Environment Canada. I use this: sed -n "/Greater Victoria./,/Fraser Valley./p" But that phrasing sometimes does not get it all, and I think perhaps the website has more... (2 Replies)
Discussion started by: lagagnon
2 Replies
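An awk sketch for the range problem above, assuming the section of interest starts at a line containing "Greater Victoria" and the next region heading contains "Fraser Valley" (forecast.txt is a placeholder for the fetched page):

    # Start printing at the Victoria heading and stop as soon as the next region appears.
    awk '/Greater Victoria/{p=1} /Fraser Valley/{exit} p' forecast.txt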

5. Shell Programming and Scripting

Extract urls from index.html downloaded using wget

Hi, I basically need to get a list of all the tarballs located at a uri. I am currently doing a wget on the uri to get the index.html page. This index page contains the list of uris that I want to use in my bash script. Can someone please guide me? I am new to Linux and shell scripting. ... (5 Replies)
Discussion started by: mnanavati
5 Replies
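A sketch for the tarball-listing question, assuming the index.html fetched by wget uses double-quoted hrefs and the tarballs end in .tar.gz:

    # List every link that points at a .tar.gz file.
    grep -o 'href="[^"]*\.tar\.gz"' index.html | sed 's/^href="//; s/"$//'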

6. Shell Programming and Scripting

Remove external urls from .html file

Hi everyone. I have an html file with lines like so: link href="localFolder/..."> link href="htp://..."> img src="localFolder/..."> img src="htp://..."> I want to remove the links with http in the href and imgs with http in its src. I'm having trouble removing them because there... (4 Replies)
Discussion started by: CowCow339
4 Replies
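A sketch for the external-link question, assuming (as in the excerpt) that each link or img sits on its own line, so the offending lines can simply be deleted; file.html is a placeholder name:

    # Drop lines whose href or src points at an http URL; local references survive.
    sed '/href="http/d; /src="http/d' file.html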

7. Shell Programming and Scripting

help with sed needed to extract content from html tags

Hi I've searched for it for few hours now and i can't seem to find anything working like i want. I've got webpage, saved in file par with form like this: <html><body><form name='sendme' action='http://example.com/' method='POST'> <textarea name='1st'>abc123def678</textarea> <textarea... (9 Replies)
Discussion started by: seb001
9 Replies
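A sketch for the textarea question, assuming each value opens and closes on one line as in the excerpt (par is the file name given there):

    # Print whatever sits between <textarea ...> and </textarea> on each line.
    sed -n 's/.*<textarea[^>]*>\([^<]*\)<\/textarea>.*/\1/p' par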

8. Shell Programming and Scripting

How to remove urls from html files

Does anybody know how to remove all urls from html files? All urls are links with anchor text in the form of <a href="http://www.anydomain.com">ANCHOR</a>; they may start with www or not. The goal is to delete all urls and keep the ANCHOR text, and if possible to change the tags around the anchor to... (2 Replies)
Discussion started by: georgi58
2 Replies
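A sketch for the anchor-stripping question, assuming double-quoted hrefs of the exact form shown in the excerpt (file.html is a placeholder name); it keeps only the anchor text:

    # Replace the whole <a ...>ANCHOR</a> construct with the anchor text alone.
    sed 's/<a href="[^"]*">\([^<]*\)<\/a>/\1/g' file.html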

9. Shell Programming and Scripting

Trying to extract domain and tld from list of urls.

I have done a fair amount of searching the threads, but I have not been able to cobble together a solution to my challenge. What I am trying to do is to line edit a file that will leave behind only the domain and tld of a long list of urls. The list looks something like this: www.google.com... (3 Replies)
Discussion started by: chamb1
3 Replies
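A sketch for the domain/tld question, assuming one url per line in urls.txt (a hypothetical name); note it will mis-handle two-level suffixes such as .co.uk:

    # Strip any scheme and path, then keep only the last two dot-separated labels.
    sed -e 's|^[^/]*//||' -e 's|/.*||' urls.txt | awk -F. 'NF >= 2 { print $(NF-1) "." $NF }'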

10. Shell Programming and Scripting

Awk/sed HTML extract

I'm extracting text between table tags in HTML <th><a href="/wiki/Buick_LeSabre" title="Buick LeSabre">Buick LeSabre</a></th> using this: awk -F "</*th>" '/<\/*th>/ {print $2}' auto2 > auto3 then this (text between a href): sed -e 's/\(<*>\)//g' auto3 > auto4 How to shorten this into one... (8 Replies)
Discussion started by: p1ne
8 Replies
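A sketch that collapses the two steps above into a single sed call, assuming each <th><a ...>text</a></th> cell sits on one line (auto2 is the input file named in the excerpt):

    # Pull the link text straight out of the table header cell.
    sed -n 's/.*<th><a [^>]*>\([^<]*\)<\/a><\/th>.*/\1/p' auto2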