Shell Programming and Scripting | SED extract url - please help a lamer | Post 302345282 by digi, Tuesday 18th of August 2009, 08:37:10 PM
hey edidataguy

You're my hero. That did it. Thanks!
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Help needed using sed to replace a url in 1000's of web pages

Hi, I'm new to scripting. I understand the concepts and syntax of some commands but have difficulty with others and combining actions to achieve what I'm trying to do so hope someone on here can help. A long while back I inherited a website with 1000's of pages most of which were created by a... (2 Replies)
Discussion started by: bob_from_brid
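For reference, a common approach to this kind of bulk edit combines find with sed's in-place mode. This is a sketch only; the site path, file content, and host names below are hypothetical stand-ins, not taken from the thread.

```shell
# Demo input (hypothetical path and hosts); in practice, point find
# at the real document root instead of ./site.
mkdir -p site
printf '%s\n' '<a href="http://old.example.com/x">link</a>' > site/page.html

# Rewrite the host in every .html file; -i.bak leaves a .bak backup
# of each file touched (GNU sed syntax; BSD sed wants `-i .bak`).
# Using | as the s||| delimiter avoids escaping the URL's slashes.
find ./site -name '*.html' -exec \
  sed -i.bak 's|http://old.example.com|http://new.example.com|g' {} +

cat site/page.html
```

The `.bak` backups matter when editing thousands of inherited pages: a bad pattern can be rolled back file by file.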

2. Shell Programming and Scripting

stripping http and https from a url using sed

I have to write a sed script which removes http:// or https:// from a URL, so that a URL like https://www.example.com returns www.example.com. I tried echo $url | sed 's|^http://||g'. It doesn't work. Please help (4 Replies)
Discussion started by: vickylife
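The poster's pattern only matches http://, not https://. A minimal fix, sketched here, makes the s optional with an extended regexp:

```shell
url="https://www.example.com"
# https? matches both schemes; -E enables extended regexps, and the
# | delimiter keeps the slashes in :// unescaped
printf '%s\n' "$url" | sed -E 's|^https?://||'
```

The same expression handles plain http:// URLs unchanged, so one substitution covers both cases.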

3. Shell Programming and Scripting

Extract URL from RSS Feed in AWK

Hi, I have following data file; <outline title="Matt Cutts" type="rss" version="RSS" xmlUrl="http://www.mattcutts.com/blog/feed/" htmlUrl="http://www.mattcutts.com/blog"/> <outline title="Stone" text="Stone" type="rss" version="RSS" xmlUrl="http://feeds.feedburner.com/STC-Art"... (8 Replies)
Discussion started by: fahdmirza
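One way to pull the feed URLs out of OPML lines like these, sketched with a sample line from the thread, is to isolate the xmlUrl attribute and then strip its wrapper:

```shell
# grep -o prints only the matching part of each line; the trailing
# sed removes the attribute name and the surrounding quotes
printf '%s\n' '<outline title="Matt Cutts" type="rss" version="RSS" xmlUrl="http://www.mattcutts.com/blog/feed/" htmlUrl="http://www.mattcutts.com/blog"/>' |
  grep -o 'xmlUrl="[^"]*"' | sed 's/^xmlUrl="//; s/"$//'
```

Anchoring on the attribute name avoids picking up the htmlUrl values on the same line.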

4. Shell Programming and Scripting

How to extract url from html page?

For example, I have an HTML file containing <a href="http://awebsite" id="awebsite" class="first">website</a> and sometimes a line contains more than one link, for example <a href="http://awebsite" id="awebsite" class="first">website</a><a href="http://bwebsite" id="bwebsite"... (36 Replies)
Discussion started by: 14th
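The "more than one link per line" wrinkle is exactly what grep -o handles: it emits each match on its own line. A sketch using the thread's sample markup:

```shell
line='<a href="http://awebsite" id="awebsite" class="first">website</a><a href="http://bwebsite" id="bwebsite">b</a>'
# -o prints every match separately, so two links on one input line
# become two output lines; sed then peels off href=" and the quote
printf '%s\n' "$line" | grep -o 'href="[^"]*"' | sed 's/^href="//; s/"$//'
```

This is a line-oriented approximation; for arbitrary HTML (attributes split across lines, single quotes) a real HTML parser is more robust.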

5. UNIX for Dummies Questions & Answers

Awk: print all URL addresses between iframe tags without repeating an already printed URL

Here is what I have so far: find . -name "*php*" -or -name "*htm*" | xargs grep -i iframe | awk -F'"' '/<iframe*/{gsub(/.\*iframe>/,"\"");print $2}' Here is an example content of a PHP or HTM(HTML) file: <iframe src="http://ADDRESS_1/?click=5BBB08\" width=1 height=1... (18 Replies)
Discussion started by: striker4o
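The "without repeating" part is usually done with awk's classic seen-array idiom rather than gsub. A sketch over sample iframe lines (the URLs are placeholders in the spirit of the thread's ADDRESS_1 example):

```shell
# -F'"' splits on double quotes, so $2 of a matching line is the src
# value; !seen[$2]++ is true only the first time a given URL appears
printf '%s\n' \
  '<iframe src="http://ADDRESS_1/?click=1" width=1 height=1>' \
  '<iframe src="http://ADDRESS_1/?click=1" width=1 height=1>' \
  '<iframe src="http://ADDRESS_2/?click=2" width=1 height=1>' |
awk -F'"' '/<iframe/ && !seen[$2]++ { print $2 }'
```

In the original pipeline this awk stage would sit after the find/xargs grep, replacing the gsub-based script.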

6. Shell Programming and Scripting

Downloading of dynamically generated URL using curl and sed

I've been attempting to use curl and sed to allow for downloading a file from a dynamically generated URL. I've been able to retrieve and save the HTML of the page that does the dynamic generation of the download URL using curl but I'm very new to sed and I seem to be stuck at this part. HTML: ... (1 Reply)
Discussion started by: schwein

7. Shell Programming and Scripting

How to use GREP to extract URL from file

Hi All , Here is what I want to do: Given a line: 98.70.217.222 - - "GET /liveupdate-aka.symantec.com/1340071490jtun_nav2k8enn09m25.m25?h=abcdefgh HTTP/1.1" 200 159229484 "-" "hBU1OhDsPXknMepDBJNScBj4BQcmUz5TwAAAAA" "-" 1. Get the URL component: ... (2 Replies)
Discussion started by: Naks_Sh10
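Getting the URL component out of an access-log line like that one is a single sed capture. The sample line below is a simplified stand-in for the thread's log entry:

```shell
line='98.70.217.222 - - "GET /some/path/file.m25?h=abcdefgh HTTP/1.1" 200 159229484'
# Capture the non-space run that follows `"GET ` and print only the
# captured group; -n plus the p flag suppresses non-matching lines
printf '%s\n' "$line" | sed -n 's/.*"GET \([^ ]*\).*/\1/p'
```

The same expression works unchanged inside a pipeline over a whole log file.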

8. Shell Programming and Scripting

Extract values from multi lined url source

Hello, I want to extract multiple values from a multi-line URL source into a CSV text file. Thank you very much for the help. My curl code: curl "http://www.web.com/cities//city.html Source code: div class="clear"></div> <table class="listing-details"> <tr> ... (1 Reply)
Discussion started by: hoo

9. Shell Programming and Scripting

Replace URL using sed

Original Line {background-image:url('http://www.myoldhost.com/images/scds/tsp3.png');} Expected {background-image:url('http://www.mynewhost.com/nndn/hddh/ccdcd.png');} I am using following syntax STATIC_HOST_TEMP="http://myhost.com/temp/xyx.png" $sed -e... (1 Reply)
Discussion started by: 8055
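The usual stumbling block here is that both the pattern and the replacement are full of slashes, and the replacement comes from a shell variable. A sketch using the thread's own strings:

```shell
STATIC_HOST_TEMP='http://www.mynewhost.com/nndn/hddh/ccdcd.png'
# Double quotes around the sed script let $STATIC_HOST_TEMP expand;
# using | as the s||| delimiter means none of the URL slashes need
# escaping (this assumes the variable itself contains no | or &)
printf '%s\n' "{background-image:url('http://www.myoldhost.com/images/scds/tsp3.png');}" |
  sed "s|http://www.myoldhost.com/images/scds/tsp3.png|$STATIC_HOST_TEMP|"
```

With single quotes around the script, the variable would never expand, which is a frequent cause of "sed does nothing" reports.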

10. Shell Programming and Scripting

Url encoding a string using sed

Hi, I was hoping someone would know if it is possible to URL-encode a string using sed? My problem is I have extracted some key-value pairs from a text file with sed, and will be inserting these pairs as source variables into a curl script to automatically download some XML from our server. My... (5 Replies)
Discussion started by: Paul Walker
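Sed can do a crude job of this with one substitution per character. The helper below is a hypothetical sketch, not a complete encoder: it covers only a few reserved characters, and % must be translated first so the escapes it introduces are not re-encoded.

```shell
urlencode_basic() {
  # Percent-encode a handful of common reserved characters only.
  # A complete encoder must handle every byte outside the
  # unreserved set (RFC 3986); for that, curl's --data-urlencode
  # or a small perl/python helper is a better fit.
  printf '%s' "$1" |
    sed -e 's/%/%25/g' -e 's/ /%20/g' -e 's/&/%26/g' \
        -e 's/=/%3D/g' -e 's/?/%3F/g'
}
urlencode_basic 'key=some value & more'
```

For feeding key-value pairs to curl, `curl --data-urlencode "key=$value"` sidesteps the problem entirely.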
XML::Filter::Merger(3pm)				User Contributed Perl Documentation				  XML::Filter::Merger(3pm)

NAME
    XML::Filter::Merger - Assemble multiple SAX streams into one document

SYNOPSIS
        ## See XML::SAX::Manifold and XML::SAX::ByRecord for easy ways
        ## to use this processor.

        my $w = XML::SAX::Writer->new( Output => \*STDOUT );
        my $h = XML::Filter::Merger->new( Handler => $w );
        my $p = XML::SAX::ParserFactory->parser( Handler => $h );

        ## To insert second and later docs into the first doc:
        $h->start_manifold_document( {} );
        $p->parse_file( $_ ) for @ARGV;
        $h->end_manifold_document( {} );

        ## To insert multiple docs inline (especially useful if
        ## a subclass does the inline parse):
        $h->start_document( {} );
        $h->start_element( { ... } );
        ...
        $h->start_element( { Name => "foo", ... } );
        $p->parse_uri( $uri );  ## Body of $uri inserted in <foo>...</foo>
        $h->end_element( { Name => "foo", ... } );
        ...

DESCRIPTION
    Combines several documents into one "manifold" document. This can be
    done in two ways, both of which start by parsing a master document
    into which (the guts of) secondary documents will be inserted.

  Inlining Secondary Documents
    The most SAX-like way is to simply pause the parsing of the master
    document between the two events where you want to insert a secondary
    document, and parse the complete secondary document right then and
    there so that its events are inserted in the pipeline at the right
    spot. XML::Filter::Merger only passes the content of the secondary
    document's root element:

        my $h = XML::Filter::Merger->new( Handler => $w );

        $h->start_document( {} );
        $h->start_element( { Name => "foo1" } );
        $p->parse_string( "<foo2><baz /></foo2>" );
        $h->end_element( { Name => "foo1" } );
        $h->end_document( {} );

    results in $w seeing a document like "<foo1><baz/></foo1>".

    This technique is especially useful when subclassing
    XML::Filter::Merger to implement XInclude-like behavior. Here's a
    minimal example that inserts some content after each "characters()"
    event:

        package Subclass;

        use vars qw( @ISA );
        @ISA = qw( XML::Filter::Merger );

        sub characters {
            my $self = shift;

            return $self->SUPER::characters( @_ )    ## **
                unless $self->in_master_document;    ## **

            my $r = $self->SUPER::characters( @_ );

            $self->set_include_all_roots( 1 );
            XML::SAX::PurePerl->new( Handler => $self )->parse_string( "<hey/>" );

            return $r;
        }

        ## **: It is often important to use the recursion guard shown
        ## here to protect the decision-making logic that should only
        ## be run on the events in the master document from being run
        ## on events in the subdocument. Of course, if you want to
        ## apply the logic recursively, just leave the guard code out
        ## (and, yes, in this example the guard code is phrased in a
        ## slightly redundant fashion, but we want to make the idiom
        ## clear).

    Feeding this filter "<foo> </foo>" results in "<foo> <hey/></foo>".
    We've called set_include_all_roots( 1 ) to get the secondary
    document's root element included.

  Inserting Manifold Documents
    A more involved way, suitable for handling consecutive documents, is
    to use the two non-SAX events -- "start_manifold_document" and
    "end_manifold_document" -- that are called before the first document
    to be combined and after the last one, respectively.

    The first document to be started after the "start_manifold_document"
    is the master document and is emitted as-is, except that it will
    contain the contents of all of the other documents just before the
    root "end_element()" tag. For example:

        $h->start_manifold_document( {} );
        $p->parse_string( "<foo1><bar /></foo1>" );
        $p->parse_string( "<foo2><baz /></foo2>" );
        $h->end_manifold_document( {} );

    results in "<foo1><bar /><baz /></foo1>".

  The Details
    In case the above was a bit vague, here are the rules this filter
    lives by.

    For the master document:

    o   Events before the root "end_element" are forwarded as received.
        Because of the rules for secondary documents, any secondary
        documents sent to the filter in the midst of a master document
        will be inserted inline as their events are received.

    o   All remaining events, from the root "end_element" on, are
        buffered until end_manifold_document() is received, and are then
        forwarded on.

    For secondary documents:

    o   All events before the root "start_element" are discarded. There
        is no way to recover these (though an option could be added for
        most non-DTD events).

    o   The root "start_element" is discarded by default, or forwarded
        if "set_include_all_roots( $v )" has been used to set a true
        value.

    o   All events up to, but not including, the root "end_element" are
        forwarded as received.

    o   The root "end_element" is discarded or forwarded if the matching
        "start_element" was.

    o   All remaining events, up to and including the "end_document",
        are forwarded and processed.

    o   Secondary documents may contain other secondary documents.

    o   Secondary documents need not be well formed. They must, however,
        be well balanced.

    This requires very little buffering and is "most natural" with these
    limitations:

    o   All of each secondary document's events must be received between
        two consecutive events of its master document. This is because
        most master document events are not buffered, and this filter
        cannot tell from which upstream source a document came.

    o   If the master document should happen to have some egregiously
        large amount of whitespace, commentary, or illegal events after
        the root element, buffer memory could be huge. This should be
        exceedingly rare, even non-existent, in the real world.

    o   If any documents are not well balanced, the result won't be.

METHODS
    new
            my $d = XML::Filter::Merger->new( \%options );

    reset
        Clears the filter after an accident. Useful when reusing the
        filter. new() and start_manifold_document() both call this.

    start_manifold_document
        This must be called before the master document's
        "start_document()" if you want XML::Filter::Merger to insert
        documents that will be sent after the master document. It does
        not need to be called if you are going to insert secondary
        documents by sending their events in the midst of processing the
        master document.

        It is passed an empty ({}) data structure.

    Additional Methods
        These are provided to make it easy for subclasses to find out
        roughly where they are in the document structure. Generally,
        these should be called after calling SUPER::start_...() and
        before calling SUPER::end_...() to be accurate.

        in_master_document
            Returns TRUE if the current event is in the first top level
            document.

        document_depth
            Gets how many nested documents surround the current
            document. 0 means that you are in a top level document. In
            manifold mode, this may or may not be a secondary document:
            secondary documents may also follow the primary document, in
            which case they have a document depth of 0.

        element_depth
            Gets how many nested elements surround the current element
            in the current input document. Does not count elements from
            documents surrounding this document.

        top_level_document_number
            Returns the number of the top level document in a manifold
            document. This is 0 for the first top level document, which
            is always the master document.

    end_manifold_document
        This must be called after the last document's end_document is
        called. It is passed an empty ({}) data structure which is
        passed on to the next processor's end_document() call. This call
        also causes the end_element() for the root element to be passed
        on.

    set_include_all_roots
            $h->set_include_all_roots( 1 );

        Setting this option causes the merger to include all root
        element nodes, not just the first document's. This means that
        later documents are treated as subdocuments of the output
        document, rather than as envelopes carrying subdocuments.

        Given that the three documents received are:

            Doc1: <root1><foo></root1>
            Doc2: <root2><bar></root2>
            Doc3: <root3><baz></root3>

        then with this option cleared (the default), the result looks
        like:

            <root1><foo><bar><baz></root1>

        This is useful when processing document oriented XML and each
        upstream filter channel gets a complete copy of the document.
        This is the case with the machine XML::SAX::Manifold and the
        splitting filter XML::Filter::Distributor.

        With this option set, the result looks like:

            <root1><foo><root2><bar></root2><root3><baz></root3></root1>

        This is useful when processing record oriented XML, where the
        first document contains only the preamble and postamble for the
        records and not all of the records. This is the case with the
        machine XML::SAX::ByRecord and the splitting filter
        XML::Filter::DocSplitter.

        The two splitter filters mentioned set this feature
        appropriately.

LIMITATIONS
    The events before and after a secondary document's root element
    events are discarded. It is conceivable that characters, PIs, and
    commentary outside the root element might need to be kept. This may
    be added as an option.

    The DocumentLocators are not properly managed: they should be saved
    and restored around each secondary document.

    Does not yet buffer all events after the first document's root
    end_element event.

    If these bite you, contact me.

AUTHOR
    Barrie Slaymaker <barries@slaysys.com>

COPYRIGHT
    Copyright 2002, Barrie Slaymaker, All Rights Reserved. You may use
    this module under the terms of the Artistic, GNU Public, or BSD
    licenses, your choice.

perl v5.10.0                      2009-09-02          XML::Filter::Merger(3pm)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.