Reading URL using Mechanize and dump all the contents of the URL to a file
Post 302888744 by scott_cog on Monday 17th of February 2014 06:32:48 AM

Hi Ctsgnb,

Thanks for your reply.

Actually, we don't have the wget or curl utilities on the server, and we cannot install them either.

Btw, I found the answer and was able to get the output I was expecting. It was pretty simple, and I hope it will be useful for others looking for the same thing.

Code:
#!/usr/bin/perl

use strict;
use warnings;
use LWP::UserAgent;

my $req;
my $res;

my $ua = LWP::UserAgent->new;
$ua->agent("Firefox/1.5.0.10");   # present a browser-like User-Agent string
$ua->timeout(1000);               # timeout in seconds

$req = HTTP::Request->new(GET => 'http://yyyzzzzzxxxxxxxiiiii');

# send the request and capture the response
$res = $ua->request($req);

if ($res->is_success) {
    print $res->content;
}
else {
    print "Error: " . $res->status . "\n";
}
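
Since the goal in the title was to dump the contents of the URL to a file, here is a slight variant of the same script (an untested sketch; the URL is the same placeholder as above and the output filename is made up) that streams the response body straight into a file instead of printing it:

Code:
#!/usr/bin/perl

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->agent("Firefox/1.5.0.10");
$ua->timeout(1000);

my $req = HTTP::Request->new(GET => 'http://yyyzzzzzxxxxxxxiiiii');

# Passing a filename as the second argument makes LWP write the
# response content to that file instead of keeping it in memory.
my $res = $ua->request($req, 'url_dump.txt');

if ($res->is_success) {
    print "Saved response to url_dump.txt\n";
}
else {
    print "Error: " . $res->status_line . "\n";
}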


Last edited by scott_cog; 02-17-2014 at 11:22 AM.
 

lwpcook(3pm)						User Contributed Perl Documentation					      lwpcook(3pm)

NAME
    lwpcook - The libwww-perl cookbook
DESCRIPTION

This document contains some examples that show typical usage of the
libwww-perl library.  You should consult the documentation for the individual
modules for more detail.  All examples should be runnable programs.  You can,
in most cases, test the code sections by piping the program text directly to
perl.
GET

It is very easy to use this library to just fetch documents from the net.  The
LWP::Simple module provides the get() function that returns the document
specified by its URL argument:

    use LWP::Simple;
    $doc = get 'http://search.cpan.org/dist/libwww-perl/';

or, as a perl one-liner using the getprint() function:

    perl -MLWP::Simple -e 'getprint "http://search.cpan.org/dist/libwww-perl/"'

or, how about fetching the latest perl by running this command:

    perl -MLWP::Simple -e '
      getstore "ftp://ftp.sunet.se/pub/lang/perl/CPAN/src/latest.tar.gz",
               "perl.tar.gz"'

You will probably first want to find a CPAN site closer to you by running
something like the following command:

    perl -MLWP::Simple -e 'getprint "http://www.cpan.org/SITES.html"'

Enough of this simple stuff!  The LWP object oriented interface gives you more
control over the request sent to the server.  Using this interface you have
full control over headers sent and how you want to handle the response
returned.

    use LWP::UserAgent;
    $ua = LWP::UserAgent->new;
    $ua->agent("$0/0.1 " . $ua->agent);
    # $ua->agent("Mozilla/8.0") # pretend we are very capable browser

    $req = HTTP::Request->new(GET => 'http://search.cpan.org/dist/libwww-perl/');
    $req->header('Accept' => 'text/html');

    # send request
    $res = $ua->request($req);

    # check the outcome
    if ($res->is_success) {
        print $res->decoded_content;
    }
    else {
        print "Error: " . $res->status_line . "\n";
    }

The lwp-request program (alias GET) that is distributed with the library can
also be used to fetch documents from WWW servers.
HEAD

If you just want to check if a document is present (i.e. the URL is valid) try
to run code that looks like this:

    use LWP::Simple;
    if (head($url)) {
        # ok document exists
    }

The head() function really returns a list of meta-information about the
document.  The first three values of the list returned are the document type,
the size of the document, and the age of the document.

More control over the request or access to all header values returned require
that you use the object oriented interface described for GET above.  Just
s/GET/HEAD/g.
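For instance, a minimal sketch of that object oriented HEAD request (the host
below is only an illustrative placeholder):

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new;
    # Same pattern as the GET example above, with the method swapped to HEAD.
    my $req = HTTP::Request->new(HEAD => 'http://www.example.com/');
    my $res = $ua->request($req);

    if ($res->is_success) {
        # No body is transferred, but the response headers are available.
        print $res->headers_as_string;
    }
    else {
        print "Error: ", $res->status_line, "\n";
    }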
POST

There is no simple procedural interface for posting data to a WWW server.  You
must use the object oriented interface for this.  The most common POST
operation is to access a WWW form application:

    use LWP::UserAgent;
    $ua = LWP::UserAgent->new;

    my $req = HTTP::Request->new(
        POST => 'http://rt.cpan.org/Public/Dist/Display.html');
    $req->content_type('application/x-www-form-urlencoded');
    $req->content('Status=Active&Name=libwww-perl');

    my $res = $ua->request($req);
    print $res->as_string;

Lazy people use the HTTP::Request::Common module to set up a suitable POST
request message (it handles all the escaping issues) and has a suitable
default for the content_type:

    use HTTP::Request::Common qw(POST);
    use LWP::UserAgent;

    $ua = LWP::UserAgent->new;
    my $req = POST 'http://rt.cpan.org/Public/Dist/Display.html',
                   [ Status => 'Active', Name => 'libwww-perl' ];
    print $ua->request($req)->as_string;

The lwp-request program (alias POST) that is distributed with the library can
also be used for posting data.
PROXIES

Some sites use proxies to go through fire wall machines, or just as cache in
order to improve performance.  Proxies can also be used for accessing
resources through protocols not supported directly (or supported badly :-) by
the libwww-perl library.

You should initialize your proxy setting before you start sending requests:

    use LWP::UserAgent;
    $ua = LWP::UserAgent->new;
    $ua->env_proxy; # initialize from environment variables
    # or
    $ua->proxy(ftp  => 'http://proxy.myorg.com');
    $ua->proxy(wais => 'http://proxy.myorg.com');
    $ua->no_proxy(qw(no se fi));

    my $req = HTTP::Request->new(GET => 'wais://xxx.com/');
    print $ua->request($req)->as_string;

The LWP::Simple interface will call env_proxy() for you automatically.
Applications that use the $ua->env_proxy() method will normally not use the
$ua->proxy() and $ua->no_proxy() methods.

Some proxies also require that you send it a username/password in order to let
requests through.  You should be able to add the required header, with
something like this:

    use LWP::UserAgent;

    $ua = LWP::UserAgent->new;
    $ua->proxy(['http', 'ftp'] => 'http://username:password@proxy.myorg.com');

    $req = HTTP::Request->new('GET', "http://www.perl.com");

    $res = $ua->request($req);
    print $res->decoded_content if $res->is_success;

Replace "proxy.myorg.com", "username" and "password" with something suitable
for your site.
ACCESS TO PROTECTED DOCUMENTS

Documents protected by basic authorization can easily be accessed like this:

    use LWP::UserAgent;
    $ua = LWP::UserAgent->new;
    $req = HTTP::Request->new(GET => 'http://www.linpro.no/secret/');
    $req->authorization_basic('aas', 'mypassword');
    print $ua->request($req)->as_string;

The other alternative is to provide a subclass of LWP::UserAgent that
overrides the get_basic_credentials() method.  Study the lwp-request program
for an example of this.
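A rough sketch of that subclassing alternative, reusing the host and
credentials from the example above (they are placeholders, not a working
account):

    use strict;
    use warnings;

    package MyAgent;
    use base 'LWP::UserAgent';

    sub get_basic_credentials {
        my ($self, $realm, $uri, $isproxy) = @_;
        # Called by LWP when the server (or proxy) asks for credentials.
        return ('aas', 'mypassword') if $uri->host eq 'www.linpro.no';
        return;  # decline for any other host
    }

    package main;

    my $ua = MyAgent->new;
    print $ua->get('http://www.linpro.no/secret/')->as_string;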
COOKIES

Some sites like to play games with cookies.  By default LWP ignores cookies
provided by the servers it visits.  LWP will collect cookies and respond to
cookie requests if you set up a cookie jar.

    use LWP::UserAgent;
    use HTTP::Cookies;

    $ua = LWP::UserAgent->new;
    $ua->cookie_jar(HTTP::Cookies->new(file     => "lwpcookies.txt",
                                       autosave => 1));

    # and then send requests just as you used to do
    $res = $ua->request(HTTP::Request->new(GET => "http://no.yahoo.com/"));
    print $res->status_line, "\n";

As you visit sites that send you cookies to keep, the file "lwpcookies.txt"
will grow.
HTTPS

URLs with https scheme are accessed in exactly the same way as with http
scheme, provided that an SSL interface module for LWP has been properly
installed (see the README.SSL file found in the libwww-perl distribution for
more details).  If no SSL interface is installed for LWP to use, then you will
get "501 Protocol scheme 'https' is not supported" errors when accessing such
URLs.

Here's an example of fetching and printing a WWW page using SSL:

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    my $req = HTTP::Request->new(GET => 'https://www.helsinki.fi/');
    my $res = $ua->request($req);
    if ($res->is_success) {
        print $res->as_string;
    }
    else {
        print "Failed: ", $res->status_line, "\n";
    }
MIRRORING

If you want to mirror documents from a WWW server, then try to run code
similar to this at regular intervals:

    use LWP::Simple;

    %mirrors = (
        'http://www.sn.no/'                        => 'sn.html',
        'http://www.perl.com/'                     => 'perl.html',
        'http://search.cpan.org/dist/libwww-perl/' => 'lwp.html',
        'gopher://gopher.sn.no/'                   => 'gopher.html',
    );

    while (($url, $localfile) = each(%mirrors)) {
        mirror($url, $localfile);
    }

Or, as a perl one-liner:

    perl -MLWP::Simple -e 'mirror("http://www.perl.com/", "perl.html")';

The document will not be transferred unless it has been updated.
LARGE DOCUMENTS

If the document you want to fetch is too large to be kept in memory, then you
have two alternatives.  You can instruct the library to write the document
content to a file (second $ua->request() argument is a file name):

    use LWP::UserAgent;

    $ua = LWP::UserAgent->new;
    my $req = HTTP::Request->new(GET =>
        'http://www.cpan.org/authors/Gisle_Aas/libwww-perl-6.02.tar.gz');
    $res = $ua->request($req, "libwww-perl.tar.gz");
    if ($res->is_success) {
        print "ok\n";
    }
    else {
        print $res->status_line, "\n";
    }

Or you can process the document as it arrives (second $ua->request() argument
is a code reference):

    use LWP::UserAgent;

    $ua  = LWP::UserAgent->new;
    $URL = 'ftp://ftp.unit.no/pub/rfc/rfc-index.txt';

    my $expected_length;
    my $bytes_received = 0;

    my $res = $ua->request(HTTP::Request->new(GET => $URL),
        sub {
            my($chunk, $res) = @_;
            $bytes_received += length($chunk);
            unless (defined $expected_length) {
                $expected_length = $res->content_length || 0;
            }
            if ($expected_length) {
                printf STDERR "%d%% - ",
                    100 * $bytes_received / $expected_length;
            }
            print STDERR "$bytes_received bytes received\n";

            # XXX Should really do something with the chunk itself
            # print $chunk;
        });
    print $res->status_line, "\n";
COPYRIGHT

Copyright 1996-2001, Gisle Aas

This library is free software; you can redistribute it and/or modify it under
the same terms as Perl itself.

perl v5.14.2                        2012-02-11                     lwpcook(3pm)