Feasibility of opening a website link from UNIX and getting a response as XML or HTML
Posted by vivek d r on 06-14-2012 at 02:03 AM

I just wanted to know whether it is possible to open a website link from a shell script and get the response back in XML or HTML format. The website is on our local network. For example, something like this (a minimal sketch; curl and wget are the usual tools for the job, and the server URL is just a placeholder):
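Code:
#!/bin/sh
# Fetch a page from the intranet server and capture the response body
# (HTML or XML, whatever the server sends back) in a shell variable
response=$(curl -s 'http://server.local/status.xml')
echo "$response"

# Or write the response straight to a file with wget instead
wget -q -O response.xml 'http://server.local/status.xml'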

After a statement like that is executed, the output should give the same response you would get by opening the link in Internet Explorer. I know this question might sound stupid, but I just wanted to ask, since I don't know what UNIX is capable of; I only know basic shell scripting. Any help would be deeply appreciated.

Last edited by vivek d r; 06-14-2012 at 03:13 AM.
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Changing Unix form to Microsoft Word form to be able to email it to someone.

Please, someone, I need information on how to change a Unix form/document into a Microsoft Word document so that it can be emailed to another company. Please help ASAP. Thank you. (8 Replies)
Discussion started by: Cheraunm
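One common approach (a minimal sketch, assuming the unix2dos and uuencode utilities are available; the file name and address are placeholders) is to fix the line endings and mail the file as an attachment:
Code:
# Convert Unix line endings to DOS/Windows ones so Word and Notepad
# display the document correctly
unix2dos form.txt

# Mail it as an attachment using the classic uuencode-into-mailx trick
uuencode form.txt form.txt | mailx -s "Account form" someone@example.com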

2. Shell Programming and Scripting

HTML form to cgi help

I wrote a script to automate user account verification against PeopleSoft. Now I want to make it available to my peers via the web. It is running on Solaris. I have the form written, but am not sure how to make it work. I think the form should call a Perl CGI when submitted. The CGI should call... (7 Replies)
Discussion started by: 98_1LE
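For what it's worth, a CGI program can be a plain shell script; a minimal sketch (the field name userid is hypothetical) looks like this:
Code:
#!/bin/sh
# Minimal CGI: print the HTTP headers first, then a blank line, then the body
echo "Content-type: text/html"
echo ""

# GET parameters arrive in the QUERY_STRING environment variable,
# e.g. userid=jsmith&action=check
userid=$(echo "$QUERY_STRING" | sed -n 's/.*userid=\([^&]*\).*/\1/p')

echo "<html><body>"
echo "<p>Verifying account: $userid</p>"
echo "</body></html>"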

3. UNIX for Dummies Questions & Answers

opening non-html files in lynx??

When I try to open a txt file in lynx, I need to provide the filename or use wildcards to open it; autocompletion doesn't work for some reason. Also, trying to open files like .sh, .py, etc. ends in the following error: lynx: Start file could not be found or is not text/html or text/plain ... (0 Replies)
Discussion started by: riwa
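Two documented lynx switches work around the start-file type check (a sketch; note that non-HTML files will simply be rendered as if they were markup):
Code:
# Force lynx to open a file it would otherwise reject as an unknown type
lynx -force_html ./script.sh

# Or feed the file in on standard input
cat script.py | lynx -stdin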

4. Web Development

Rewrite rules to change “link.html?hl=es” to “/es/link.html” etc?

Hey! Does anyone know how to create rewrite rules to change “link.html?hl=en” to “/en/link.html”, “link.html?hl=jp” to “/jp/link.html”, “link.html?hl=es” to “/es/link.html”, etc., where "link.html" changes based on the page request? (2 Replies)
Discussion started by: Neo
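One possible shape for this with Apache mod_rewrite (a sketch, untested; it assumes the rules live in the document root's .htaccess and that language codes are two letters):
Code:
RewriteEngine On

# Redirect the old query-string form to the pretty form, but only for
# requests coming straight from the client, not our own internal rewrite
# (the trailing ? drops the original query string)
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteCond %{QUERY_STRING} ^hl=([a-z]{2})$
RewriteRule ^([^/]+\.html)$ /%1/$1? [R=301,L]

# Serve the pretty URL /es/link.html from the real file link.html?hl=es
RewriteRule ^([a-z]{2})/([^/]+\.html)$ /$2?hl=$1 [L]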

5. Windows & DOS: Issues & Discussions

error opening website

Hi, I have an unusual problem, you might say. I can't open microsoft.com. I've checked the hosts file located somewhere in windows/system32/drivers, and it's not blocked from there. What else could cause this problem? I need to download Microsoft Visual Studio and I can't, because I can't open the website... (1 Reply)
Discussion started by: c0mrade

6. Shell Programming and Scripting

Unix Script to read the XML file from Website

Hi Experts, I need a Unix shell script which can copy the XML file from the website pasted below and place it in my Unix directory. http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml Thanks in advance... (8 Replies)
Discussion started by: phani333
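This reduces to a one-line fetch (a minimal sketch; either wget or curl will do the job):
Code:
# Download the XML file into the current directory
wget -q -O fx.xml 'http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml'

# The same thing with curl
curl -s -o fx.xml 'http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml'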

7. Post Here to Contact Site Administrators and Moderators

Slow response from website

Hi, I am experiencing slow response from unix.com for the past 3-4 days, like: most of the time the page does not reload instantly (when I do a manual reload from the browser); not able to view graphics (displays only text); when posting to the forum, the page gets stuck for a considerably long... (6 Replies)
Discussion started by: clx

8. Solaris

man pages in html form

Hi, I would like to convert the standard online man pages from my Solaris 10 system into HTML form to publish them on my webpage. How can this be done in Sol10? Thanks for the help. (2 Replies)
Discussion started by: presul
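One route (a sketch; it assumes GNU groff with its HTML output device is installed, since the stock Solaris troff has no HTML backend):
Code:
# Render a man page source file to HTML with GNU groff
groff -man -Thtml /usr/share/man/man1/ls.1 > ls.1.html

# Batch-convert a whole section
for page in /usr/share/man/man1/*.1; do
    groff -man -Thtml "$page" > "$(basename "$page").html"
done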

9. Shell Programming and Scripting

Extract/Parse information from html (website)

Hello, I want to extract some information from an HTML page (http://www.energiecontracting.de/7-mitglieder/von-A-Z.php?a_z=B&seite=2) and save it in a predefined format (.csv). However, it seems that the code on that website is kind of messy and I can't find a way to handle it... (5 Replies)
Discussion started by: TehOne
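A rough first pass from the shell might look like this (a sketch only; the <td> pattern and the three-column layout are assumptions that would need adjusting to the page's real markup):
Code:
# Fetch the page, pull the text out of table cells, and join every
# three cells into one comma-separated CSV row
curl -s 'http://www.energiecontracting.de/7-mitglieder/von-A-Z.php?a_z=B&seite=2' \
    | sed -n 's/.*<td[^>]*>\([^<]*\)<\/td>.*/\1/p' \
    | paste -d, - - -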

10. Shell Programming and Scripting

Script to alert about a slow link on the website

Hello all, currently I am using a script with curl to raise an alert if "200 OK" cannot be grepped and the link is down. Is it possible to get an alert mail if a particular link on a website is not completely down but SLOW? (0 Replies)
Discussion started by: chirag991
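curl can report its own timing, so the check reduces to comparing against a threshold (a minimal sketch; the 5-second limit and the addresses are placeholders):
Code:
#!/bin/sh
# Measure how long the whole fetch takes; -m caps the wait so the
# script itself never hangs on a dead server
url='http://example.com/page.html'
t=$(curl -s -o /dev/null -m 30 -w '%{time_total}' "$url")

# Mail an alert when the fetch took longer than 5 seconds
if [ "$(echo "$t > 5" | bc)" -eq 1 ]; then
    echo "$url took ${t}s to respond" | mailx -s "Slow link alert" admin@example.com
fi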
Feed::Find(3pm) — User Contributed Perl Documentation

NAME
    Feed::Find - Syndication feed auto-discovery

SYNOPSIS
    use Feed::Find;
    my @feeds = Feed::Find->find('http://example.com/');

DESCRIPTION
    Feed::Find implements feed auto-discovery for finding syndication feeds, given a URI.
    It (currently) passes all of the auto-discovery tests at
    http://diveintomark.org/tests/client/autodiscovery/. Feed::Find will discover the
    following feed formats:

    o RSS 0.91
    o RSS 1.0
    o RSS 2.0
    o Atom

USAGE
    Feed::Find->find($uri)
        Given a URI $uri, use a variety of techniques to find the feeds associated with
        that page. If $uri itself points to a feed (i.e., if the Content-Type of the
        response is a recognized feed type), returns $uri. Returns a list of feed URIs.
        The following techniques are used:

        1. <link> tag auto-discovery
           If the page contains any <link> tags in the <head> section, these tags are
           examined for recognized feed content types. The following content types are
           treated as feeds: application/x.atom+xml, application/atom+xml,
           application/xml, text/xml, application/rss+xml, and application/rdf+xml.

        2. Scanning <a> tags
           If the page does not contain any known <link> tags, the page is then scanned
           for <a> tags linking to URIs with certain file extensions. The following
           extensions are treated as feeds: .rss, .xml, and .rdf. Note that this
           technique is employed only if the first technique returns no results.

    Feed::Find->find_in_html($html [, $base_uri ])
        Given a reference to a string $html containing an HTML page, uses the same
        techniques as described above in find to find the feeds associated with that
        page. If you know the URI of the page, you should provide it in $base_uri, so
        that relative links can be properly made absolute. Feed::Find will attempt to
        determine the correct base URI, but unless that URI is specified in the HTML
        itself (in a "<meta>" tag), you'll need to supply it yourself. Returns a list
        of feed URIs.

LICENSE
    Feed::Find is free software; you may redistribute it and/or modify it under the
    same terms as Perl itself.

AUTHOR & COPYRIGHT
    Except where otherwise noted, Feed::Find is Copyright 2004 Benjamin Trott,
    ben+cpan@stupidfool.org. All rights reserved.

perl v5.10.1                          2011-01-28                         Feed::Find(3pm)
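Incidentally, the module is easy to drive straight from a shell prompt (a minimal sketch, assuming Feed::Find has been installed from CPAN):
Code:
# Print every feed URI discovered on the given page, one per line
perl -MFeed::Find -le 'print for Feed::Find->find($ARGV[0])' 'http://example.com/'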