Feasibility of opening a website link from Unix and getting a response in the form of XML or HTML

I just wanted to know whether it is possible to open a website link from Unix and get the response back in XML or HTML format.
The website is on our local network.
For example, something like this:

After a similar statement is executed, the output should give the response you would get by opening the link in Internet Explorer. I know this question might sound stupid, but I just wanted to ask, since I don't know what Unix is capable of; I only know basic shell scripting. Any help would be deeply appreciated.
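
To make the question concrete, here is a rough sketch of the kind of thing I have in mind, assuming a command-line HTTP client such as curl (or wget) is installed on the box; the URL and file name below are just placeholders, not the real link:

Code:
#!/bin/sh
# Placeholder URL for the local-network link (not the real one).
url="http://intranet.example.com/status.xml"

# Fetch the page and capture the raw XML/HTML response.
# -s silences curl's progress meter; -S still prints errors.
response=$(curl -sS "$url")
# Alternatively, with wget:  response=$(wget -q -O - "$url")

# Save the response to a file, or parse it further in the script.
printf '%s\n' "$response" > response.xml

Would something along these lines do what I am describing, or is there a better way?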


10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Changing Unix form to Microsoft Word form to be able to email it to someone.

Please, someone, I need information on how to change a Unix form/document into a Microsoft Word document so it can be emailed to another company. Please help ASAP. Thank you. (8 Replies)
Discussion started by: Cheraunm

2. Shell Programming and Scripting

HTML form to cgi help

I wrote a script to automate user account verification against PeopleSoft. Now I want to make it available to my peers via the web. It is running on Solaris. I have the form written, but am not sure how to make it work. I think the form should call a Perl CGI when submitted. The CGI should call... (7 Replies)
Discussion started by: 98_1LE

3. UNIX for Dummies Questions & Answers

opening non-html files in lynx??

When I try to open a txt file in lynx, I need to provide the filename or use wildcards. Autocompletion doesn't work for some reason. Also, trying to open files like .sh, .py, etc. ends up in the following error: lynx: Start file could not be found or is not text/html or text/plain ... (0 Replies)
Discussion started by: riwa

4. Web Development

Rewrite rules to change “link.html?hl=es” to “/es/link.html” etc?

Hey! Does anyone know how to create rewrite rules to change: “link.html?hl=en” to “/en/link.html” “link.html?hl=jp” to “/jp/link.html” “link.html?hl=es” to “/es/link.html” etc? Where "link.html" changes based on the page request? (2 Replies)
Discussion started by: Neo

5. Windows & DOS: Issues & Discussions

error opening website

Hi, I have an unusual problem, you might say. I can't open microsoft.com. I've checked the hosts file located somewhere in windows/system32/drivers, and it's not blocked there. What else could cause this problem? I need to download Microsoft Visual Studio and can't, because I can't open the website,... (1 Reply)
Discussion started by: c0mrade

6. Shell Programming and Scripting

Unix Script to read the XML file from Website

Hi Experts, I need a Unix shell script which can copy the XML file from the website pasted below and place it in my Unix directory. http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml Thanks in advance... (8 Replies)
Discussion started by: phani333

7. Post Here to Contact Site Administrators and Moderators

Slow response from website

Hi, I have been experiencing slow response from unix.com for the past 3-4 days. For example: most of the time the page does not reload instantly (when I do a manual reload from the browser); I am not able to view graphics (it displays only text); when posting to the forum, the page gets stuck for a considerably long... (6 Replies)
Discussion started by: clx

8. Solaris

man pages in html form

Hi, I would like to convert the standard online man pages from my Solaris 10 system into HTML form to publish them on my webpage. How can this be done in Solaris 10? Thanks for the help. (2 Replies)
Discussion started by: presul

9. Shell Programming and Scripting

Extract/Parse information from html (website)

Hello, I want to extract some information from an HTML file (website, http://www.energiecontracting.de/7-mitglieder/von-A-Z.php?a_z=B&seite=2) and save it in a predefined format (.csv). However, it seems that the code on that website is rather messy and I can't find a way to handle it... (5 Replies)
Discussion started by: TehOne

10. Shell Programming and Scripting

Script to alert about a slow link on the website

Hello all, currently I am using a script with curl that raises an alert if "200 OK" cannot be grepped from the response, i.e. the link is down. Is it possible to get an alert mail if a particular link on a website is not completely down but SLOW? (0 Replies)
Discussion started by: chirag991
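
Regarding that last discussion (alerting when a link is slow rather than completely down), here is a hedged sketch of one possible approach using curl's built-in timing output; the URL, threshold, and mail address are placeholders:

Code:
#!/bin/sh
# Hedged sketch: time a fetch with curl and mail an alert if it is too slow.
url="http://www.example.com/some/link.html"   # placeholder
threshold=5                                    # seconds (placeholder)
admin="admin@example.com"                      # placeholder

# %{time_total} prints the total transfer time in seconds.
t=$(curl -o /dev/null -s -w '%{time_total}' "$url")

# sh arithmetic is integer-only, so let awk do the floating-point compare.
if awk -v t="$t" -v max="$threshold" 'BEGIN { exit !(t > max) }'; then
    echo "$url took ${t}s (threshold: ${threshold}s)" |
        mail -s "Slow link alert: $url" "$admin"
fi

The idea is that %{time_total} reports how long the whole transfer took, so the script can alert on slowness even when the HTTP status is still 200 OK.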
WEB2DISK(1)                                   calibre                                   WEB2DISK(1)

NAME
       web2disk - part of calibre

SYNOPSIS
       web2disk URL

DESCRIPTION
       Where URL is for example http://google.com

       Whenever you pass arguments to web2disk that have spaces in them, enclose the arguments
       in quotation marks.

OPTIONS
       --version
              show program's version number and exit

       -h, --help
              show this help message and exit

       -d, --base-dir
              Base directory into which URL is saved. Default is .

       -t, --timeout
              Timeout in seconds to wait for a response from the server. Default: 10.0 s

       -r, --max-recursions
              Maximum number of levels to recurse, i.e. depth of links to follow. Default 1

       -n, --max-files
              The maximum number of files to download. This only applies to files from <a href>
              tags. Default is 2147483647

       --delay
              Minimum interval in seconds between consecutive fetches. Default is 0 s

       --encoding
              The character encoding for the websites you are trying to download. The default is
              to try and guess the encoding.

       --match-regexp
              Only links that match this regular expression will be followed. This option can be
              specified multiple times, in which case as long as a link matches any one regexp,
              it will be followed. By default all links are followed.

       --filter-regexp
              Any link that matches this regular expression will be ignored. This option can be
              specified multiple times, in which case as long as any regexp matches a link, it
              will be ignored. By default, no links are ignored. If both filter regexp and match
              regexp are specified, then filter regexp is applied first.

       --dont-download-stylesheets
              Do not download CSS stylesheets.

       --verbose
              Show detailed output information. Useful for debugging

SEE ALSO
       The User Manual is available at http://manual.calibre-ebook.com

       Created by Kovid Goyal <kovid@kovidgoyal.net>

web2disk (calibre 0.8.51)                    January 2013                               WEB2DISK(1)
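
For reference, a hedged example of how web2disk might be invoked with the options documented above (the URL and output directory are placeholders, not taken from this thread):

Code:
# Mirror a site into ./mirror, following links two levels deep,
# waiting up to 15 seconds for each response.
web2disk --base-dir ./mirror --max-recursions 2 --timeout 15 "http://intranet.example.com/"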