Feasibility of opening a website link from Unix and getting a response in the form of XML or HTML
I just wanted to know whether it is possible to open a website link and get the response back in XML or HTML format...
The website is on the local network...
For example, something like this:
After a similar statement is executed, the output should give the response you would get from opening the link in Internet Explorer... I know this question might sound stupid, but I just wanted to know, since I don't know what Unix is capable of... I only know basic shell scripting... Any help would be deeply appreciated...
Last edited by vivek d r; 06-14-2012 at 03:13 AM..
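For what it's worth, this is exactly what curl does: it makes the HTTP request and prints the raw HTML/XML response. A minimal sketch; the demo fetches a local file:// URL so it runs without network access, and the paths are placeholders for your intranet link:

```shell
# Create a stand-in page; with a real server you would skip this step
# and point curl at something like http://server.local/data.xml instead.
printf '<html><body>hello</body></html>\n' > /tmp/demo.html

# -f: fail on HTTP errors, -s: silent, -S: still show errors
curl -fsS "file:///tmp/demo.html" -o /tmp/response.html
cat /tmp/response.html
```

The saved file contains the same bytes a browser would receive, so it can be parsed or grepped like any other text.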
Please, someone, I need information on how to change a Unix form/document into a Microsoft Word document in order to be emailed to another company. Please help ASAP. Thank you :confused: (8 Replies)
I wrote a script to automate user account verification against PeopleSoft. Now I want to make it available to my peers via the web. It is running on Solaris.
I have the form written, but am not sure how to make it work. I think the form should call a Perl CGI when submitted. The CGI should call... (7 Replies)
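The shape the poster describes can be sketched quickly. This is a hedged example in shell rather than Perl (the idea is identical): the web server runs the script from cgi-bin, and a GET form's fields arrive in the QUERY_STRING environment variable. The "user" field name and the output text are invented for illustration.

```shell
#!/bin/sh
# Minimal CGI sketch: emit the HTTP header, pull a field out of
# QUERY_STRING, and render a small HTML response.
printf 'Content-type: text/html\r\n\r\n'
user=$(printf '%s' "$QUERY_STRING" | sed -n 's/.*user=\([^&]*\).*/\1/p')
printf '<html><body>Verifying account: %s</body></html>\n' "$user"
```

The HTML form would then point at it with something like `<form action="/cgi-bin/verify.sh" method="get">`, assuming the server has CGI enabled for that directory.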
When I try to open a txt file in lynx, I need to provide the filename or use wildcards to open it. Autocompletion doesn't work for some reason.
Also, trying to open files like .sh, .py, etc. ends up in the following error:
lynx: Start file could not be found or is not text/html or text/plain ... (0 Replies)
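That error is lynx refusing a start file whose type it doesn't recognise; the -force_html switch tells it to render the file anyway. A small sketch, guarded in case lynx isn't installed, and using -dump so it stays non-interactive:

```shell
# lynx rejects start files that are not text/html or text/plain;
# -force_html overrides that check, -dump writes to stdout instead of
# opening the interactive browser.
printf 'echo hello\n' > /tmp/demo.sh
if command -v lynx >/dev/null 2>&1; then
    lynx -dump -force_html /tmp/demo.sh
fi
```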
Hey!
Does anyone know how to create rewrite rules to change:
“link.html?hl=en” to “/en/link.html”
“link.html?hl=jp” to “/jp/link.html”
“link.html?hl=es” to “/es/link.html”
etc?
Where "link.html" changes based on the page request? (2 Replies)
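With Apache's mod_rewrite this needs a RewriteCond, because RewriteRule itself only matches the path, not the query string. A sketch, assuming Apache and two-letter language codes; the pattern and flags are illustrative and would need testing against the real site layout:

```apache
RewriteEngine On
# /link.html?hl=en  ->  /en/link.html  (any .html page, any 2-letter code)
RewriteCond %{QUERY_STRING} ^hl=([a-z]{2})$
RewriteRule ^/?(.+\.html)$ /%1/$1? [L,R=301]
```

The trailing `?` on the substitution discards the original query string so `hl=en` does not reappear on the redirected URL.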
Hi. I have an unusual problem, you might say: I can't open microsoft.com. I've checked the hosts file located somewhere under windows/system32/drivers, and it's not blocked from there. What else could cause this problem? I need to download Microsoft Visual Studio and I can't, because I can't open the website... (1 Reply)
Hi Experts,
I need a Unix shell script which can copy the XML file from the website pasted below and place it in my Unix directory.
http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml
Thanks in Advance... (8 Replies)
Discussion started by: phani333
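A minimal sketch of such a script using curl's -o option. The demo fetches a local file:// URL so it can run anywhere; the real Westpac URL is left in a comment, and the target directory is a placeholder. Cron could then run the script on a schedule.

```shell
#!/bin/sh
# /tmp/xmldrop stands in for "my unix directory". For the real site:
#   url="http://www.westpac.co.nz/olcontent/olcontent.nsf/fx.xml"
mkdir -p /tmp/xmldrop
printf '<rates><fx ccy="USD"/></rates>\n' > /tmp/fx-src.xml
url="file:///tmp/fx-src.xml"
curl -fsS "$url" -o /tmp/xmldrop/fx.xml
```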
Hi,
I am experiencing slow response from unix.com for the past 3-4 days.
Like:
- most of the time the page does not reload instantly (when I do a manual reload from the browser)
- not able to view graphics (displays only text)
- when posting to the forum, the page gets stuck for a considerably long... (6 Replies)
Hi
I would like to convert standard online man pages from my Solaris 10 system into HTML form to publish them on my webpage.
How can this be done in Sol10?
Thanks for the help. (2 Replies)
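One hedged approach: render the nroff source of a man page with groff's HTML output device. This assumes GNU groff with its HTML device is installed (stock Solaris 10 troff may not have it, so the call is guarded). The demo writes a tiny man page of its own so it is self-contained; for real pages you would point at files under /usr/share/man.

```shell
# Write a minimal man page source, then render it to HTML.
cat > /tmp/demo.1 <<'EOF'
.TH DEMO 1
.SH NAME
demo \- a sample page
EOF
if command -v groff >/dev/null 2>&1; then
    groff -Thtml -man /tmp/demo.1 > /tmp/demo.1.html 2>/dev/null || true
fi
```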
Hello,
I want to extract some information from an HTML file (website: http://www.energiecontracting.de/7-mitglieder/von-A-Z.php?a_z=B&seite=2 ) and save it in a predefined format (.csv). However, it seems that the code on that website is kinda messy and I can't find a way to handle it... (5 Replies)
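A hedged sketch of the general technique: pull field pairs out of HTML with sed and emit quoted CSV. The tag layout below is invented for the demo; the real scraped markup will need its own patterns (and truly messy HTML is usually better handled by a proper parser than by regexes).

```shell
# Sample input standing in for the downloaded page.
cat > /tmp/members.html <<'EOF'
<div class="member"><b>Baumann GmbH</b> <i>Berlin</i></div>
<div class="member"><b>Becker AG</b> <i>Bremen</i></div>
EOF
# Capture the <b> and <i> contents of each line as two CSV columns.
sed -n 's/.*<b>\(.*\)<\/b> <i>\(.*\)<\/i>.*/"\1","\2"/p' \
    /tmp/members.html > /tmp/members.csv
cat /tmp/members.csv
```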
Hello all,
Currently I am using a script with curl to send an alert if "200 OK" is not grepped and the link is down.
Is it possible to get an alert mail if a particular link on a website is not completely down but SLOW?? (0 Replies)
Discussion started by: chirag991
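Yes: curl can report timing via its -w format, and the measured time can be compared to a threshold. A sketch; the 5-second threshold and the mailx line are illustrative, and the demo probes a local file:// URL so it runs offline (substitute the real link).

```shell
#!/bin/sh
# %{time_total} is the full transfer time in seconds.
printf 'x\n' > /tmp/probe.txt
url="file:///tmp/probe.txt"
t=$(curl -fsS -o /dev/null -w '%{time_total}' "$url")
# Floating-point comparison via awk, since test(1) is integer-only.
slow=$(awk -v t="$t" 'BEGIN { if (t > 5) print "yes"; else print "no" }')
if [ "$slow" = "yes" ]; then
    echo "ALERT: $url took ${t}s"   # | mailx -s "slow link" ops@example.com
fi
```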
LEARN ABOUT DEBIAN
web2disk
WEB2DISK(1) calibre WEB2DISK(1)
NAME
web2disk - part of calibre
SYNOPSIS
web2disk URL
DESCRIPTION
Where URL is for example http://google.com
Whenever you pass arguments to web2disk that have spaces in them, enclose the arguments in quotation marks.
OPTIONS
--version
show program's version number and exit
-h, --help
show this help message and exit
-d, --base-dir
Base directory into which URL is saved. Default is .
-t, --timeout
Timeout in seconds to wait for a response from the server. Default: 10.0 s
-r, --max-recursions
Maximum number of levels to recurse i.e. depth of links to follow. Default 1
-n, --max-files
The maximum number of files to download. This only applies to files from <a href> tags. Default is 2147483647
--delay
Minimum interval in seconds between consecutive fetches. Default is 0 s
--encoding
The character encoding for the websites you are trying to download. The default is to try and guess the encoding.
--match-regexp
Only links that match this regular expression will be followed. This option can be specified multiple times, in which case as long as a link matches any one regexp, it will be followed. By default all links are followed.
--filter-regexp
Any link that matches this regular expression will be ignored. This option can be specified multiple times, in which case as long as any regexp matches a link, it will be ignored. By default, no links are ignored. If both filter regexp and match regexp are specified, then filter regexp is applied first.
--dont-download-stylesheets
Do not download CSS stylesheets.
--verbose
Show detailed output information. Useful for debugging
SEE ALSO
The User Manual is available at http://manual.calibre-ebook.com
Created by Kovid Goyal <kovid@kovidgoyal.net>
web2disk (calibre 0.8.51) January 2013 WEB2DISK(1)