Hi. Yes, the commands I have tried are:
wget -r -l 2 url
wget -r -l 3 url
and also with 'url' and "url".
Essentially I want a direct mirror of the site, so that I can click from the index page above through to the HTML pages for the different "issues", and then on to the PDF files. Possibly I will need to expand the "volumes" before doing this?
I've been using this page as a guide:
In fact I haven't managed to copy the HTML files yet, so obviously my understanding is wrong somewhere!
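A recursive fetch that stays clickable offline usually needs a few more flags than -r and -l; a minimal sketch, with the URL and depth as placeholders: -k rewrites links for local browsing, -p pulls in page requisites, -np stops wget climbing above the index page, and -A restricts what is kept to the listed file types.
# mirror HTML pages and PDFs, rewriting links for offline browsing
wget -r -l 3 -np -k -p -A html,htm,pdf http://example.com/index.html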
I've been having problems downloading Red Hat 7.2 from their FTP site. It downloads rather slowly (between 2 and 3 KB/sec, even though I'm on broadband) and after about 10 minutes stops downloading altogether. Am I doing something wrong? (2 Replies)
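Resuming and retrying can work around stalls like this; a sketch, not a diagnosis, with the mirror path as a placeholder: -c resumes a partial file, -t 0 retries indefinitely, and --timeout makes wget abandon a dead connection so the retry can kick in.
wget -c -t 0 --timeout=60 ftp://ftp.example.com/pub/redhat/7.2/iso/disc1.iso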
Hi,
I built the Linux 2.6 kernel with the following toolchain:
binutils:2.16
gcc:3.4.4
glibc:2.3.5
kernel:2.6.10
and applied the corresponding patches to it. I got the kernel image and downloaded it onto the AT91RM9200 board. But when I boot the image it is showing the... (1 Reply)
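For reference, a typical 2.6-era cross-build for an ARM board looks roughly like this (a sketch only: the arm-linux- tool prefix is an assumption about the poster's toolchain, and the uImage target assumes the board boots via U-Boot):
# configure for the target, then build a U-Boot-wrapped kernel image
make ARCH=arm CROSS_COMPILE=arm-linux- menuconfig
make ARCH=arm CROSS_COMPILE=arm-linux- uImage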
Hi All,
I have a requirement to download the dynamic form content displayed in a webpage as a PDF file. The form content is not too complex but intermediate: it has textboxes, images, textareas, radio buttons, dropdowns, etc.
Can anyone suggest how I can achieve this? Your... (0 Replies)
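wget alone won't render a form into a PDF; a headless HTML-to-PDF converter is the usual route. A minimal sketch with wkhtmltopdf, assuming it is installed and the page is reachable without a login (the URL is a placeholder):
# render the live page, form widgets included, straight to PDF
wkhtmltopdf http://example.com/form.html form.pdf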
Hello,
I am getting a HTTP error while downloading solaris patches using wget.
'Downloading unsigned patch 113096-03.
--2010-06-18 03:51:15-- http://sunsolve.sun.com/pdownload.pl?target=113096-03&method=h
Resolving sunsolve.sun.com (sunsolve.sun.com)... 192.18.108.40
Connecting to... (5 Replies)
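One likely culprit is the unquoted & in the URL: the shell treats & as "run in background" and cuts the request off at target=113096-03, so method=h never reaches the server. Quoting the whole URL avoids that:
wget "http://sunsolve.sun.com/pdownload.pl?target=113096-03&method=h"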
Hi there,
I've got my own domain, ftp etc.. I'm using cPanel and I want to download a file periodically, every say 24 hours.
I've used this command:
wget -t inf http://www.somesite.com/webcam.jpg
ftp://i@MyDomain.net:Password@ftp.MyDomain.net... (24 Replies)
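For the every-24-hours part, a cron entry is simpler than keeping wget running; a sketch assuming a standard crontab, with placeholder paths:
# run daily at 03:00; -q keeps cron mail quiet, -O fixes the output name
0 3 * * * wget -q -O /home/user/webcam.jpg http://www.somesite.com/webcam.jpg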
Hello everyone. I'm new both to the forum and to unix scripting, and this website has been very useful in putting together a script I am working on. However, I have run into a bit of a snag, which is why I have come here seeking help. First I will say what I am trying to do, and then what I have... (2 Replies)
Hi,
I would like to download a file from a https website. I don't have the file name as it changes every day.
I am using the following command:
wget --no-check-certificate -r -np --user=ABC --password=DEF -O temp.txt https://<website/directory>
I am getting the following error in my... (9 Replies)
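Note that -O temp.txt combined with -r concatenates every retrieved document into temp.txt, which is rarely what you want. Dropping -O and accepting files by pattern is the usual fix; a sketch, where the -A pattern is a guess at the changing file name and the placeholder URL is kept from the post:
wget --no-check-certificate -r -np -nd --user=ABC --password=DEF -A "*.txt" "https://<website/directory>"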
wget -i genedx.txt
The command above will download multiple PDF files from a site, but how can I download and convert these to .txt?
I have attached the master list (genedx.txt, which contains the URLs and file names)
as well as the two PDFs that are downloaded. I am trying to have those... (7 Replies)
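pdftotext from poppler-utils handles the conversion step; a sketch assuming it is installed, run in the download directory:
# fetch every URL in the list, then convert each PDF to a matching .txt
wget -i genedx.txt
for f in *.pdf; do pdftotext "$f" "${f%.pdf}.txt"; done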
I need a hint on using wget to capture free content from a TV station that streams its material for a while before it appears on any video platform. That means no illegal methods, because it is on air, recently published, and freely available. But reading the manual for wget I tried the... (5 Replies)
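If the station exposes a plain HTTP stream URL, wget can simply dump it to a file; a minimal sketch with a placeholder URL (many stations use protocols wget cannot handle, in which case a dedicated stream downloader is needed instead):
# save the stream to disk until interrupted with Ctrl-C
wget -O capture.ts "http://example.com/live/stream.ts"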
LEARN ABOUT DEBIAN
httpindex
httpindex(1) General Commands Manual httpindex(1)
NAME
httpindex - HTTP front-end for SWISH++ indexer
SYNOPSIS
wget [ options ] URL... 2>&1 | httpindex [ options ]
DESCRIPTION
httpindex is a front-end for index++(1) to index files copied from remote servers using wget(1). The files (in a copy of the remote directory structure) can be kept, deleted, or replaced with their descriptions after indexing.
OPTIONS
wget Options
The wget(1) options that are required are: -A, -nv, -r, and -x; the ones that are highly recommended are: -l, -nh, -t, and -w. (See the EXAMPLE.)
httpindex Options
httpindex accepts the same short options as index++(1) except for -H, -I, -l, -r, -S, and -V.
The following options are unique to httpindex:
-d Replace the text of local copies of retrieved files with their descriptions after they have been indexed. This is useful to display file descriptions in search results without having to keep complete copies of the remote files, thus saving filesystem space. (See the extract_description() function in WWW(3) for details about how descriptions are extracted.)
-D Delete the local copies of retrieved files after they have been indexed. This prevents your local filesystem from filling up with copies of remote files.
EXAMPLE
To index all HTML and text files on a remote web server keeping descriptions locally:
wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 |
httpindex -d -e'html:*.html,text:*.txt'
Note that you need to redirect wget(1)'s output from standard error to standard output in order to pipe it to httpindex.
EXIT STATUS
Exits with a value of zero only if indexing completed successfully; non-zero otherwise.
CAVEATS
In addition to those for index++(1), httpindex does not correctly handle multiple -e, -E, -m, or -M options (because the Perl script uses the standard Getopt::Std package to process command-line options, and that package doesn't support repeated options). The last of any of those options ``wins.'' The work-around is to pass multiple comma-separated values to a single instance of the option. For example, if you want
to do:
httpindex -e'html:*.html' -e'text:*.txt'
do this instead:
httpindex -e'html:*.html,text:*.txt'
SEE ALSO
index++(1), wget(1), WWW(3)
AUTHOR
Paul J. Lucas <pauljlucas@mac.com>
SWISH++ August 2, 2005 httpindex(1)