httpindex(1)						      General Commands Manual						      httpindex(1)

NAME
       httpindex - HTTP front-end for SWISH++ indexer

SYNOPSIS
       wget [ options ] URL... 2>&1 | httpindex [ options ]

DESCRIPTION
       httpindex is a front-end for index++(1) to index files copied from remote servers using wget(1). The files (in a copy of the remote directory structure) can be kept, deleted, or replaced with their descriptions after indexing.

OPTIONS
   wget Options
       The wget(1) options that are required are: -A, -nv, -r, and -x; the ones that are highly recommended are: -l, -nh, -t, and -w. (See the EXAMPLE.)

   httpindex Options
       httpindex accepts the same short options as index++(1) except for -H, -I, -l, -r, -S, and -V. The following options are unique to httpindex:

       -d     Replace the text of local copies of retrieved files with their descriptions after they have been indexed. This is useful for displaying file descriptions in search results without having to keep complete copies of the remote files, thus saving filesystem space. (See the extract_description() function in WWW(3) for details about how descriptions are extracted.)

       -D     Delete the local copies of retrieved files after they have been indexed. This prevents your local filesystem from filling up with copies of remote files.
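
       For instance, a minimal sketch using -D instead of -d, assuming the same placeholder server (www.foo.com) and file types as in the EXAMPLE below:

           # Index remote HTML and text files, then delete the local copies (-D)
           wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 | httpindex -D -e'html:*.html,text:*.txt'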

EXAMPLE
       To index all HTML and text files on a remote web server, keeping descriptions locally:

           wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 | httpindex -d -e'html:*.html,text:*.txt'

       Note that you need to redirect wget(1)'s output from standard error to standard output in order to pipe it to httpindex.

EXIT STATUS
       Exits with a value of zero only if indexing completed successfully; non-zero otherwise.
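
       A sketch of how that status might be checked from a shell script (a usage assumption, not part of the original page); since httpindex is the last command in the pipeline, its status is the pipeline's status:

           # Hypothetical wrapper around an indexing run
           if wget -A html,txt -rxnv http://www.foo.com 2>&1 | httpindex -e'html:*.html,text:*.txt'; then
               echo "indexing completed successfully"
           else
               echo "indexing failed" >&2
           fi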

CAVEATS
       In addition to the caveats for index++(1), httpindex does not correctly handle multiple -e, -E, -m, or -M options, because the Perl script uses the standard Getopt::Std package for processing command-line options, which does not support repeated options. The last of any of those options "wins." The work-around is to pass multiple values, separated by commas, to a single instance of the option. For example, instead of:

           httpindex -e'html:*.html' -e'text:*.txt'

       do this:

           httpindex -e'html:*.html,text:*.txt'

SEE ALSO
       index++(1), wget(1), WWW(3)

AUTHOR
       Paul J. Lucas <pauljlucas@mac.com>

SWISH++							       August 2, 2005						      httpindex(1)

HXCOPY(1)							  HTML-XML-utils							 HXCOPY(1)

NAME
       hxcopy - copy an HTML file and update its relative links

SYNOPSIS
       hxcopy [ -i old-URL ] [ -o new-URL ] [ file-or-URL [ file-or-URL ] ]

DESCRIPTION
       The hxcopy command copies its first argument to its second argument, while updating relative links. The input is assumed to be HTML or XHTML and may be slightly reformatted in the process.

       If the second argument is omitted, hxcopy writes to standard output; in this case the option -o is required. If the first argument is also omitted, hxcopy reads from standard input; in this case the option -i is required.

OPTIONS
       The following options are supported:

       -i old-URL
              For the purposes of updating relative links, act as if old-URL is the location from which the input is copied. If this option is omitted, the actual location of the first argument is used for calculating relative links.

       -o new-URL
              For the purposes of updating relative links, act as if new-URL is the location to which the input is copied. If this option is omitted, the actual location of the second argument is used for calculating relative links.

ENVIRONMENT
       To use a proxy to retrieve remote files, set the environment variables http_proxy and ftp_proxy. E.g., http_proxy="http://localhost:8080/"
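
       A minimal sketch, reusing the localhost:8080 proxy address above and a hypothetical remote file:

           # Route hxcopy's remote retrievals through a local proxy
           export http_proxy="http://localhost:8080/"
           export ftp_proxy="http://localhost:8080/"
           hxcopy http://example.org/foo.html foo.html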

BUGS
       Unlike the last argument of cp(1), the last argument of hxcopy must be a file, not a directory.

       The second argument must be a local file. Writing to a URL is not yet implemented. To work around this, replace

           hxcopy file.html http://example.org/file.html

       by

           hxcopy -o http://example.org/file.html file.html tmp.html

       and then upload tmp.html to the given URL with some other command, such as curl(1).

       The first argument, however, may be a URL; hxcopy will download the given file. (Currently only HTTP is supported.)
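
       A sketch of that complete work-around, assuming the server at example.org accepts HTTP PUT uploads:

           # Rewrite links as if the file lived at the remote URL...
           hxcopy -o http://example.org/file.html file.html tmp.html
           # ...then upload the result; curl's -T option uploads via HTTP PUT
           curl -T tmp.html http://example.org/file.html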

EXAMPLE
       Assume the HTML file foo.html contains a relative link to "../bar.html". Here are some examples of commands:

       hxcopy foo.html bar/foo.html
              The file foo.html is copied to bar/foo.html and the relative link to "../bar.html" becomes "../../bar.html".

       hxcopy foo.html ../foo.html
              The file foo.html is copied to ../foo.html and the relative link to "../bar.html" is rewritten as "bar.html".

       hxcopy -i http://my.org/dir1/foo.html -o http://my.org/foo.html file1.html file2.html
              The file file1.html is copied to file2.html and the relative link to "../bar.html" is rewritten as "bar.html". A command like this may be useful to update files that are later uploaded to a server.

SEE ALSO
       cp(1), curl(1), hxwls(1)

6.x								   9 Dec 2008							 HXCOPY(1)