05-17-2002
Really, thanks!
But tell me one more thing: is it possible to also get a count of all the HTML files contained in all the subfolders?
Thanks
Giuseppe
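One common way to do this (a sketch, assuming a POSIX shell with find and wc available; the `site` directory here is just a stand-in for the real folder) is to let find walk the subfolders and count the matches:

```shell
# A small demo tree standing in for the real folders.
mkdir -p site/sub
touch site/index.html site/sub/page.html site/sub/notes.txt

# Recursively count every .html file under 'site'.
find site -type f -name '*.html' | wc -l
```

From inside the folder in question, the same idea is just `find . -type f -name '*.html' | wc -l`; add `-o -name '*.htm'` inside escaped parentheses if .htm files should count too.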
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
I am a programmer, but need to work with UNIX in this particular situation. I am used to the plain "mail -s" command and am also familiar with how to send attachments in HTML... but I now need to send an email (not an attachment) in HTML format so I can embed links... etc.
I am told... (2 Replies)
Discussion started by: cgardiner
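For questions like this one, the usual trick is to add MIME headers so the mail body itself is treated as HTML. A minimal sketch (the address, subject, and body are placeholders; it writes the message to a file rather than sending it):

```shell
# Build an HTML-bodied message with explicit MIME headers.
# Piping the result to an MTA sends it; here we only write the file.
cat > message.txt <<'EOF'
To: someone@example.com
Subject: HTML test
MIME-Version: 1.0
Content-Type: text/html; charset=UTF-8

<html><body><p>See <a href="http://example.com">this link</a>.</p></body></html>
EOF

# To actually send it (assuming a working local sendmail):
# sendmail -t < message.txt
```

The blank line between the headers and the body is required; without it the Content-Type header ends up inside the body.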
2. UNIX for Advanced & Expert Users
If I try using lp to print HTML files (being generated by our application) to a regular print queue, all it prints is the raw markup.
How can I print the generated HTML as the user would see it in a browser, using lp?
TIA (1 Reply)
Discussion started by: KingOfHearts
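Truly browser-quality output needs an HTML renderer in the pipeline (a text browser's -dump mode, or an HTML-to-PostScript converter, depending on what is installed). If a plain-text rendering is acceptable, a crude sketch is to strip the tags before handing the file to lp (the file name and queue name here are placeholders):

```shell
# Sample page standing in for the application's output.
cat > report.html <<'EOF'
<html><body><h1>Daily Report</h1><p>All systems nominal.</p></body></html>
EOF

# Crude text rendering: replace every tag with a space.
sed 's/<[^>]*>/ /g' report.html > report.txt

# Then print the text version (commented out here):
# lp -d myqueue report.txt
cat report.txt
```

This ignores CSS, images, and layout entirely; it only keeps the visible text.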
3. UNIX for Dummies Questions & Answers
Hello,
On a CentOS 5.0 server, Apache 2.2 delivers static HTML pages. How could I compress those HTML pages to gain speed and save bandwidth? Is there a utility that would be effective and safe?
Thanks (2 Replies)
Discussion started by: JCR
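On Apache 2.2 this is usually handled by mod_deflate, which gzips responses on the fly rather than compressing the files on disk. A hedged configuration sketch (the module path and MIME types may differ on a given CentOS build):

```apache
# Enable on-the-fly gzip compression of HTML responses (Apache 2.2).
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css
```

Browsers that do not advertise gzip support in Accept-Encoding still receive the uncompressed pages, so this is generally safe to enable.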
4. Shell Programming and Scripting
Hi friends,
I have to cut a large HTML file between the tags "<!-- DEFACEMENTS ROWS -->"
and "<!-- DISCLAIMER FOOTER -->"
and store the cut data in another file.
Please help me! (2 Replies)
Discussion started by: praneshbmishra
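sed's address-range form handles this kind of cut directly. A self-contained sketch, assuming each marker comment appears exactly once and on its own line (the sample file stands in for the large one):

```shell
# Sample input standing in for the large HTML file.
cat > page.html <<'EOF'
<p>header stuff</p>
<!-- DEFACEMENTS ROWS -->
<tr><td>row 1</td></tr>
<!-- DISCLAIMER FOOTER -->
<p>footer stuff</p>
EOF

# Keep only the section between the two marker comments (markers included)
# and store it in a separate file.
sed -n '/<!-- DEFACEMENTS ROWS -->/,/<!-- DISCLAIMER FOOTER -->/p' page.html > cut.html
```

To exclude the marker lines themselves, a follow-up `sed '1d;$d' cut.html` drops the first and last line of the extract.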
5. Web Development
By launching two SQL scripts I get two HTML files, report_1.html and report_2.html, with different background and text colors (white/blue for the former and silver/black for the latter), but if I try to concatenate the two HTML files by using the cat command on the UNIX server where Oracle is installed (cat... (2 Replies)
Discussion started by: Mark1970
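A plain cat produces one file with two complete <html> wrappers, which browsers may render oddly (and the second page's colors can override the first's). A hedged sketch that keeps the first file's header and splices in only the second file's body content, assuming each report has its <body> and </body> tags on lines of their own (the sample reports are stand-ins):

```shell
# Two stand-in reports with different styles.
cat > report_1.html <<'EOF'
<html><head><title>Report 1</title></head>
<body bgcolor="white">
<p>first report</p>
</body>
</html>
EOF
cat > report_2.html <<'EOF'
<html><head><title>Report 2</title></head>
<body bgcolor="silver">
<p>second report</p>
</body>
</html>
EOF

# Everything in report_1 up to (not including) its </body>,
# then everything in report_2 after its <body ...> line.
{ sed '/<\/body>/,$d' report_1.html
  sed '1,/<body/d' report_2.html
} > combined.html
```

Note that the second report's text then inherits the first report's body attributes; keeping both color schemes would need per-section styling instead.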
6. Shell Programming and Scripting
Does anybody know how to remove all URLs from HTML files?
All the URLs are links with anchor text in the form of
<a href="http://www.anydomain.com">ANCHOR</a>
They may start with www or not.
The goal is to delete all the URLs, keep the ANCHOR text, and if possible change the tags around the anchor to... (2 Replies)
Discussion started by: georgi58
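For simple, one-line anchors like the form shown, a sed back-reference can replace the whole <a ...>TEXT</a> element with just TEXT. A sketch, assuming anchors never span lines and contain no nested tags (the sample file is a stand-in):

```shell
# Sample input with two anchors.
cat > links.html <<'EOF'
<p>Visit <a href="http://www.anydomain.com">ANCHOR</a> for more,
or <a href="http://other.example">Other Anchor</a> instead.</p>
EOF

# Replace each <a ...>TEXT</a> with just TEXT.
sed 's/<a [^>]*>\([^<]*\)<\/a>/\1/g' links.html > stripped.html
```

To wrap the anchor text in a different tag instead of dropping the markup, change the replacement to something like `<b>\1<\/b>`.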
7. Shell Programming and Scripting
Hi
Does somebody know how to do a word count on a .html file?
Just the text words, without all the HTML markup.
Thanks (4 Replies)
Discussion started by: louisJ
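A rough approach is to strip the tags first and count what remains. This is only a sketch: it ignores HTML entities and would still count the contents of script or style blocks, but for plain pages it is close (the sample file is a stand-in):

```shell
# Sample page standing in for the real .html file.
cat > doc.html <<'EOF'
<html><body>
<p>Hello brave new world</p>
</body></html>
EOF

# Replace every tag with a space, then count the remaining words.
sed 's/<[^>]*>/ /g' doc.html | wc -w
```

On the sample above this counts the four words of the paragraph and none of the markup.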
8. Shell Programming and Scripting
Hi!
I just want to count the number of files in a directory and write them to a new text file, with the number of files and their names.
The output should look like this,
assuming the one below is a new file created by the script:
Number of files in directory = 25
1. a.txt
2. abc.txt
3. asd.dat... (20 Replies)
Discussion started by: Akshay Hegde
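A short sketch of that output format, using nl to do the numbering (the demo directory and file names are stand-ins; the listing is captured to a temporary file outside the directory so it doesn't count itself):

```shell
# Scratch directory with a few sample files.
mkdir -p demo_dir
touch demo_dir/a.txt demo_dir/abc.txt demo_dir/asd.dat

# List the names, write the count, then append a numbered listing.
ls -1 demo_dir > names.tmp
printf 'Number of files in directory = %d\n' "$(wc -l < names.tmp)" > filelist.txt
nl -w1 -s'. ' names.tmp >> filelist.txt
rm -f names.tmp
cat filelist.txt
```

For the three sample files this writes the count line followed by "1. a.txt", "2. abc.txt", "3. asd.dat".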
9. OS X (Apple)
Hello
I need to merge several HTML files into one and possibly convert the result to ".rtf".
All files are in the same folder.
The files contain links I need to keep.
Any hints?
Thanks (4 Replies)
Discussion started by: Etnad
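Since each file carries its own <html> wrapper, a plain cat gives invalid markup. A hedged sketch that strips the wrappers and rebuilds a single document while keeping the links, assuming each page keeps its wrapper tags on single lines as in these stand-in samples:

```shell
# Two sample pages in the same folder (stand-ins for the real files).
cat > one.html <<'EOF'
<html><body><p><a href="http://example.com/1">link one</a></p></body></html>
EOF
cat > two.html <<'EOF'
<html><body><p><a href="http://example.com/2">link two</a></p></body></html>
EOF

# Concatenate the body content of every page into one valid document.
{ echo '<html><body>'
  sed 's/<html><body>//; s/<\/body><\/html>//' one.html two.html
  echo '</body></html>'
} > merged.html
```

On OS X, textutil(1) can then handle the RTF step, e.g. `textutil -convert rtf merged.html`; in my experience it preserves hyperlinks, but check the result.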
10. Shell Programming and Scripting
Hi experts,
I am using ksh and I need to display files with a number in front of each file name so the user can select one by entering the number.
I am trying to use the following command to display the list with numbers, but I do not know how to capture the number and identify which file it is to be used for... (5 Replies)
Discussion started by: mysocks
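In ksh (and bash) the select builtin does exactly this: it prints a numbered menu, reads the user's number, and sets the loop variable to the matching entry. A sketch with stand-in file names; the reply is piped in so the demo is non-interactive, whereas normally the user would type it:

```shell
# Sample files standing in for the real list.
touch alpha.txt beta.txt gamma.txt

# 'select' prints the numbered menu (to stderr), reads the reply,
# and sets $f to the file whose number was entered.
result=$(printf '2\n' | {
  select f in alpha.txt beta.txt gamma.txt; do
    echo "You chose: $f"
    break
  done
} 2>/dev/null)
echo "$result"
```

With a glob such as `select f in *.txt; do ...` the menu builds itself from the directory contents; an invalid number simply leaves $f empty, which is worth checking before using it.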
httpindex
httpindex(1) General Commands Manual httpindex(1)
NAME
httpindex - HTTP front-end for SWISH++ indexer
SYNOPSIS
wget [ options ] URL... 2>&1 | httpindex [ options ]
DESCRIPTION
httpindex is a front-end for index++(1) to index files copied from remote servers using wget(1). The files (in a copy of the remote directory structure) can be kept, deleted, or replaced with their descriptions after indexing.
OPTIONS
wget Options
The wget(1) options that are required are: -A, -nv, -r, and -x; the ones that are highly recommended are: -l, -nh, -t, and -w. (See the EXAMPLE.)
httpindex Options
httpindex accepts the same short options as index++(1) except for -H, -I, -l, -r, -S, and -V.
The following options are unique to httpindex:
-d Replace the text of local copies of retrieved files with their descriptions after they have been indexed. This is useful for displaying file descriptions in search results without having to keep complete copies of the remote files, thus saving filesystem space. (See the extract_description() function in WWW(3) for details about how descriptions are extracted.)
-D Delete the local copies of retrieved files after they have been indexed. This prevents your local filesystem from filling up with copies of remote files.
EXAMPLE
To index all HTML and text files on a remote web server keeping descriptions locally:
wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 |
httpindex -d -e'html:*.html,text:*.txt'
Note that you need to redirect wget(1)'s output from standard error to standard output in order to pipe it to httpindex.
EXIT STATUS
Exits with a value of zero only if indexing completed successfully; non-zero otherwise.
CAVEATS
In addition to the caveats for index++(1), httpindex does not correctly handle the use of multiple -e, -E, -m, or -M options (because the Perl script uses the standard Getopt::Std package for processing command-line options, which doesn't support repeated options). The last of any of those options ``wins.''
The work-around is to pass multiple values, separated by commas, to a single instance of the option. For example, if you want
to do:
httpindex -e'html:*.html' -e'text:*.txt'
do this instead:
httpindex -e'html:*.html,text:*.txt'
SEE ALSO
index++(1), wget(1), WWW(3)
AUTHOR
Paul J. Lucas <pauljlucas@mac.com>
SWISH++ August 2, 2005 httpindex(1)