Full Discussion: Monitor Website
Post 302929501 by charli1 on Monday 22nd of December 2014 07:24:15 AM
Quote:
Originally Posted by RudiC
Run lynx or wget to download the page contents, and check that for the desired keyword(s) with e.g. grep or awk
Thanks for the reply, mate. Can you please edit the above script to use wget or lynx?

Thank you.

Last edited by charli1; 12-23-2014 at 03:51 AM..
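For reference, a minimal sketch of RudiC's suggestion, assuming wget (or lynx) is installed; the URL, keyword, and alert action below are placeholders to adapt:

    #!/bin/bash
    # Sketch only: URL, KEYWORD, and the alert action are placeholder assumptions.
    URL="http://www.example.com/"
    KEYWORD="It works"

    # wget: -q = quiet, -O - = write the page to standard output
    if wget -q -O - "$URL" | grep -q "$KEYWORD"; then
        echo "OK: '$KEYWORD' found on $URL"
    else
        echo "ALERT: '$KEYWORD' not found on $URL"
        # e.g. send mail or restart a service here
    fi

    # lynx alternative -- checks the rendered text rather than the raw HTML:
    #   lynx -dump "$URL" | grep -q "$KEYWORD"

grep -q prints nothing and signals a match purely through its exit status, which is all the if needs; awk would work just as well if the check is more involved than a fixed string.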
 

8 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

website

Hello fellow geeks. Please check out my friend's site at http://isunshine.dhs.org, or you can also join the message board at http://isunshine.dhs.org/scripts/ikonboard.cgi wixifer (1 Reply)
Discussion started by: wixifer

2. UNIX for Dummies Questions & Answers

Website

Hey guys, I know you probably get this question a lot, but I want to make a website and I don't have any experience doing this. I have an iMac and I was wondering if there is someone you could refer me to, or a site that will show me how to do it. Thanks. (2 Replies)
Discussion started by: mmecca21

3. Shell Programming and Scripting

Monitor: Read from the monitor

Hello, I would like to write a script that uses the display as its input. In the display there is a list of files. I want to use it as an array, and this would be the input to my script. Does somebody know how I can do that? (2 Replies)
Discussion started by: mig8
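One common reading of that request, sketched under the assumption that "the display" means the output of a command such as ls, captured into a bash array:

    #!/bin/bash
    # Hypothetical sketch: load the file names shown on screen (here, the
    # output of ls) into a bash array, one element per line.
    mapfile -t files < <(ls)        # bash 4+

    echo "First entry: ${files[0]}"
    for f in "${files[@]}"; do
        printf 'processing %s\n' "$f"
    done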

4. Web Development

My website. Please help me.

Hello! Well, I am planning to make my own virtual pet site like that of Neopets. Unfortunately I don't have any idea how to do it. I've tried searching the net, but the results are really complicated. I don't know where to begin. I have already drawn some that I think would help... (2 Replies)
Discussion started by: ackiemae

5. Shell Programming and Scripting

Website crawler

Hi, I want to build a crawler that seeks a keyword on certain websites. This is what the website looks like: website.com/xxxxAA11xxxx. I want the crawler to automatically change the letters alphanumerically, and if a certain keyword is found, the URL should be logged. But... (12 Replies)
Discussion started by: yaylol
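A rough sketch of that idea, assuming the variable part is two letters followed by two digits as in the sample URL; the base URL, keyword, and log file name are all assumptions:

    #!/bin/bash
    # Enumerate website.com/xxxx<AA..ZZ><00..99>xxxx and log every URL whose
    # page contains KEYWORD. BASE, KEYWORD, and matches.log are placeholders.
    BASE="http://website.com/xxxx"
    KEYWORD="secret"

    for l1 in {A..Z}; do
      for l2 in {A..Z}; do
        for d in {00..99}; do                 # zero-padded, bash 4+
          url="${BASE}${l1}${l2}${d}xxxx"
          if wget -q -O - "$url" | grep -q "$KEYWORD"; then
            echo "$url" >> matches.log
          fi
          sleep 1                             # be polite to the remote server
        done
      done
    done

Note that this enumerates 26 x 26 x 100 = 67,600 URLs, so with the one-second delay expect it to run for many hours.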

6. Hardware

Fedora 16 dual monitor - dual head - automatic monitor shutdown

Hi, I am experiencing trouble with dual monitors in Fedora 16. During boot both monitors work, but once the system starts, one monitor automatically shuts down. It happened out of the blue. Some time ago, when I updated the system, this happened too, but then I booted an older kernel release and... (0 Replies)
Discussion started by: wakatana

7. Infrastructure Monitoring

Searching for a SaaS monitoring service to monitor my servers, which sit at different providers

Sorry if this is the wrong forum. I am searching for a SaaS monitoring service for my servers, which sit at different providers. This monitoring tool should take as little CPU as possible and send info about each server to a main dashboard. The info I need is CPU / RAM / my servers' status (... (1 Reply)
Discussion started by: umen

8. UNIX for Advanced & Expert Users

Script to monitor website with default Tomcat script

Hi all, on our application server we have the following script that monitors the status of the website. My problem here is that I have edited the retries from 3 to 5 and the time wait to 120 seconds, so the script should check 5 times, every 2 minutes, and if the fifth check fails it must restart... (0 Replies)
Discussion started by: charli1
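A hedged sketch of the retry logic described there (check every 2 minutes, restart after the fifth consecutive failure); the URL and the restart command are assumptions about the actual setup:

    #!/bin/bash
    # Assumed values -- adjust to the real application server.
    URL="http://localhost:8080/"
    RETRIES=5
    WAIT=120        # seconds between checks

    count=0
    while [ "$count" -lt "$RETRIES" ]; do
        if wget -q -O /dev/null "$URL"; then
            exit 0                  # site answered, nothing to do
        fi
        count=$((count + 1))
        [ "$count" -lt "$RETRIES" ] && sleep "$WAIT"
    done

    # Five consecutive checks failed: restart Tomcat (path is an assumption)
    /etc/init.d/tomcat restart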
httpindex(1)                    General Commands Manual                    httpindex(1)

NAME
       httpindex - HTTP front-end for SWISH++ indexer

SYNOPSIS
       wget [ options ] URL... 2>&1 | httpindex [ options ]

DESCRIPTION
       httpindex is a front-end for index++(1) to index files copied from remote
       servers using wget(1). The files (in a copy of the remote directory
       structure) can be kept, deleted, or replaced with their descriptions
       after indexing.

OPTIONS
   wget Options
       The wget(1) options that are required are: -A, -nv, -r, and -x; the ones
       that are highly recommended are: -l, -nh, -t, and -w. (See the EXAMPLE.)

   httpindex Options
       httpindex accepts the same short options as index++(1) except for -H, -I,
       -l, -r, -S, and -V. The following options are unique to httpindex:

       -d     Replace the text of local copies of retrieved files with their
              descriptions after they have been indexed. This is useful for
              displaying file descriptions in search results without having to
              keep complete copies of the remote files, thus saving filesystem
              space. (See the extract_description() function in WWW(3) for
              details about how descriptions are extracted.)

       -D     Delete the local copies of retrieved files after they have been
              indexed. This prevents your local filesystem from filling up with
              copies of remote files.

EXAMPLE
       To index all HTML and text files on a remote web server, keeping
       descriptions locally:

           wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 |
               httpindex -d -e'html:*.html,text:*.txt'

       Note that you need to redirect wget(1)'s output from standard error to
       standard output in order to pipe it to httpindex.

EXIT STATUS
       Exits with a value of zero only if indexing completed successfully;
       non-zero otherwise.

CAVEATS
       In addition to those for index++(1), httpindex does not correctly handle
       the use of multiple -e, -E, -m, or -M options (because the Perl script
       uses the standard Getopt::Std package for processing command-line
       options, which doesn't). The last of any of those options "wins." The
       work-around is to pass multiple comma-separated values to a single
       instance of the option. For example, instead of:

           httpindex -e'html:*.html' -e'text:*.txt'

       do this:

           httpindex -e'html:*.html,text:*.txt'

SEE ALSO
       index++(1), wget(1), WWW(3)

AUTHOR
       Paul J. Lucas <pauljlucas@mac.com>

SWISH++                            August 2, 2005                          httpindex(1)