Help with a Perl script that can check a URL and notify when changes occur
I'm a scripting newbie and I'm trying to learn. No better way than being assigned a project.
So basically, I'm trying to come up with a script that can periodically check a URL and then notify me when the file changes.
So what I'm thinking is that I need a Perl script that can:
check a predefined URL
wget the page
count the number of lines that appear in the document
store that information to a file
and then do the same 10 minutes later
if the number of lines of the file is greater than the previous check
then send an email (see the sketch below).
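Something like this would cover those steps; a rough sketch, assuming the LWP::Simple module is installed and that cron (or an outer loop) reruns it every 10 minutes. The URL, state file name, and mail command are placeholders, not anything from the original post:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get);

my $url   = 'http://example.com/page.html';   # page to watch (placeholder)
my $state = 'linecount.txt';                  # where the last count is kept

# Fetch the page and count its newline characters.
my $page  = get($url) or die "could not fetch $url\n";
my $lines = () = $page =~ /\n/g;

# Read the count stored by the previous run (0 if this is the first run).
my $previous = 0;
if (open my $in, '<', $state) {
    $previous = (<$in> || 0) + 0;
    close $in;
}

# Store the new count for the next run.
open my $out, '>', $state or die "cannot write $state: $!";
print {$out} "$lines\n";
close $out;

# Notify if the page grew; mailx is one common way to send mail from a script.
if ($lines > $previous) {
    system(qq{echo "$url grew from $previous to $lines lines" | mailx -s "page changed" you\@example.com});
}

Scheduling the rerun every 10 minutes is then cron's job, e.g. a crontab line of */10 * * * * /path/to/check.pl.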
I haven't gotten very far. What I have so far seems to get the file and save it to tempfile.txt, but it's not counting the lines.
From what I've been reading, I think I need to use print in order to write the number of lines to a file... but I'm kind of hung up at this point.
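For the counting itself, one common Perl idiom is to read the saved file to end-of-file and then print $. (the current line number) to a second file; tempfile.txt is the name from the post, linecount.txt is a made-up example:

use strict;
use warnings;

open my $in, '<', 'tempfile.txt' or die "tempfile.txt: $!";
1 while <$in>;        # read to end-of-file; $. now holds the line count
my $count = $.;
close $in;

open my $out, '>', 'linecount.txt' or die "linecount.txt: $!";
print {$out} "$count\n";    # this is where print comes in
close $out;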
Anyone feel like lending a hand?
Thanks
There are already free web sites that do that for you and send an email.
I would pipe wget into cksum and save the cksum, size, and URL in a sorted file. Make a new sorted file on the next pass and compare the two using "comm -13 old new | while read cksum sz url ;do ... done". Seed new URLs with a zero cksum and zero size; the next pass will update them, and when the loop sees the pair of zeros it knows not to notify.
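A sketch of that approach in Perl, driving the same shell tools. It assumes wget, cksum, sort, and comm are on the PATH, that an empty old.list exists before the first run, and that the URL list and file names are placeholders:

#!/usr/bin/perl
use strict;
use warnings;

my @urls = ('http://example.com/a', 'http://example.com/b');

# Build the new sorted list, one "cksum size url" line per page.
open my $out, '|-', 'sort > new.list' or die "sort: $!";
for my $url (@urls) {
    my ($sum, $size) = split ' ', `wget -q -O - '$url' | cksum`;
    print {$out} "$sum $size $url\n";
}
close $out;

# Note which URLs were seeded with "0 0" so the first real pass stays quiet.
my %seeded;
if (open my $old, '<', 'old.list') {
    while (<$old>) {
        my ($sum, $size, $url) = split;
        next unless defined $url;
        $seeded{$url} = 1 if $sum == 0 && $size == 0;
    }
    close $old;
}

# comm -13 prints lines unique to new.list, i.e. new or changed pages.
open my $diff, '-|', 'comm -13 old.list new.list' or die "comm: $!";
while (<$diff>) {
    my (undef, undef, $url) = split;
    next if $seeded{$url};       # seeded entry: update silently
    print "CHANGED: $url\n";     # hook the notification (mail etc.) in here
}
close $diff;
rename 'new.list', 'old.list' or die "rename: $!";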
The site that I'm trying to monitor requires login credentials and therefore I can't use any of the free programs out there. I need to write something that can be run on my box, in my IE window so that my credentials are used.
After speaking with a developer, he said my biggest issue would be getting my script to actually save the webpage from IE (View Source > Save). He says wget won't work because the interaction needs to be from the IE browser and not from the script.
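One hedged aside: if the login is plain HTTP (basic) authentication rather than a browser-only session, a script can often authenticate by itself, either with GNU wget's --user/--password options or with Perl's LWP::UserAgent, and IE drops out of the picture. A sketch with LWP::UserAgent, where the host, realm name, and credentials are made-up placeholders:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# host:port, realm name, user, password -- all placeholders
$ua->credentials('www.example.com:80', 'Members Only', 'someuser', 'somepass');

my $res = $ua->get('http://www.example.com/protected/page.html');
die 'fetch failed: ', $res->status_line, "\n" unless $res->is_success;
print $res->decoded_content;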
I have a script already that opens a particular URL in an IE window...
Apparently what I now need to figure out is how to get the script to save the page as a text file....
I think you can put the return data into a JavaScript container. The flow for a GET is pretty simple. You have to open it the way a SOAP client does. Something like this: get web page text via javascript - Stack Overflow
Hi,
I need to check if the URL exists.
Below is my OS:
SunOS mymac1 Generic_148888-04 sun4v sparc SUNW,SPARC-Enterprise-T5220
I do not have curl set in the profile, nor am I aware of its path.
But I have wget. Please help me with the parameters for the same.
Can you help me check if...
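Since wget is available, its --spider mode (which checks the URL without downloading the page) can do this; a minimal sketch, assuming GNU wget and with the URL a placeholder:

#!/usr/bin/perl
use strict;
use warnings;

my $url = 'http://example.com/page';   # placeholder
my $rc  = system("wget -q --spider '$url'");
print $rc == 0 ? "$url exists\n" : "$url is not reachable\n";
exit($rc == 0 ? 0 : 1);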
Hey guys,
currently I'm struggling with a little script to check the active URL in my running Firefox.
What I'm doing:
I'm running a small VPS with about 768 MB RAM and Ubuntu on it. I only installed Fluxbox + Firefox on it to keep resource consumption as low as possible. I think I...
I have a server that keeps going down (503 Service not available). Until we find the problem, I would like to set up a simple ksh script in cron that will query the URL and report the status code. This way we can get someone to restart the process.
Does anyone know a simple command I can call...
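In the spirit of the Perl thread above, a sketch using LWP::UserAgent (a curl -s -o /dev/null -w '%{http_code}' one-liner is the usual shell alternative); it assumes LWP::UserAgent is installed, and the URL is a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://myserver:8080/app';   # placeholder
my $res = LWP::UserAgent->new(timeout => 30)->get($url);
printf "%s returned %s\n", $url, $res->code;
exit($res->is_success ? 0 : 1);   # non-zero exit lets cron flag the 503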
Hi all,
I wrote a script which starts a WebLogic server and waits until it's loaded to deploy several apps. The way I checked was something like:
rc=1
while [ $rc -ne 0 ]; do    # loop until the console URL answers
    wget --spider <URL>:<port>/console > /dev/null 2>&1
    rc=$?
    sleep 5                # pause between retries
done
This works perfectly because it's an HTML site, and when the server is...
Hi everybody,
I'm currently writing a ksh script which automates the entire startup of a large number of Tibco BusinessWorks domains, as well as all the deployed components running on it.
My script is to be used after an infrastructure release, when the entire environement is down. It... (1 Reply)
I am trying to create a Perl script that will make sure a web page can be accessed through an Apache httpd. The actual content of the web page does not matter. Most likely the web page will just say "You have successfully reached this port." This script will eventually be running...
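A minimal sketch of such a check, assuming LWP::Simple is installed; the host and port are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(head);

my $url = 'http://myhost:8080/';   # placeholder
if (head($url)) {                  # head() is true when the server answers
    print "OK: $url is answering\n";
} else {
    print "FAIL: $url did not respond\n";
    exit 1;
}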