12-07-2010
Hello Jim,
Thanks for the response. The script is working fine for me after a little modification.
#!/bin/bash
h1=$(hostname)
#d1=$(date +%D::::%T)
d1=$(date)

tomcat1logs=/opt/local/tomcat1
tomcat2logs=/opt/local/tomcat2
tomcat3logs=/opt/local/tomcat3
tomcat4logs=/opt/local/tomcat4
tomcat5logs=/opt/local/tomcat5
tomcat6logs=/opt/local/tomcat6

for fname in "$tomcat1logs" "$tomcat2logs" "$tomcat3logs" "$tomcat4logs" "$tomcat5logs" "$tomcat6logs"
do
    tom=$(basename "$fname")
    f=${fname}/logs/catalina.out
    # Pull the last OutOfMemoryError (if any) from the last 10 lines of the log
    var=$(tail -10 "$f" | grep -io 'OutOfMemoryError' | tail -1)
    if [ -n "$var" ]
    then
        echo "$var error on $tom on server $h1 @ $d1" | mailx -s "$var error in $tom" "$sms_list"
    else
        echo "No OutOfMemory on $tom on server $h1 @ $d1" >> /opt/app/mxora/home/mxora/outofmemory.txt
    fi
done
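As an aside, the three-stage tail | grep | sed pipeline can be collapsed into a single case-insensitive grep -o followed by tail -1. A minimal self-contained sketch of just the detection step, assuming GNU or BSD grep (the sample log lines are made up for illustration):

```shell
#!/bin/bash
# Sample fragment standing in for the tail of catalina.out (made up).
sample='INFO  Server startup in 4532 ms
java.lang.OutOfMemoryError: Java heap space
INFO  retrying request'

# -i: match case-insensitively, -o: print only the matched text;
# tail -1 keeps the most recent match, mirroring the sed -n '$p' step.
var=$(printf '%s\n' "$sample" | tail -10 | grep -io 'OutOfMemoryError' | tail -1)
echo "$var"
```

Because grep -o prints the matched text itself, the trailing sed substitution that extracted "OutOfMemoryError" is no longer needed.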