I have some Solaris processes that run for weeks at a time and create rather large log files that I would like to archive/compress daily. Instead of stopping the process, what can be done so that the log file is backed up and shrunk, but the process can still log to the open file handle without major interruption?
Depending on the process, it may be possible to send it a HUP to have it reopen its filehandles.
If that's the case, the steps are relatively simple:
1) move the active file to a different name (process still logging to it)
2) HUP the process (closes current filehandle, opens new one to new file)
3) Then, compress the old file.
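For a process that supports it, the whole rotation can be scripted along these lines (the paths and the pidfile here are illustrative, not from any particular application):

```shell
# 1) Move the active log aside; the process keeps writing to it through
#    its open filehandle, since renaming doesn't change the inode.
mv /var/log/app.log /var/log/app.log.0

# 2) Ask the process to close its log and open a fresh one.
kill -HUP "$(cat /var/run/app.pid)"

# 3) Compress the rotated copy at leisure.
gzip /var/log/app.log.0
```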
Some programs support this - others don't. Some require that the new file be present and have the correct permissions before opening it - others don't, and will create a new file automatically on a HUP.
Quote:
2) HUP the process (closes current filehandle, opens new one to new file)
Correct me if I'm wrong, but the default action of SIGHUP is abrupt termination of the process, unless it is "handled".
So, if the process writing entries to the log file does not handle SIGHUP, this will not work out.
Yes, that's mostly correct. It's not abrupt termination, like a SIGABRT would be, but it will terminate if it's not handled.
If you've got some means of starting the application using different parameters, so you can set up a dummy instance to test with, you can see how it handles things.
The SIGHUP/SIGABRT solution isn't going to work for some of the items. Is there another solution, like perhaps something where I can lock the file to new updates from the active process, copy it, then somehow zero the file out, then remove the lock.
Just to be clear, SIGHUP and SIGABRT are two very different things. I was just trying to say that while the default action of SIGHUP is to exit the process, SIGABRT actually kills it without calling exit, so it doesn't hit any cleanup/shutdown code in the process, which SIGHUP would.
That being said...permit me to delve into details.
For the purposes of this discussion, there are two general ways that a process might be writing to a file.
1) The file is opened, written to, and closed each time a write is needed.
2) The file is opened at some point, and kept open for the life of the process, and writes happen during that period.
If your process does #1, then you should be able to move the file to a different name, and you won't lose any data. A write that's currently taking place when you move the file won't care (since it's still got an open filehandle), and the next open/write/close will (hopefully) create a new file at the correct name.
If you have a process doing #2, then if you move the file, the filehandle will still point to it, so you will have to either HUP (if the process handles it) or terminate and restart the process to get the process to close the current filehandle, and open a new one to the right name.
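The #2 behavior is easy to see from a shell: a rename doesn't affect an already-open filehandle, which still points at the same inode. A quick sketch (filenames illustrative), mimicking a long-lived process:

```shell
exec 3>> /tmp/demo.log         # open a long-lived filehandle, like a daemon would
echo "before rename" >&3
mv /tmp/demo.log /tmp/demo.log.old
echo "after rename" >&3        # lands in demo.log.old, not in a new demo.log
exec 3>&-                      # close the filehandle
```

Both lines end up in the renamed file, and no new /tmp/demo.log appears until something reopens that name, which is exactly what a HUP-triggered reopen does.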
So, if it's #2, you have a few options, besides restarting the process.
You could, instead of logging to a file, log to a named pipe (a "fifo") and then have a different program read from the pipe and write to a file, which could then have the needed signal handling to deal with closing and opening a new filehandle on the HUP.
You could write to a socket, and do much the same thing.
You could also create a device and log to that...this is pretty much how /dev/log works...on Linux at least.
An example of how this works, for you to play with:
(from another window)
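Something along these lines (the pipe path and filenames are illustrative):

```shell
# In one window: create the named pipe and run a stand-in "logger"
# that reads from it and appends to a real file
mkfifo /tmp/logpipe
cat /tmp/logpipe >> /tmp/app.log &

# In the other window: the "application" just writes to the pipe
# as if it were an ordinary log file
echo "pretend this is a log entry" > /tmp/logpipe
```

In a real setup, the reader would be a small program that catches HUP, closes /tmp/app.log, and reopens it.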
I'm not actually sure if mkfifo is the right command on Solaris. Also, the syntax above is for bash, FYI.
Honestly, this is somewhat kludgy, and takes some significant systems programming chops to do robustly. Which is why well written unix programs that do logging to files should, IMHO, support HUP as a means to re-open log files. It's really the most straightforward method.
---------- Post updated at 12:20 PM ---------- Previous update was at 12:08 PM ----------
Quote:
Originally Posted by ckmehta
The SIGHUP/SIGABRT solution isn't going to work for some of the items. Is there another solution, like perhaps something where I can lock the file to new updates from the active process, copy it, then somehow zero the file out, then remove the lock.
Oh, and as far as what you propose goes...
File locking is not all that easy to manage.
But, you could perhaps
1) send a SIGSTOP to the process (i.e. ^z, suspend it)
2) copy the data from the file to a new one.
3) cat /dev/null > file (zeroing it out)
4) send a SIGCONT to the process to have it resume
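Those steps, as a sketch (the PID lookup and paths are illustrative):

```shell
pid=$(pgrep -f my_app)             # however you find the process's PID
kill -STOP "$pid"                  # suspend the process
cp /var/log/app.log /var/log/app.log.0
cat /dev/null > /var/log/app.log   # truncate in place; same inode stays open
kill -CONT "$pid"                  # resume the process
gzip /var/log/app.log.0
```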
Some programs don't tolerate being STOPped very well. Some terminate when that happens. I have no idea what will happen with yours.
If the process isn't designed to cope with having its log files rotated out from under it, even the SIGSTOP/process file/SIGCONT approach may not work. For example, if the file isn't opened in append mode, you'll at best wind up with a sparse file, with nulls where the old data used to be, as the process picks up writing at the same offset in the file it would have written to anyway.
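That failure mode is easy to reproduce from a shell. A filehandle opened without append mode keeps its own file offset, so truncating the file underneath it leaves a NUL-filled hole (paths illustrative):

```shell
exec 3> /tmp/demo.log          # plain write mode, not append
echo "first entry" >&3         # offset is now 12
: > /tmp/demo.log              # "rotate" by truncating to zero length
echo "second entry" >&3        # written at offset 12, leaving a hole of NULs
exec 3>&-
od -c /tmp/demo.log            # shows \0 bytes where the old data was
```

With an append-mode filehandle (`exec 3>>`) every write goes to the current end of the file, so truncating underneath it works cleanly; that's why well-behaved loggers open their logs with O_APPEND.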