Syslog Messages from Remote Server Are No Longer Being Written to Log File
Hello All,
Server: SUSE Linux Enterprise Server 11.3 (x86_64)
Syslog-ng Version: syslog-ng 2.0.9
We have configured a Cisco router to send its log messages to the server listed above. This had been working perfectly
for the last couple of months, but we had never set up log rotation and the log file was getting extremely large.
So I decided to manually move the current log file to a new name and then ran touch on the original filename to re-create it.
I did this so I could compress the moved-aside log file with bzip2, seeing how large it had gotten... Now the log messages
from the router are no longer being written to the log file. I ran Wireshark and I can see the router sending the messages
to the server, yet nothing is being written to the log file...
Did I mess this up somehow by moving and re-creating the original log file?
If anyone knows how I can fix this, please chime in... Any thoughts or suggestions would be greatly appreciated!
No, I haven't tried that... But I think you're right... So would I run something like:
Also, what does the "-HUP" part of the kill command do?
Thanks again for the replies, much appreciated!
Thanks,
Matt
---------- Post updated at 04:51 PM ---------- Previous update was at 04:46 PM ----------
Ok, so I ran the following and now the log file is being written to again.
kill (a somewhat misleading name!) sends signals to processes; use kill -l to list all of them. Processes react to signals, e.g. by "committing suicide when tapped on the shoulder and asked to do so" by the TERM signal. HUP ("hangup") is one of them. syslog uses it to reread its configuration and, if need be, start a new logfile. So no restart is necessary for a new file...
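To see a trapped signal in action without touching syslog itself, here is a tiny stand-in "daemon" you can run anywhere. This is only a sketch: the /tmp filename and the one-line trap action are made up for illustration, and a real daemon like syslog-ng does much more on HUP than append a line.

```shell
#!/bin/sh
# A toy background process that reacts to HUP the way a daemon
# might: instead of dying, it runs its trap handler and keeps going.
LOG=/tmp/hupdemo.$$.log            # hypothetical scratch file

sh -c '
  trap "echo got-HUP >> '"$LOG"'" HUP   # on HUP: note it and carry on
  echo started > '"$LOG"'
  while :; do sleep 1; done
' &
PID=$!

sleep 1
kill -HUP "$PID"    # ask it to "start over" -- it survives this
sleep 2
kill -TERM "$PID"   # TERM (the default for kill) actually stops it

cat "$LOG"          # shows both the startup line and the HUP reaction
```

The same pattern is why `kill -HUP <syslog pid>` makes syslog reopen its logfile: the daemon installed a HUP handler that rereads its config instead of accepting the default behavior (which for HUP would be terminating).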
When a process (your syslog, for example) writes to a file, it has to open it first. To "open it" means issuing the open() system call. The OS gives back a "file descriptor" by which the process can now access the file (until it closes it, which means issuing another system call).
This file descriptor identifies the file not by its name but by a more "personal" identification: the inode number. When you delete (or move away) the file and create a new one with the same name in its place, exactly that has happened: a new file with the same name sits in the place of the old file, but the new file and the old file are still distinct files, and they have different inode numbers.
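You can watch the inode numbers change with stat (or ls -i). A minimal sketch of exactly the mv-then-touch situation from the original post, using a throwaway file in /tmp (the filename is made up, and `stat -c %i` is the GNU coreutils form):

```shell
f=/tmp/inode-demo.$$
touch "$f"
old=$(stat -c %i "$f")     # inode number of the original file

mv "$f" "$f.old"           # the inode travels with the data...
touch "$f"                 # ...so the re-created name is a NEW file
new=$(stat -c %i "$f")

echo "old inode: $old, new inode: $new"   # two different numbers
rm -f "$f" "$f.old"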
Think of it like this: some "John Smith" lives in an apartment. When he moves out and another guy, incidentally also named "John Smith", moves in, they are still not the same person, yes?
Therefore, until told otherwise, your process still writes into the old file, even if it is no longer visible because you deleted it. It even takes up space on your hard disk for as long as the process holds it open. Only when the last process holding it open closes it (more than one process can have a file open simultaneously) is it finally "unlinked": the space it occupies is relinquished and its data destroyed.
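On Linux you can see such a "deleted but still open" file through /proc. A hedged sketch (the fd number 3 and the /tmp name are arbitrary choices; the /proc trick is Linux-specific):

```shell
f=/tmp/held-open.$$
exec 3> "$f"                    # open fd 3 on a scratch file
rm "$f"                         # unlink the name -- the file lives on
echo "still being written" >&3  # writes to the open fd still succeed

ls -l /proc/$$/fd/3             # on Linux this link shows "... (deleted)"
data=$(cat /proc/$$/fd/3)       # and the data is still readable via /proc
echo "$data"

exec 3>&-                       # close the last descriptor: NOW the space is freed
```

This is also why `lsof | grep deleted` is a handy way to find "invisible" files that are still eating disk space.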
By sending a signal to the process you tell it to "start over": re-read its configuration files, open the necessary files anew, etc. This is similar to stopping and restarting it, but without the actual program stop and start.
Ok, gotcha, makes sense... Thanks for the detailed explanation, much appreciated!
Yeah, I was slightly familiar with signals, but just a little... The only other thing I had used kill for (other than killing a process) was to
send the USR1 (SIGUSR1) signal to check the status of dd. But that's good to know. I assume you could use that for most
daemons that are running?