Zero link count files in proc


 
# 8  
Old 08-27-2014
I actually faced this issue with a process belonging to an RMI service, and I am trying to learn what triggered this scenario. Our server basically runs very large applications and requires very little manual intervention, so I suspect that someone intervened manually, either killing the process abruptly or deleting the files.

Could application issues also lead to this scenario? Also, please share what precautions are required to prevent such issues.

Thank you.
# 9  
Old 08-27-2014
The only way to get a file with a link count of zero is for one or more running processes to have that file open. Once every process that had the file open has closed it or exited, the blocks allocated to that file will be freed.

To get to the state that you're seeing, someone or something had to unlink the log file while the process that was writing to it was still running. The most common way that happens is for someone to notice that a file is big and use rm logfile to remove it without killing the process that was writing to it. As has already been said in this thread, removing the last link to a file does not deallocate any blocks that have been written to that file. All allocated blocks remain until the last open file descriptor to that file has been closed.
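
You can see this for yourself with a minimal C sketch; the file name "demo.log" is just an illustration:

Code:
/* Sketch: unlink() does not free the blocks of a file that is
 * still open. "demo.log" is an illustrative name. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat sb;
    static char buf[8192];
    int fd = open("demo.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    memset(buf, 'x', sizeof buf);
    (void)write(fd, buf, sizeof buf);   /* allocate some blocks */

    unlink("demo.log");                 /* remove the last link */

    fstat(fd, &sb);
    /* st_nlink is now 0, but the blocks stay allocated and the file
     * can still be written through fd until it is closed. */
    printf("links=%ld size=%lld blocks=%lld\n", (long)sb.st_nlink,
           (long long)sb.st_size, (long long)sb.st_blocks);

    close(fd);                          /* only now is the space freed */
    return 0;
}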

If someone had killed the process, you might find files left around, but they would not have a link count of zero!
# 10  
Old 08-27-2014
Quote:
Originally Posted by Corona688
Even proper logging will misbehave this way if you blithely delete the file while it's in use. It's a very common beginner administration mistake. Just truncate the file instead of deleting it.
Depending on the flags used when the file was opened, where in the file the writing process writes, and the specifics of the underlying file system, even truncating an open file isn't guaranteed to work.

And since redirected stdout and stderr are frequently set up with the plain ">" redirection operator, the log file gets opened without the O_APPEND flag, with all writes going to the running process's current file descriptor offset. Truncating such a file doesn't reset that offset, so the next write lands right back where the previous one left off. At best, if the file system supports sparse files, you will release the space by turning the log file into a sparse file, while ls keeps reporting the old size. (With ">>" the file is opened with O_APPEND and every write goes to the current end of file, which behaves better under truncation, but you can't count on how a long-running application was started.)
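
Here is a minimal C sketch of that failure mode; the file name "app.log" is just an illustration:

Code:
/* Sketch: truncating an open log file (no O_APPEND) behind the
 * writer's back turns it into a sparse file at best. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat sb;
    static char buf[1024 * 1024];
    int fd = open("app.log", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* no O_APPEND */
    if (fd < 0) { perror("open"); return 1; }

    memset(buf, 'x', sizeof buf);
    (void)write(fd, buf, sizeof buf);   /* fd offset is now 1 MiB */

    truncate("app.log", 0);             /* someone "empties" the log */

    (void)write(fd, "hello\n", 6);      /* lands at offset 1 MiB, not 0 */

    fstat(fd, &sb);
    /* st_size is back above 1 MiB; st_blocks shows that only the tail
     * is actually allocated -- the file has become sparse. */
    printf("size=%lld blocks=%lld\n",
           (long long)sb.st_size, (long long)sb.st_blocks);
    close(fd);
    return 0;
}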

Redirecting stdout and/or stderr for log files is a bad idea. The running process is tied to the log file(s) and the log file(s) are tied to the running process. You can't "age" the log files. You can't delete them. You can't even safely truncate them. Not only that, using redirected stdout/stderr also causes all kinds of problems with log entry atomicity for multithreaded applications.

If logging is important, use a proper logging system designed for logging. Operating systems come with those.
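
For instance, syslog(3) exists on every POSIX system and hands the file handling off to a daemon. A minimal sketch, with an illustrative ident string:

Code:
/* Sketch: logging through syslog(3) instead of a redirected stderr.
 * The ident string "myapp" is illustrative. */
#include <syslog.h>

int main(void)
{
    openlog("myapp", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "service started");
    syslog(LOG_WARNING, "disk usage at %d%%", 92);
    closelog();
    return 0;
}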
# 11  
Old 08-27-2014
Quote:
Originally Posted by dhanu1985
Can we also think that the application issues may also lead to this scenario.
Likely. As I already wrote, you should start by looking at what your process is writing to stderr. Either there is a huge quantity of error or warning messages that needs to be seriously investigated instead of being ignored, or there are harmless informational messages sent there, in which case you should check whether a configuration setting would allow less verbose output.
Quote:
Also please share what precautions are required for preventing such issues.
As already suggested by achenle, put in place a serious logging process, with log rotation, archival and removal.

Note that truncating the log file (or its /proc zero-link incarnation) will free disk space in your case, as the underlying file system almost certainly supports sparse files.
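
If you want to confirm that the truncation really released the space, compare the file's apparent size with its allocated blocks. A small sketch with an illustrative path (for a zero-link file you would fstat() an open descriptor instead, since there is no path left to stat):

Code:
/* Sketch: spot a sparse file by comparing apparent size with
 * allocated space. The path is illustrative. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;
    if (stat("app.log", &sb) != 0) { perror("stat"); return 1; }
    /* st_blocks counts 512-byte units on POSIX systems */
    printf("apparent: %lld bytes, allocated: %lld bytes\n",
           (long long)sb.st_size, (long long)sb.st_blocks * 512);
    return 0;
}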
# 12  
Old 08-27-2014
If it was worth logging, it may well be worth keeping and not losing, which can happen with truncate: even if you make a copy first, anything logged in the window between the copy and the truncate is lost.

Really nice log files have the date in their names and are rotated at midnight, or the like. Some are also rotated whenever the log gets big. Then you can compress, archive or purge without stopping the application. Sometimes a helper app sends the process a signal, or the process sets an alarm for itself, so you do not have to check the time with every log write. Then there is syslog(), where logging to a flat file is handed off to a system daemon. There are a ton of posts on log rotation schemes and tools.
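
Here is a rough C sketch of the alarm-driven variant; every name in it (the app.%Y-%m-%d.log pattern, log_write(), and so on) is made up for illustration:

Code:
/* Sketch: a log writer that keeps the date in the file name and
 * rolls over at midnight. SIGALRM sets a flag so the date is not
 * checked on every write. All names are illustrative. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t rollover = 1;  /* force open on first write */
static FILE *logfp;

static void on_alarm(int sig) { (void)sig; rollover = 1; }

/* seconds from now until the next local midnight */
static unsigned secs_to_midnight(void)
{
    time_t now = time(NULL);
    struct tm tm = *localtime(&now);
    tm.tm_hour = 23; tm.tm_min = 59; tm.tm_sec = 60;  /* normalizes to 00:00:00 */
    return (unsigned)(mktime(&tm) - now);
}

static void log_write(const char *msg)
{
    if (rollover) {                     /* (re)open a date-stamped file */
        char name[64];
        time_t now = time(NULL);
        struct tm tm = *localtime(&now);
        if (logfp) fclose(logfp);
        strftime(name, sizeof name, "app.%Y-%m-%d.log", &tm);
        logfp = fopen(name, "a");
        rollover = 0;
        alarm(secs_to_midnight());      /* re-arm for the next rollover */
    }
    if (logfp) {
        fprintf(logfp, "%s\n", msg);
        fflush(logfp);
    }
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    log_write("started");
    log_write("still running");
    return 0;
}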