Not able to delete log file


 
# 1  
Old 10-17-2013
Not able to delete log file

On my Solaris 10 box, there are two log files which are being written by some application. They are huge files of 3 GB and 5 GB. I tried to nullify them, but they show zero size and then, after a few seconds, they are back to their original size. "ls -ltr" shows me a big size, but "du -sh file" shows them in KB. We do not want to stop the application to clear these files. Is there any way to clear them without stopping the application? The outputs below should explain my problem more clearly.
Code:
root@prd_db07:/oimDomain/logs# ls -l soa.out admin.out
-rw-r--r--   1 iamswl1  dggess     330644050 Oct 17 16:53 admin.out
-rw-r--r--   1 iamswl1  dggess     5163031517 Oct 17 16:53 soa.out
root@prd_db07:/oimDomain/logs# du -sh soa.out admin.out
 134K   soa.out
 260K   admin.out
root@prd_db07:/oimDomain/logs# fuser soa.out
soa.out:    21268o   21243o   21242o
root@prd_db07:/oimDomain/logs# fuser admin.out
admin.out:    19903o   19878o
root@prd_db07:/oimDomain/logs#
root@prd_db07:/oimDomain/logs# ptree 21268
3118  zsched
  21242 /bin/sh /dggess/envs/stage/domains/oimDomain/bin/startManagedWebLogic.sh soa_serv
    21243 /bin/sh /dggess/envs/stage/domains/oimDomain/bin/startWebLogic.sh nodebug noderby
      21268 /dggess/apps/jdk/jdk1.6.0_29/bin/sparcv9/java -server -Xmx4096m -Xms4096m -XX:Per
root@prd_db07:/dggess/apps/logs/oimDomain#
root@prd_db07:/dggess/apps/logs/oimDomain# ptree 19903
3118  zsched
  19878 /bin/sh /dggess/envs/stage/domains/oimDomain/bin/startWebLogic.sh
    19903 /dggess/apps/jdk/jdk1.6.0_29/bin/sparcv9/java -server -Xms2048m -Xmx2048m -XX:Max
root@prd_db07:/oimDomain/logs# >soa.out
root@prd_db07:/oimDomain/logs# >admin.out
root@prd_db07:/oimDomain/logs# ls -l soa.out admin.out
-rw-r--r--   1 iamswl1  dggess           0 Oct 17 16:54 admin.out
-rw-r--r--   1 iamswl1  dggess           0 Oct 17 16:54 soa.out
root@prd_db07:/oimDomain/logs#
root@prd_db07:/oimDomain/logs# ls -l soa.out admin.out   ---> AFTER A WAIT OF 3-4 SECONDS, IT IS AGAIN BACK TO SAME SIZE
-rw-r--r--   1 iamswl1  dggess     330651280 Oct 17 16:54 admin.out
-rw-r--r--   1 iamswl1  dggess     5163034962 Oct 17 16:54 soa.out
root@prd_db07:/oimDomain/logs#

# 2  
Old 10-18-2013
The commands:
Code:
>soa.out
>admin.out

deallocate all blocks allocated to those files at that time, but they do not close the file descriptors and do not reset the file offset that determines the position in the file where the next data written by the processes writing to those files will be placed.

The next time a process writes something to one of those files, it will write it at the spot in the file just after the last place it wrote into that file. That will not allocate any disk blocks for the bytes you previously deallocated, so what you end up with is known as a holey (sparse) file: a file containing unallocated blocks that have never been written. If you try to read data from those blocks (for example by running cat soa.out), the unallocated blocks will read back as though null bytes had been written there.
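For illustration only, here is a minimal sketch of how such a holey (sparse) file behaves. The file name sparse.dat is made up, and it assumes a dd that supports the seek= operand (as the Solaris and GNU versions do):
Code:
# Create a file with a ~1 GB hole: seek 1 GB into the file and write one byte.
dd if=/dev/zero of=sparse.dat bs=1 count=1 seek=1073741824 2>/dev/null

# Apparent size (what ls reports) is about 1 GB ...
ls -l sparse.dat

# ... but only a few KB of disk blocks are actually allocated.
du -k sparse.dat

# Reading inside the hole returns NUL bytes, as if zeros had been written there.
dd if=sparse.dat bs=1024 count=1 2>/dev/null | od -c | head -2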

If you change the program(s) that are writing those log files to add the O_APPEND flag to the oflag argument of the open() call that opens the log files, the write position will be reset to the current end of file on every write to the log file. So, if you clear the log file with >logfile, the next write to the log file will land at the start of the file instead of leaving a huge hole at the start of the file.
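A minimal sketch of that difference, assuming a shell (such as ksh or bash) whose >> redirection really does set O_APPEND; the log file names and the one-second date loop are made up for illustration:
Code:
# Writer 1: plain > redirection, no O_APPEND, the process keeps its own file offset.
( while :; do date; sleep 1; done ) >  plain.log &
# Writer 2: >> redirection, O_APPEND, every write goes to the current end of file.
( while :; do date; sleep 1; done ) >> append.log &

sleep 5
: > plain.log            # truncate both files while the writers keep running
: > append.log
sleep 5

ls -l plain.log append.log   # plain.log jumps back to a large apparent size (a hole at the start),
du -k plain.log append.log   # append.log stays small; du stays small for both
kill %1 %2                   # stop the background writers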
# 3  
Old 10-18-2013
Thanks Don for explaining it so well. I will have a long way to go to get this change made in the program that is writing these files.
Is there something I can do from the OS side now?
Is there a better approach from the OS side that I should adopt going forward? These files will fill up again, and then the application guys will come back to the system admin.
# 4  
Old 10-18-2013
You have cleared the files' content. The size reported by ls is irrelevant except to processes reading the files. As far as disk usage is concerned, the space has been recovered, and the disk will only fill up again with newly written data.
# 5  
Old 10-18-2013
jlliagre, when I ran "cat soa.out", it took a very long time to produce output, so I pressed Ctrl+C and came out. This made me think that the data is still that big, even after nullifying the file. Am I wrong here?
In the future, what will be the best way to nullify it again without taking application downtime?
# 6  
Old 10-18-2013
Quote:
Originally Posted by solaris_1977
jlliagre, when I ran "cat soa.out", it took a very long time to produce output, so I pressed Ctrl+C and came out. This made me think that the data is still that big, even after nullifying the file. Am I wrong here?
In the future, what will be the best way to nullify it again without taking application downtime?
If you run cat filename on a file with a huge apparent size, cat will read every byte of it, holes included. If you look at du filename, the number of disk blocks used to store the contents of the file may be small (as I explained before).

You haven't told us anything about what this application does or why it is writing gigabytes into a file that the people using the application don't want to see. If you have a process that your users say you have to run continuously and it writes gigabytes of logs that no one wants to see, you can choose one of several options (including, but not limited to):
  1. Get the people who wrote the application to change it.
  2. Restart it regularly (and rotate or delete the log files while it is stopped).
  3. Buy bigger disks to hold all of the data that nobody wants.
  4. Patch the kernel to set the O_APPEND flag in the kernel table entry for the file descriptor involved and truncate the file the way you're doing it now.
  5. Refuse to run an application that fills up your filesystems with huge amounts of unwanted data until your users provide a version of the application that works correctly.
  6. Try changing the log file to be a symlink pointing to /dev/null the next time you reboot your system, before you start the application. (If the application removes or rotates log files when it restarts, this won't work.) A minimal sketch of this option follows the list.
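A minimal sketch of option 6, assuming the change is made while the application is down; the path and file name are taken from the outputs above, everything else is illustrative:
Code:
# With the application stopped, or before it is first started after a reboot:
cd /oimDomain/logs
mv soa.out soa.out.old       # keep the old data if anyone still wants it
ln -s /dev/null soa.out      # future writes to soa.out are discarded by the kernel
# Restart the application afterwards. If it removes or rotates soa.out when it
# starts, the symlink will be replaced and this approach will not work.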
# 7  
Old 10-18-2013
Also, many start/stop scripts use #!/bin/sh, which on Solaris 10 is the classic Bourne shell. If those log files are created by redirecting stdout with the >> operator, then normally such a file can be truncated safely, but not when it was opened by the classic Bourne shell. Try running the startup script with #!/usr/xpg4/bin/sh instead...
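A quick, hedged way to check which behaviour a given shell gives you before touching any startup script; the /tmp file names and the one-second date loop are made up, and this simply repeats the truncate test from the first post under each shell:
Code:
# Start the same slow writer under each shell, appending with >>.
/bin/sh          -c 'while :; do date; sleep 1; done >> /tmp/bourne.log' &
/usr/xpg4/bin/sh -c 'while :; do date; sleep 1; done >> /tmp/xpg4.log'   &

sleep 5
: > /tmp/bourne.log; : > /tmp/xpg4.log    # truncate both while the writers run
sleep 5

ls -l /tmp/bourne.log /tmp/xpg4.log
# If the classic Bourne shell opened >> without O_APPEND, bourne.log jumps back
# to its pre-truncation apparent size, while xpg4.log stays small.
kill %1 %2                                # stop the background writers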