growing files


 
Top Forums Shell Programming and Scripting growing files
# 1  
Old 01-05-2011
growing files

I am trying to be proactive and prevent a filesystem from filling up. I know about
the df/du commands, and also find -size / -mtime.

What I want to know is whether there is a way I can use find to see which files have been accessed or modified after a specific YYYYMMDD-HHMMSS. What I am really looking for is to see which files are constantly growing...


If anybody has a script, a command, or some other idea on how to find
this information out, it would be greatly appreciated.

Thanks

---------- Post updated at 12:49 PM ---------- Previous update was at 11:16 AM ----------

I got what I wanted.

Code:
touch -t 201101051245.15 /tmp/xxxx
find . -newer /tmp/xxxx -exec ls -lt {} ";"
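With GNU find the reference file can be skipped entirely, since -newermt takes the timestamp directly (GNU-only, so treat that flag as an assumption to verify on your platform):

```shell
# GNU find: list files modified after a given timestamp directly,
# no reference file needed
find . -type f -newermt '2011-01-05 12:45:15' -exec ls -l {} +

# Portable fallback: the reference-file approach from above
touch -t 201101051245.15 /tmp/ref.$$
find . -type f -newer /tmp/ref.$$ -exec ls -l {} +
rm -f /tmp/ref.$$
```

The `{} +` form batches many files into each ls invocation, which also sidesteps the per-file fork/exec cost raised in the next reply.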
# 2  
Old 01-05-2011
-exec is a bit slow (a fork/exec per hit), so try:
Code:
find . . . | xargs -n999 ls -l

Your -t was a misfiring neuron: -exec hands ls one file at a time, and a one-file list is always sorted.
Code:
ls -ltd $( find . -mtime -1 )

is nice for one day if you do not swamp ls's file count limit, else see above.

Space can disappear into files being actively appended, so fuser on large, growing files will show the process that is writing.

Space can disappear into huge directories, which have to be recreated to get the space back.

Space can disappear into many new little files, which can be helped a lot with zip archiving!
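To catch which files are actually growing, one option is to take two size snapshots a few minutes apart and diff them. A sketch (the mount point, snapshot paths, and interval are illustrative, not from the post; file names containing spaces will confuse it):

```shell
# Snapshot "name size" for every regular file on one filesystem
scan() { find "$1" -xdev -type f -exec ls -l {} + 2>/dev/null |
         awk '{ print $NF, $5 }' | sort ; }

scan /var > /tmp/snap1
sleep 300                      # interval is arbitrary
scan /var > /tmp/snap2

# join on file name; print growth and name where the size increased,
# biggest growth first
join /tmp/snap1 /tmp/snap2 | awk '$3 > $2 { print $3 - $2, $1 }' | sort -rn | head
```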

---------- Post updated at 10:00 PM ---------- Previous update was at 09:46 PM ----------

Time sorting can be made more versatile and robust if you can put the mtime with the file name and sort -n, so I wrote a stat() wrapper, mystat.c, that reads file names from stdin and writes mtime\tfile_name\n to stdout.
Code:
#define _FILE_OFFSET_BITS 64   /* large-file support via plain stat() */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Print "mtime<TAB>file_name" for one path, or report the stat() error. */
static void psf( char *buf )
{
    struct stat s ;

    if ( 0 > stat( buf, &s ) )
    {
        perror( buf );
        return ;
    }

    if ( 0 > printf( "%ld\t%s\n", (long) s.st_mtime, buf ) )
    {
        if ( !ferror( stdout ) )
            exit( 0 );
        perror( "stdout" );
        exit( 1 );
    }
}

int main( int argc, char **argv )
{
    char buf[PATH_MAX + 2];
    char *cp ;
    int i ;
    int clf = 0 ;

    /* File names on the command line take precedence over stdin. */
    for ( i = 1 ; i < argc ; i++ )
    {
        clf++ ;
        psf( argv[i] );
    }

    if ( !clf ) while ( fgets( buf, sizeof buf, stdin ) )
    {
        if ( !( cp = strchr( buf, '\n' ) ) )
        {
            fprintf( stderr, "File path too long: '%s'\n", buf );
            continue ;
        }

        *cp = '\0' ;            /* strip the trailing newline */
        psf( buf );
    }

    if ( !ferror( stdin ) )
        exit( 0 );

    perror( "stdin" );
    exit( 1 );
}

# 3  
Old 01-05-2011
lsof can also be helpful in spotting processes with open files, for example:
Code:
lsof /var

will list all open files in the /var filesystem (pay particular attention to the "SIZE/OFF" column, as large numbers here indicate appending to or reading large files). This command may even spot files that find cannot see (e.g. files that have been unlinked but that a process still holds an open file descriptor for).
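For the unlinked-but-still-open case specifically, most lsof builds accept +L1 (show only files with a link count below 1, i.e. deleted but still held open); the flag is widely available, but verify it on your platform:

```shell
# Deleted-but-open files still consuming space in the /var filesystem;
# killing or restarting the owning process releases the blocks
lsof +L1 /var
```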
# 4  
Old 01-06-2011
Quote:
Originally Posted by DGPickett
Time sorting can be made more versatile and robust if you can put the mtime with the file name and sort -n, so I wrote a stat() wrapper, mystat.c, that reads file names from stdin and writes mtime\tfile_name\n to stdout. [...]
Nice solution, but my problem is I don't know which file is growing. I was
just doing a df and saw I was running out of space. Your program expects
a filename. Thanks for responding.
# 5  
Old 01-07-2011
What's big, recent, open and by what process:
Code:
find /mount_point -size +100 -mtime -1 | xargs -n999 fuser 2>&1 | grep ':.*[1-9]'

You have to go to lsof, truss, tusc, or strace to see whether a file is open for writing and actively being written. You can attach to a running process with truss, tusc, or strace -p pid if you are root or the process owner.
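On Linux, strace can confirm the writing directly; -p and -e trace=write are standard strace options (truss on Solaris and tusc on HP-UX have their own spellings). The pid and command below are placeholders:

```shell
# Attach to a running process and show only its write(2) calls;
# needs root or ownership of the process. 1234 is a placeholder pid.
strace -p 1234 -e trace=write

# Or trace a command from the start, logging to a file
strace -e trace=write -o /tmp/writes.log some_command
```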