Operating Systems > Linux > Red Hat — Empty directory, large size and performance
Post 302591666 by bdx, Friday 20 January 2012, 09:56 AM
Empty directory, large size and performance

Hi,

I have a directory that I use as a working directory for a program. At the end of the procedure, its contents are deleted. When I do an ls -l, this directory still appears to take up some space. After a little research, I found in another thread on this forum that it is not really taking up space: the directory is still reporting the inodes it was using, and those inodes are marked as reusable, so I don't lose any space.

My question is, does this affect the performance of the filesystem at all?
I know the easy solution is to erase the directory and recreate it.
But for my personal enlightenment I would really like to know.
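The behaviour is easy to reproduce. A minimal sketch, assuming Linux with an ext2/3/4 filesystem (`stat -c %s` is GNU-specific, and the directory location is arbitrary):

```shell
#!/bin/sh
# Show that a directory file grows as entries are added and, on ext2/3/4,
# does not shrink again after those entries are deleted.
dir=$(mktemp -d)                    # scratch directory, name is arbitrary
initial=$(stat -c %s "$dir")        # size of the empty directory file

i=1
while [ "$i" -le 2000 ]; do
    : > "$dir/f$i"                  # each new entry can grow the directory
    i=$((i + 1))
done
populated=$(stat -c %s "$dir")

find "$dir" -type f -delete         # directory is now empty again
after=$(stat -c %s "$dir")

echo "initial=$initial populated=$populated after=$after"
```

On ext* filesystems the `after` size typically stays at the `populated` size: the kernel does not shrink a directory file once it has grown, and the slack is reused for future entries, so no space is actually lost.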

Thanks in advance
Benoit Desmarais
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Unix File System performance with large directories

Hi, how does the Unix File System perform with large directories (containing ~30.000 files)? What kind of structure is used for the organization of a directory's content, linear lists, (binary) trees? I hope the description 'Unix File System' is exact enough, I don't know more about the file... (3 Replies)
Discussion started by: dive

2. UNIX for Dummies Questions & Answers

Empty directories having different size

$ls -lrt mydir total 12 drwxrwxrwx 2 nobody nobody 512 Aug 8 11:51 tmp drwxrwxrwx 2 nobody nobody 4608 Jan 19 12:20 web.cache $ ls -lrt mydir/web.cache/ total 0 $ ls -lrt mydir/tmp/ total 0 Can anyone explain me the above results? I know the o/p of ls, but this... (3 Replies)
Discussion started by: rahulrathod

3. UNIX for Dummies Questions & Answers

Empty Directory, big size

Hello, Can somebody please explain why I have EMPTY directories on HP-UX with BIG SIZE ? Thank you ! Double post, continued here (0 Replies)
Discussion started by: drbiloukos

4. HP-UX

Empty Directory, big size

Hello, Can you please explain why I have various empty directories with large size ? OS is B.11.11 (3 Replies)
Discussion started by: drbiloukos

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on large volume of data

Background ------------- The Unix flavor can be any amongst Solaris, AIX, HP-UX and Linux. I have below 2 flat files. File-1 ------ Contains 50,000 rows with 2 fields in each row, separated by pipe. Row structure is like Object_Id|Object_Name, as following: 111|XXX 222|YYY 333|ZZZ ... (6 Replies)
Discussion started by: Souvik

6. Shell Programming and Scripting

How to delete some of the files in the directory, if the directory size limits the specified size

To find the whole size of a particular directory i use "du -sk /dirname".. but after finding the direcory's size how do i make conditions like if the size of the dir is more than 1 GB i hav to delete some of the files inside the dir (0 Replies)
Discussion started by: shaal89

7. Shell Programming and Scripting

Performance issue in Grepping large files

I have around 300 files(*.rdf,*.fmb,*.pll,*.ctl,*.sh,*.sql,*.prog) which are of large size. Around 8000 keywords(which will be in the file $keywordfile) needed to be searched inside those files. If a keyword is found in a file..I have to insert the filename,extension,catagoery,keyword,occurrence... (8 Replies)
Discussion started by: millan

8. UNIX for Beginners Questions & Answers

Command to extract empty field in a large UNIX file?

Hi All, I have records in unix file like below. In this file, we have empty fields from 4th Column to 22nd Column. I have some 200000 records in a file. I want to extract records only which have empty fields from 4th field to 22nd filed. This file is comma separated file. what is the unix... (2 Replies)
Discussion started by: rakeshp

9. Shell Programming and Scripting

Bash script search, improve performance with large files

Hello, For several of our scripts we are using awk to search patterns in files with data from other files. This works almost perfectly except that it takes ages to run on larger files. I am wondering if there is a way to speed up this process or have something else that is quicker with the... (15 Replies)
Discussion started by: SDohmen
Df(3pm) 						User Contributed Perl Documentation						   Df(3pm)

NAME
    Filesys::Df - Perl extension for filesystem disk space information.

SYNOPSIS
    use Filesys::Df;

    #### Get information by passing a scalar directory/filename value
    my $ref = df("/tmp");  # Default output is 1K blocks
    if(defined($ref)) {
       print "Total 1k blocks: $ref->{blocks}\n";
       print "Total 1k blocks free: $ref->{bfree}\n";
       print "Total 1k blocks avail to me: $ref->{bavail}\n";
       print "Total 1k blocks used: $ref->{used}\n";
       print "Percent full: $ref->{per}\n";
       if(exists($ref->{files})) {
          print "Total inodes: $ref->{files}\n";
          print "Total inodes free: $ref->{ffree}\n";
          print "Inode percent full: $ref->{fper}\n";
       }
    }

    #### Get information by passing a filehandle
    open(FILE, "some_file");
    # Get information for filesystem at "some_file"
    my $ref = df(\*FILE);
    #### or
    my $ref = df(*FILE);
    #### or
    my $fhref = \*FILE;
    my $ref = df($fhref);

    #### Get information in other than 1k blocks
    my $ref = df("/tmp", 8192);  # output is 8K blocks
    my $ref = df("/tmp", 1);     # output is bytes

DESCRIPTION
    This module provides a way to obtain filesystem disk space information.

    This is a Unix-only distribution. If you want to gather this information
    for Unix and Windows, use "Filesys::DfPortable". The only major benefit
    of using "Filesys::Df" over "Filesys::DfPortable" is that "Filesys::Df"
    supports the use of open filehandles as arguments.

    The module should work with all flavors of Unix that implement the
    "statvfs()" and "fstatvfs()" calls, or the "statfs()" and "fstatfs()"
    calls. This includes Linux, *BSD, HP-UX, AIX, Solaris, Mac OS X, Irix,
    Cygwin, etc.

    "df()" requires an argument that represents the filesystem you want to
    query. The argument can be either a scalar directory/file name or an
    open filehandle. There is also an optional block size argument so you
    can tailor the size of the values returned. The default block size is
    1024, which causes the function to return the values in 1k blocks. If
    you want bytes, set the block size to 1.

    "df()" returns a reference to a hash. The keys available in the hash
    are:

    "{blocks}" = Total blocks on the filesystem.

    "{bfree}" = Total blocks free on the filesystem.

    "{bavail}" = Total blocks available to the user executing the Perl
    application. This can differ from "{bfree}" if you have per-user quotas
    on the filesystem, or if the superuser has a reserved amount. "{bavail}"
    can even be negative, for instance when more space is being used than is
    available to you.

    "{used}" = Total blocks used on the filesystem.

    "{per}" = Percent of disk space used. This is based on the disk space
    available to the user executing the application. In other words, if the
    filesystem has 10% of its space reserved for the superuser, then the
    percent used can go up to 110%.

    You can obtain inode information through the module as well, but you
    must call "exists()" on the "{files}" key first, to make sure the
    information is available. Some filesystems, for example some NFS
    filesystems, may not return inode information. The available inode keys
    are:

    "{files}" = Total inodes on the filesystem.

    "{ffree}" = Total inodes free on the filesystem.

    "{favail}" = Total inodes available to the user executing the
    application. See the rules for the "{bavail}" key.

    "{fused}" = Total inodes used on the filesystem.

    "{fper}" = Percent of inodes used on the filesystem. See the rules for
    the "{per}" key.

    There are also some undocumented keys that exist to maintain backwards
    compatibility: "{su_blocks}", "{user_blocks}", etc.

    If the "df()" call fails for any reason, it will return undef. This will
    probably happen if you do anything crazy like try to get information for
    /proc, if you pass an invalid filesystem name, or if there is an
    internal error. "df()" will "croak()" if you pass it an undefined value.

    Requirements: your system must contain "statvfs()" and "fstatvfs()", or
    "statfs()" and "fstatfs()", and you must be running Perl 5.6 or higher.

AUTHOR
    Ian Guthrie
    IGuthrie@aol.com

    Copyright (c) 2006 Ian Guthrie. All rights reserved. This program is
    free software; you can redistribute it and/or modify it under the same
    terms as Perl itself.

SEE ALSO
    statvfs(2), fstatvfs(2), statfs(2), fstatfs(2), df(1),
    Filesys::DfPortable, perl(1).

perl v5.14.2                      2006-06-25                         Df(3pm)
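The inode counters the module returns come from the same statvfs() data that df(1) reports, so they can be sanity-checked from the shell. A sketch, assuming GNU coreutils df on Linux:

```shell
# Inode usage for the filesystem holding /tmp; the itotal/iused/iavail
# columns correspond to the {files}, {fused}, and {favail} keys above.
df -i /tmp

# GNU df can also select the inode columns explicitly:
df --output=itotal,iused,iavail,ipcent /tmp
```

As with the module, some filesystems (for example some NFS mounts) report inode totals of 0 here, which is why the documentation says to test the "{files}" key with "exists()" first.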
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.