Empty directory, large size and performance

 
# 1  
Old 01-20-2012
Empty directory, large size and performance

Hi,

I have a directory that I use as a working directory for a program. At the end of the procedure, its contents are deleted. When I run ls -l, this directory still appears to take up some space. After a little research, I saw in another thread on this forum that it's not really taking space; the directory is still sized for the inodes it was using. Those inodes are marked as reusable, so I don't lose any space.

My question is, does this affect the performance of the filesystem at all?
I know the easy solution is to erase the directory and recreate it.
But for my personal enlightenment I would really like to know.
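To show what I mean, here is a quick sketch of the behavior (the directory name is just an example): on ext3/ext4 the directory file grows as entries are added and typically does not shrink back when they are removed.

```shell
mkdir ./workdir_demo
ls -ld ./workdir_demo        # a fresh directory: usually 4096 bytes
# fill it with a few thousand entries so the directory file grows
for i in $(seq 1 2000); do touch "./workdir_demo/file$i"; done
ls -ld ./workdir_demo        # the directory file itself has grown
rm ./workdir_demo/file*
ls -ld ./workdir_demo        # empty again, but the size usually stays large
```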

Thanks in advance
Benoit Desmarais
# 2  
Old 01-20-2012
ls -l does not report the overall size of a directory's contents, as far as I know; the number you see is the size of the directory file itself.

Are you deleting the content of the files or the files themselves?

Inode usage does not affect performance in any way, there isn't any performance penalty for using them (or not using them).

Disk performance is usually affected by the number of reads/writes you issue at a time (aka I/O operations), the physical area of the disk you use (the outer tracks of the platter are faster, since more data passes under the head per revolution) and the spin speed of the platter. None of this applies to solid-state disks, of course.

As long as you don't run out of inodes, the only problem that may arise is that you run out of disk space.
# 3  
Old 01-20-2012
I know it's not reporting the overall size.
Perhaps I should link to that thread: www.unix.com/hp-ux/148724-empty-directory-big-size.html (I can't post a clickable link; it seems you need at least 5 posts to link directly and I'm a new member here...)

It's exactly as described in that thread. I wanted to know whether there is a performance issue tied to it.
# 4  
Old 01-20-2012
Well, that thread is about HP-UX, and I wouldn't expect an inode to behave the same on VxFS as on ext3/ext4; things can be quite different from platform to platform.

In RHEL, a file or a directory (which by the strict Linux definition is also a type of file) uses only a single inode, no matter what.

For a better overview of the real disk usage you can use "df" or "du".
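For example, the three views side by side (the path is just an example):

```shell
mkdir -p ./workdir_demo
ls -ld ./workdir_demo   # size of the directory *file* itself
du -sk ./workdir_demo   # kilobytes actually used by the directory tree
df -h  .                # free/used space on the whole filesystem
```

du and df report actual block usage, which is what matters for running out of space; the ls number for a directory is only the length of its entry table.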
# 5  
Old 01-20-2012
I understand that things can be very different from HP-UX to RHEL, but I do see the same behavior. I know the directory isn't taking any space on my filesystem, as seen with the du command. But since ls is reporting a number, I was wondering if there is a downside to reusing that directory instead of creating a new one: performance-wise, not disk-space-wise. Does the filesystem check something related to that number on the directory that could impact performance in the long run?
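To make the distinction concrete (the directory name is just an example), GNU stat shows both numbers at once: Size is the directory file's nominal length that ls reports, while Blocks is what actually counts against disk usage.

```shell
mkdir -p ./workdir_demo
# size:   the directory file's length in bytes (what ls -l shows)
# blocks: 512-byte blocks actually allocated (what du counts)
stat -c 'size=%s bytes, blocks=%b' ./workdir_demo
```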
# 6  
Old 01-20-2012
Quote:
Does the filesystem check something related to that number on the directory that could impact performance in the long run?
No. No performance hit is directly related to inode information, not since the 2.6 kernel branch.