What is the limitation in AIX?


 
# 1  
Old 05-07-2013
What is the limitation in AIX?

Hi All,

I have a few questions:

1) What is the maximum number of files that we can save under a single directory in AIX? (Assume we have enough storage/disk space.)

2) And what is the maximum number of sub-directories inside a directory?
I know that every directory is a (special) file, so if I get an answer to my 1st question it will answer the 2nd one too. Correct me if I'm wrong.

Any idea is highly appreciated.
# 2  
Old 05-07-2013
Using JFS2, there is no hard limit as far as I know.

There might be some limitations on the number of inodes your filesystem can allocate, although JFS2 can also perform on-demand inode allocation.

From IBM's official documentation:
Quote:
[...] the number of i-nodes available is limited by the size of the file system itself.
Theoretically, JFS2 file systems can support files up to 2 PB in size. In practice, however, there is a pseudo-hard limit (the OS will warn you if you try to exceed it) of 32 TB per file system, with individual files no larger than 16 TB.

So, if you were given an infinite amount of disk space under JFS2, you could store a practically unlimited number of files, as long as no single file exceeded the limits above.

This means you still won't be able to store the whole Internet in your system.

EDIT: And yes, to the eyes of the OS, a directory is still a file.
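
Incidentally, you can watch i-node consumption without any special tools: the default AIX "df" output already includes Iused and %Iused columns. A quick check, assuming /home is one of your JFS2 file systems:

Code:
df /home        # the Iused and %Iused columns show i-node consumption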
# 3  
Old 05-07-2013
1) Lots, but large directories are slow to process, so nobody goes there. Think of the path name for a complex object: instead of 30k of them in one directory, look for natural separations and put slashes there, and voila -- smaller directories.
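
As a sketch of that fan-out (the /data/flat directory and the report-YYYYMMDD.log naming are hypothetical examples):

Code:
cd /data/flat
for f in report-*.log; do
    year=`echo "$f" | cut -c8-11`   # the YYYY part of report-YYYYMMDD.log
    mkdir -p "$year"                # one subdirectory per year
    mv "$f" "$year/"
done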

2) Limited only by path length. Welcome to recursion. Lots of Java folks go nuts under Windows' 255-character limit. On UNIX the limit is usually 1024, but I believe you can compile a more generous number into your kernel.
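
Rather than guessing, you can ask the system itself; getconf is standard POSIX and present on AIX:

Code:
getconf PATH_MAX /      # longest absolute path name the filesystem supports
getconf NAME_MAX /      # longest single file name component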

Each directory is an inode, just like a file, but marked for directory handling. Think of it as a big dumb list of entry names and inode numbers, nothing else. Things like pipes and devices are a lot more 'special'.
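
You can look at that list directly: the -i flag of ls prints the inode number recorded next to each entry name:

Code:
ls -ai /tmp     # each line: an inode number and the name that maps to it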

Lots of OSes have just directories and flat files. Soft and hard links are not always there. Devices live somewhere outside the file tree, and if you want pipe behavior, you have to program it.
# 4  
Old 05-07-2013
Thanks for your prompt responses, verdepollo & DGPickett.
Appreciate your ideas...

And I found an IBM link on this:

pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.prftungd%2Fdoc%2Fprftungd%2Fdiffs_jfs_enhanced_jfs.htm

@Verdepollo -

As you said, we don't have a limitation on the number of i-nodes in JFS2, so I guess there is no limitation on the number of files in a JFS2 filesystem (directory).
Correct me if I'm wrong.
# 5  
Old 05-07-2013
There is no standard tool for cleaning up a bloated directory, either -- you must move the keepers to a new directory, trash the old one, and rename the new one into the old place. Every lookup usually has to scan the whole entry list; it is not a structured search. One standard check on a slow machine is for directories that have grown too big! A directory is not a hash map or even a tree.
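
A minimal sketch of that rebuild, assuming a hypothetical bloated directory /data/big in which only the keep* files matter:

Code:
mkdir /data/big.new                 # new, empty directory
mv /data/big/keep* /data/big.new/   # move only the keepers
rm -r /data/big                     # trash the bloated old directory
mv /data/big.new /data/big          # rename the new one into the old place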
# 6  
Old 05-07-2013
@DGPickett

thanks for your reply, but sorry -- I really do not understand your comment.
# 7  
Old 05-08-2013
What DGPickett means is the following:

A directory is quite similar to a file, and the bigger a file gets, the longer it takes the system to read it, which is to be expected. Run a "grep" against a 10 GB file and it will take longer than against a 1 KB file.

Let us consider the case where you issue a command

Code:
grep regexp /path/to/some/file

What happens? Before "grep" can start its work, the operating system has to find out which file to open. So it looks in the directory "/path/to/some" and searches there for the inode of "file". A "directory", then, is nothing more than a (quite unsorted) list of file names and inode numbers. The longer this list is, the longer it will take the OS to search it and find the inode it is interested in.
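
You can make those lookup steps visible with "ls": -d shows the directory entries themselves and -i the inode each name resolves to (using the hypothetical path from above):

Code:
ls -di /path /path/to /path/to/some   # each component is an entry in its parent's list
ls -i  /path/to/some/file             # the final name-to-inode lookup grep relies on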

Usually you won't even notice this difference, because the OS uses otherwise unused parts of memory to buffer such information. This is part of the "file system cache": the system won't read the directory information from disk, but will use the copy it has already stored in memory. As memory is much faster than disk, this speeds things up considerably. But as the directory gets bigger and bigger -- and memory is a limited resource -- at some point the list might not fit into memory any more, which additionally hurts the speed with which it can be searched.

Bottom line: even if there are no theoretical limits, there is a practical limit to directory sizes. This practical limit is pushed upward as hardware gets faster, memory bigger, and disks quicker, but it still remains.
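
A crude way to see this practical limit for yourself, assuming /tmp has room for a throwaway directory (results will vary with the cache state described above):

Code:
mkdir /tmp/bigdir && cd /tmp/bigdir
i=0
while [ $i -lt 50000 ]; do      # create 50,000 empty files
    touch f$i
    i=$((i+1))
done
time ls -d f49999               # forces a search of the huge entry list
cd / && rm -r /tmp/bigdir       # clean up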

To split a large directory there is no standard tool like "split" for files: just create new directories and use "mv" to move files from one to the other. A command like

Code:
mv /path/to/file /other/path

will physically move the file's data only if the directories "/path/to" and "/other/path" are not part of the same filesystem. If they are, it is simply a matter of removing the entry from the one directory list and putting it into the other. This takes the same time regardless of file size, because the file itself is not touched -- just "file metadata": information about the file instead of the file itself.
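
You can watch this happen, assuming "/path/to" and "/other/path" really are in the same filesystem: the inode number survives the move, proving only the directory entries changed:

Code:
ls -i /path/to/file       # note the inode number
mv /path/to/file /other/path/
ls -i /other/path/file    # same inode: only metadata was rewritten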

I hope this clears things up.

bakunin