What is the limitation in AIX? | Unix Linux Forums | AIX



AIX is IBM's industry-leading UNIX operating system that meets the demands of applications that businesses rely upon in today's marketplace.

What is the limitation in AIX?



Tags
aix, unix

Closed Thread    
 
#1  Old 05-07-2013
System Admin 77 (Registered User)
Join Date: May 2013, Location: USA, Posts: 68

What is the limitation in AIX?

Hi All,

I have a few questions:

1) What is the maximum number of files that we can save under a single directory in AIX? (Assume we have enough storage/disk space.)

2) And what is the maximum number of sub-directories inside a directory?
I know that every directory is a (special) file, so if I get an answer to my 1st question it will answer the 2nd one too. Correct me if I'm wrong.

Any idea is highly appreciated.
#2  Old 05-07-2013
verdepollo (Registered User)
Join Date: Mar 2010, Location: Mexico, Posts: 725
Using JFS2, there is no hard limit as far as I know.

There might be some limitations on the number of inodes your filesystem can allocate, although JFS2 can also perform on-demand inode allocation.

From IBM's official documentation:
Quote:
[...] the number of i-nodes available is limited by the size of the file system itself.
Theoretically JFS2 file systems can support files up to 2 PB in size. In reality, however, there is a pseudo-hard limit (the OS will warn you if you try to exceed it) set to 32 TB, with individual files no larger than 16 TB.

So, if you were given an infinite amount of disk space under JFS2, it would be possible to have a practically unlimited number of files, as long as the sum of their sizes did not exceed 2 PB.

This means you still won't be able to store the whole Internet on your system.

EDIT: And yes, in the eyes of the OS, a directory is still a file.
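
If you want to see how many inodes a file system currently holds, AIX "df" reports them in the Iused / %Iused columns; a hedged sketch (the mount point /data is just an example):

Code:
# Example mount point; on AIX, df -g prints sizes in GB plus the
# Iused and %Iused inode columns for the file system.
df -g /data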
#3  Old 05-07-2013
DGPickett (Forum Advisor)
Join Date: Oct 2010, Location: Southern NJ, USA (Nord), Posts: 4,409
1) Lots, but large directories are slow to process, so nobody goes there. Think of a path name for a complex object: instead of dumping 30k of them into one directory, look for natural separators in the name, put slashes there, and voila, smaller directories.

2) Limited only by path length. Welcome to recursion. Lots of Java folks go nuts under Windows' 255-character limit. On UNIX it is usually 1024, but I believe you can compile a more generous number into your kernel (you can query the actual limits as shown below).
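
A quick way to query those limits for a given file system is getconf; a hedged sketch (/home is just an example directory):

Code:
# PATH_MAX: maximum pathname length below the given directory
# NAME_MAX: maximum length of a single file name component
getconf PATH_MAX /home
getconf NAME_MAX /home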

Each directory is an inode, just like a file but marked for directory handling. Think of it as a big dumb list of entry names and inode numbers, nothing else. Things like pipes and devices are a lot more 'special'.
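
You can look at that name-to-inode list directly with ls (any directory will do; /tmp is just an example):

Code:
# -a includes . and .., -i prints the inode number stored next to each name
ls -ai /tmp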

Lots of OSes have just directories and flat files. Soft and hard links are not always there. Devices live somewhere outside the file tree, and if you want pipe behavior, you have to program it yourself.
#4  Old 05-07-2013
System Admin 77 (Registered User)
Join Date: May 2013, Location: USA, Posts: 68
Thanks for your prompt response, verdepollo & DGPickett.
Appreciate your ideas.

And I found an IBM link on this:

pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.prftungd%2Fdoc%2Fprftungd%2Fdiffs_jfs_enhanced_jfs.htm

@verdepollo -

As you said, there is no limit on the number of i-nodes in JFS2, so I guess there is no limit on the number of files in a JFS2 filesystem (directory) either.
Correct me if I'm wrong.
#5  Old 05-07-2013
DGPickett (Forum Advisor)
Join Date: Oct 2010, Location: Southern NJ, USA (Nord), Posts: 4,409
There is no standard tool for cleaning up a bloated directory, either -- you have to move the keepers to a new directory, trash the old one, and rename the new one into the old place. Every lookup usually has to scan the whole directory, and it is not a structured scan -- a directory is not a hash map or even a tree. Oversized directories are one standard thing to check for on a slow computer.
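
A minimal sketch of that manual cleanup, with made-up paths and a made-up keep pattern (*.log) purely for illustration:

Code:
# Move the keepers into a fresh directory (this assumes the bloated
# directory has no subdirectories worth keeping).
mkdir /data/big_dir.new
find /data/big_dir -type f -name '*.log' -exec mv {} /data/big_dir.new/ \;
# Discard the bloated directory and rename the new one into its place.
rm -rf /data/big_dir
mv /data/big_dir.new /data/big_dir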
#6  Old 05-07-2013
System Admin 77 (Registered User)
Join Date: May 2013, Location: USA, Posts: 68
@DGPickett

Thanks for your reply, but sorry, I really do not understand your comment.
#7  Old 05-07-2013
bakunin (Forum Staff, Bughunter Extraordinaire)
Join Date: May 2005, Location: In the leftmost byte of /dev/kmem, Posts: 4,210
What DGPickett means is the following:

A directory is quite similar to a file, and the bigger a file gets, the longer it takes the system to read it, which is to be expected. Run a "grep" against a 10 GB file and it will take longer than against a 1 kB file.

Let us consider the case where you issue a command


Code:
grep regexp /path/to/some/file

What happens? Before "grep" can start its work, the operating system has to find out which file to open. So it looks in the directory "/path/to/some" and searches there for the inode of "file". A "directory" is nothing else than a (quite unsorted) list of file names and inode numbers. The longer this list is, the longer it will take the OS to search it and find the inode it is interested in.

Usually you won't even notice this difference, because the OS uses otherwise unused parts of memory to buffer such information. This is part of the "file system cache": the system won't read the directory information from disk, but use the copy it has already stored in memory. As memory is much faster than disk, this speeds things up considerably. But as the directory gets bigger and bigger, and memory is a limited resource, at some point the list might not fit in memory any more, additionally hurting the speed with which it is searched.

Bottom line: even if there are no theoretical limits, there is a practical limit to directory sizes. This practical limit keeps being pushed as hardware gets faster, memory gets bigger, disks get faster, and so on, but it still remains.

To split a large directory there is no "standard tool" like there is "split" for files. Just create new directories and use "mv" to move files from one to the other. A command like


Code:
mv /path/to/file /other/path

will physically move the file only if the directories "/path/to" and "/other/path" are not part of the same filesystem. If they are, it is simply a matter of removing the directory entry from the one list and putting it into the other. This takes the same time regardless of file size, because the file itself is not touched, just the "file metadata" - information about the file instead of the file itself.
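
A quick way to check up front whether two directories live in the same filesystem (and therefore whether the "mv" is a cheap metadata update or a physical copy) is to compare what "df" reports for both; the paths here are again just examples:

Code:
# If both lines show the same filesystem and mount point,
# mv between them only rewrites directory entries.
df /path/to /other/path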

I hope this clears things up.

bakunin