Thanks for posting the update.
Quote:
Originally Posted by
-=XrAy=-
This system has a lot of filesystems with a lot of small files (cobol sources, etc.).
Maybe the insufficent Inode-cache prevent the System to use the whole FS-cache?
I noticed in another project, under different circumstances, that the speed at which file metadata (basically the contents of the inodes) can be acquired dramatically affects file operations:
A network relied heavily on NFS shares and was redesigned to operate from one huge GPFS data pool (~500TB). The first thing we noticed was that KDE had to be removed from all the clients, because the damn thing tries to create a hidden file DB at startup. That is fine when you have a local disk with 20k files, but not when you are looking at several million of them. (It might be possible to tweak KDE to stop doing that, but nobody bothered. Desktops are a waste of resources anyway.)
The second observable phenomenon was that backup/restore times could be dramatically improved by moving the metadata onto SSDs. They didn't even have to be big: 200-300GB sufficed.
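For reference, the way to put GPFS metadata on dedicated SSDs is to declare them as metadata-only NSDs in the system pool (the only pool that may hold metadata). A minimal stanza-file sketch for mmcrnsd - the NSD names and device paths here are made up:

```
# NSD stanza file (hypothetical devices)
# SSDs carry metadata only; spinning disks carry data only.
%nsd: nsd=md_ssd01 device=/dev/hdisk10 usage=metadataOnly failureGroup=1 pool=system
%nsd: nsd=md_ssd02 device=/dev/hdisk11 usage=metadataOnly failureGroup=2 pool=system
%nsd: nsd=data01   device=/dev/hdisk20 usage=dataOnly     failureGroup=1 pool=system
%nsd: nsd=data02   device=/dev/hdisk21 usage=dataOnly     failureGroup=2 pool=system
```

Because all metadata lands on the SSD NSDs, inode scans (which is what backup software does most of the time) never touch the slow disks.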
Now this fits in well with what you say about cache sizes and metadata caching. AIX file I/O can probably be improved by tuning the resources set aside specifically for handling file metadata.
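On AIX with JFS2 the inode and metadata caches are ioo tunables, so this is something one could actually experiment with. A sketch, not a recommendation - the doubled values below are purely illustrative, and you would want to watch the cache hit rates before and after:

```shell
# Show the current JFS2 inode/metadata cache settings
# (relative units, not bytes)
ioo -a | grep -E 'j2_(inodeCacheSize|metadataCacheSize)'

# Illustrative only: double both caches; -p also makes the
# change persistent across reboots
ioo -p -o j2_inodeCacheSize=800 -o j2_metadataCacheSize=800
```

On a system with many filesystems full of small files (like the COBOL sources mentioned above) a bigger inode cache means fewer metadata reads from disk, which is exactly the effect the SSD gave us in the GPFS case.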
Thanks again for sharing.
bakunin