If "myfolder" is a directory try limiting the search with -type d
I suspect the websphere directory has loads of files and subdirectories. Some filesystems like ufs lose performance as the number of files in a directory grows in size.
The only other possibility I can think of is your inode cache is horribly small.
Running "echo ufs_ninode/D | mdb -k" as root in the global zone will show what the ufs inode cache is set to.
I cannot recommend a good value offhand, but if it is less than 129797 (the default with maxusers set to 2048), then it needs help.
For the inode hit rate, see if the sar -g column %ufs_ipf shows several non-zero entries over a two-hour period. If so, bump ufs_ninode by 50%.
Of all of these I vote for the "too many files" in a directory problem.
Is that with the find actually running? Not much IO going on...
OK, so what does a few sets of output from "mpstat 1" show?
And if you have root, what does "echo ::memstat | mdb -k" show?
---------- Post updated at 05:33 PM ---------- Previous update was at 05:27 PM ----------
FWIW, I don't think using the "-type d" argument to find is going to help at all, since find still has to do a stat() call on every entry in the directory tree anyway; -type d just filters the output.
I was thinking the problem was caused by having to wade through the stat() calls on every file in the directory tree, with disk I/O being the dominant performance cost. But the iostat output doesn't seem to show that.
Of all of these I vote for the "too many files" in a directory problem.
Me too, but I'd like to offer another possibility: a filesystem (or several) mounted with concurrent I/O. This bypasses OS caching completely, and while it speeds up database operations with concurrent writer processes, it makes random (non-concurrent) I/O awfully slow. Check the mount options for the filesystems involved to find out if this is the case.
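For example, a quick scan of the mount table can flag this (a sketch; the option names, e.g. cio on AIX or forcedirectio on ufs, vary by platform):

```shell
# look for concurrent/direct I/O mount options; names differ per platform
mount -v 2>/dev/null | grep -Ei 'cio|forcedirectio' \
    || echo "no concurrent/direct I/O mount options found"
```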
I need to check whether the files returned by the ls command in the below script are a sub-string of the argument passed to the script, i.e. $1.
The below script works fine but is too slow.
If the ls command takes 12 secs to complete printing all files with the while loop, then using a posix substring check... (6 Replies)
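One way to make that check cheap is a pure-shell substring test with a POSIX case pattern, so no external process is spawned per file; a minimal sketch (the function name is illustrative):

```shell
# return 0 if $2 occurs anywhere inside $1, using only shell pattern matching
contains() {
    case "$1" in
        *"$2"*) return 0 ;;
        *)      return 1 ;;
    esac
}
```

Inside the loop, contains "$1" "$file" then costs no fork, unlike piping to grep or expr.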
Hi,
I have a lengthy script which I have trimmed down to a test case, as below.
more run.sh
#!/bin/bash
paths="allpath.txt"
while IFS= read -r loc
do
echo "Working on $loc"
startdir=$loc
find "$startdir" -type f \( ! -name "*.log*" ! -name "*.class*" \) -print |
while read file
do... (8 Replies)
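A common reason such loops crawl is per-file work done in a pipeline subshell; a sketch of a faster, safer skeleton (the printf stands in for the elided loop body):

```shell
#!/bin/bash
startdir=${1:-.}
# process substitution keeps the while loop in the current shell, and
# -print0 with read -d '' survives spaces or newlines in filenames
while IFS= read -r -d '' file; do
    printf 'found: %s\n' "$file"   # placeholder for the elided loop body
done < <(find "$startdir" -type f ! -name '*.log*' ! -name '*.class*' -print0)
```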
Hi,
I am trying to search for a directory called "mont" under the directory path "/opt/app/var/dumps".
Although "mont" sits directly in the parent directory "dumps", i.e. "/opt/app/var/dumps/mont", and it can never be inside any sub-directory of "dumps", my below find command which also checks... (5 Replies)
Hi,
I am running an ssh connection test in a script; how can I add a timeout to abort the process if it takes too long?
ssh -i ~/.ssh/ssl_key useraccount@computer1
Thank you.
- j (1 Reply)
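Two complementary controls, sketched below: ssh's own ConnectTimeout bounds the connection setup, and GNU coreutils timeout (wrapped in an illustrative helper) bounds the whole run:

```shell
# run any command but give up after the given number of seconds
# (GNU timeout exits 124 when the deadline fires)
run_with_deadline() {
    secs=$1; shift
    timeout "$secs" "$@"
}

# intended use, with the key and host from the question:
#   run_with_deadline 30 ssh -o ConnectTimeout=5 -o BatchMode=yes \
#       -i ~/.ssh/ssl_key useraccount@computer1 exit
```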
Hi,
I wish to check the return value of wget $url.
However, some urls are designed to take 45 minutes or more to return.
All I need is to check whether the URL can be reached or not using wget.
How can I get wget to return a value within a few seconds? (8 Replies)
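wget can be told to fetch only headers and give up quickly; a sketch (the helper name and the 5-second value are arbitrary examples):

```shell
# exit 0 if the URL answers within 5 seconds, non-zero otherwise
url_reachable() {
    wget --spider --quiet --timeout=5 --tries=1 "$1"
}

# e.g.: if url_reachable "$url"; then echo reachable; fi
```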
I have a file called "library" with the following content
libnxrdbmgr.a
libnxrdbmgr.so
libnxtk.a
libnxtk.so
libora0d_nsc_osi.so
I am trying to find out whether these libraries are on my machine or not. The find command runs for a few seconds and then hangs.
Can someone please help me and... (3 Replies)
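One find per library over the whole machine is what usually hangs; scanning a few known roots once per name is far cheaper. A sketch, where check_libs and the search directories are illustrative:

```shell
# $1: file listing library names, remaining args: directories to search
check_libs() {
    listfile=$1; shift
    while IFS= read -r lib; do
        if find "$@" -name "$lib" 2>/dev/null | grep -q .; then
            printf 'FOUND    %s\n' "$lib"
        else
            printf 'MISSING  %s\n' "$lib"
        fi
    done < "$listfile"
}

# e.g.: check_libs library /usr/lib /opt
```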
Hi all,
I wrote this shell script to validate field numbers for an input file, but it takes forever to complete validation on a file. The average speed is about 9 mins/MB.
Can anyone tell me how to improve the performance of a shell script?
Thanks (12 Replies)
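Per-line shell loops pay for built-in parsing plus any external commands on every record; awk does the same field counting in a single process. A sketch, where the delimiter (|) and expected field count (3) are placeholder assumptions:

```shell
# illustrative data: the second record is one field short
printf 'a|b|c\nd|e\n' > sample.txt
# report every record whose field count differs from the expected 3
awk -F'|' 'NF != 3 { printf "line %d: %d fields\n", NR, NF }' sample.txt
```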
Hello,
I have a C program that takes anywhere from 5 to 100 arguments, and I'd like to run it from a script that makes sure it doesn't take too long to execute. If the C program takes more than 5 seconds to execute, I would like the shell script to kill it and return a short message to the user. ... (3 Replies)
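A portable sketch for systems without timeout(1); the helper name and ./myprog are illustrative:

```shell
# run a command, kill it after $1 seconds; returns the command's exit
# status (128+SIGTERM, i.e. 143, if the watchdog had to fire)
run_limited() {
    limit=$1; shift
    "$@" &
    pid=$!
    ( sleep "$limit"; kill "$pid" 2>/dev/null ) &
    watcher=$!
    wait "$pid"
    status=$?
    kill "$watcher" 2>/dev/null
    return "$status"
}

# e.g.: run_limited 5 ./myprog "$@" || echo "myprog killed or failed" >&2
```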
Hello,
I created a file: touch 1201093003 fichcomp
and inside a directory (which has a lot of files) I want to list all files created before this file:
find *.* \! -maxdepth 1 -newer fichcomp
but this command returned: bash: /usr/bin/find: Argument list too long
but I make a filter all... (1 Reply)
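"Argument list too long" comes from the shell expanding *.* before find even starts; give find a directory and let it do the matching. A sketch with illustrative file names (note ! -newer also matches files with the same timestamp, including the reference file itself):

```shell
# illustrative setup: one file older than the reference, one newer
touch -t 202001010000 old.dat
touch -t 202006010000 fichcomp
touch new.dat
# regular files in this directory only (no descent), not newer than fichcomp
find . -maxdepth 1 -type f ! -newer fichcomp
```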
Hi, I am trying to find the best way to measure how long a command takes to run, in milliseconds.
Is there such a way of doing this in Unix?
Thanks (3 Replies)
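With GNU date, %N yields nanoseconds, so elapsed milliseconds fall out of shell arithmetic; a minimal sketch timing a sleep (bash's time keyword with TIMEFORMAT='%3R' is another option):

```shell
start=$(date +%s%N)               # nanoseconds since the epoch (GNU date)
sleep 0.2                         # the command being timed
end=$(date +%s%N)
ms=$(( (end - start) / 1000000 ))
echo "elapsed: ${ms} ms"
```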