Filesystem mystery: disks are not busy on one machine, very busy on a similar box
We have a filesystem mystery on our hands. Given:
2 machines, A and Aa.
Machine Aa is the problem machine.
Machine A is running Ubuntu, kernel 184.108.40.206 #1 SMP Wed Feb 20 08:46:16 CST 2008 x86_64 GNU/Linux. Machine Aa is running RHEL5.3, kernel 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux.
Both are running the anticipatory I/O scheduler.
Both are running two software applications, x and y:
x reads from the network and writes to disk.
y reads from x's output files, filters them, and writes annotated data about x.
Both x and y perform their work in partition W, and they are the only applications that open files in that partition. I have used lsof, and I have also shut down the applications and immediately (and successfully) unmounted the partition, so I know those are the only two things with files open in there.
On machine A, our disk utilization for the device whereupon partition W is mounted is very low. Interactive response is very good.
On machine Aa, our disk utilization is very high. Interactive response can sometimes lag; e.g., an ls of a directory in the busy partition can take 10-15 seconds to return.
So, how can we determine what causes the slowness on machine Aa?
Here are some example iostat readings.
First, machine A (the good machine):
Now machine Aa. Notice how the w/s values are so much smaller, yet %util is 100:
We are at a loss. What can we look for? I have changed the I/O scheduler around (to cfq, noop, deadline); this has made no difference to Aa's stats. Perhaps 60 GB/day is written to partition W. While that is an appreciable amount, the busier of the two machines seems to have no problem with it.
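For reference, the active elevator can be checked per device through sysfs (a read-only sketch; cciss devices appear with the slash encoded, e.g. /sys/block/cciss!c0d1):

```shell
# List the elevator for every block device; the active scheduler is the
# one shown in [brackets].  Switching it (as root) is a matter of, e.g.:
#   echo deadline > /sys/block/<dev>/queue/scheduler
for q in /sys/block/*/queue/scheduler; do
  [ -r "$q" ] || continue
  printf '%s: ' "$q"
  cat "$q"
done
```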
On machine Aa I have run a raw write-speed test and get 189 MB/sec to partition W. Against the regular root disk I get 213 MB/sec. So the array itself seems to be fairly fast.
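The throughput check was presumably a large sequential dd write; a sketch (the target path, block size, and count are assumptions):

```shell
# Hypothetical throughput probe -- point the target at a file inside
# partition W.  conv=fsync folds the final flush into the timing so the
# controller's write cache cannot hide the whole cost.
target=/tmp/ddtest          # e.g. a path on partition W
dd if=/dev/zero of="$target" bs=1M count=64 conv=fsync
rm -f "$target"
```

dd prints the elapsed time and MB/sec figure on stderr when it finishes.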
Both machine A and machine Aa have an HP P400i RAID card. The card is set to cache writes, and the cache is split 256 MB read / 256 MB write. Both machines' disks are configured as a hardware RAID 5 array. The devices therefore use HP's cciss driver, and there is no software RAID or logical-volume layer built on top of them.
Machine Aa is an HP DL360 G6; machine A is an older G5. Yet machine Aa is the one with the problem...?
The HP ProLiant utilities on machine Aa show no issues with the drives, and there are no orange warning lights on the front of the machine.
Is there anything else we can look at? Thanks.
Are the machines reading/writing the same directories? Directory size can really affect the performance of ls and other file operations.
Are the filesystems mounted over NFS? If so, are the mount points off the root directory /?
IO request queue lengths are huge on the bad box as well.
Thanks for the reply. Here are my replies:
No, they are not the same size. On the bad machine, the directories were 10 times as large. I have asked the developer to tune his application so the directory sizes are comparable.
No, they are not mounted over NFS. They are on a locally-mounted hardware RAID-5 partition.
Yes, I expect the queue lengths to be long, since the machine is backed up writing to the disk.
Now the developer has reduced the number of files being written to. In the directories where the files live, I counted how many files had been written to in the last 10 seconds. On the good machine, about 3000 files are written in that window; on the bad machine, about 125. These are rough averages.
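The per-directory count was presumably gathered with something like the following (a sketch; the reference-file trick sidesteps find's one-minute granularity for -mmin, and touch -d needs GNU coreutils):

```shell
# Count files modified in the last 10 seconds.  Run inside the data
# directory; the reference file's path is arbitrary.
ref=$(mktemp)
touch -d '10 seconds ago' "$ref"
find . -type f -newer "$ref" | wc -l
rm -f "$ref"
```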
Now I am still seeing this on the bad machine:
Note that at 100.10% utilization we are doing only writes, and the await value is very high, around 500 ms. It seems that writes are killing us for some reason.
---------- Post updated 08-13-09 at 12:56 PM ---------- Previous update was 08-12-09 at 02:52 PM ----------
OK, we figured it out: our partition is a RAID 5 array controlled by an HP P400i RAID controller. We had turned on the Drive Write Cache, but the Array Accelerator needs to be enabled for *each logical drive*, and we only had it on for logical drive 1; our problematic partition is logical drive 2. Once we turned it on there as well, we saw much, much better iostat results. Using the same iostat command as above, we now have:
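The enabling command itself is not in the post; on a P400i it would be something along these lines with HP's hpacucli utility (the slot number is an assumption, and this can only run on the HP hardware itself):

```shell
# Hypothetical hpacucli invocation -- the slot number must match the
# controller ("hpacucli ctrl all show" lists it).
hpacucli ctrl slot=0 logicaldrive 2 modify arrayaccelerator=enable
# Verify the setting afterwards:
hpacucli ctrl slot=0 logicaldrive 2 show
```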