Linux disk performance
Posted by jimthompson in Operating Systems > Linux > Red Hat, 08-04-2009 11:27 AM

I am getting absolutely dreadful iowait stats on my disks when I am trying to install some applications.

I have two physical disks, on which I have created two separate volume groups with one logical volume in each. I have dumped some stats below.

My dual-core CPU is not being over-utilised (30 to 40% utilisation), but the disk I/O wait is in the 70 to 80% range.

Any ideas what could be degrading disk performance so badly?
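The stats I have collected so far are below. As a next step I plan to take an extended per-device sample; I am assuming the sysstat package that provides sar here also ships iostat (it normally does):

[root@ebiz1 ~]# iostat -x 5 3

If one device shows await climbing and %util pinned near 100 while the other sits mostly idle, that should at least tell me which spindle to blame.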

[root@ebiz1 ~]# df -k
Filesystem                       1K-blocks      Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   71609640  50967164   16946248  76% /
/dev/mapper/VolGroup01-LogVol03  721077416  35042564  649406168   6% /u01
/dev/hdc1                           101086     11871      83996  13% /boot
tmpfs                              1545640         0    1545640   0% /dev/shm



[root@ebiz1 ~]# sar 5 5
Linux 2.6.18-128.el5 (ebiz1.northgate-is.com)    08/04/2009

04:34:08 PM  CPU  %user  %nice  %system  %iowait  %steal  %idle
04:34:13 PM  all   3.46   0.00    14.17    61.18    0.00  21.20
04:34:18 PM  all   3.23   0.00    19.40    60.12    0.00  17.25
04:34:23 PM  all   2.11   0.00    14.08    80.75    0.00   3.05
04:34:28 PM  all   1.14   0.00    12.31    86.55    0.00   0.00
04:34:33 PM  all   5.99   0.00    19.98    74.03    0.00   0.00
Average:     all   3.14   0.00    15.90    72.60    0.00   8.36
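The box-wide sar numbers do not say which disk is hurting, so a per-device report seems the obvious follow-up. On this version of sysstat the devices are listed as dev<major>-<minor> (I believe newer versions take -p to print real device names):

[root@ebiz1 ~]# sar -d 5 5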

[root@ebiz1 ~]# vmstat -d
disk- ------------reads------------ ------------writes----------- -----IO------
        total merged   sectors        ms    total  merged  sectors         ms  cur   sec
ram0        0      0         0         0        0       0        0          0    0     0
ram1        0      0         0         0        0       0        0          0    0     0
ram2        0      0         0         0        0       0        0          0    0     0
ram3        0      0         0         0        0       0        0          0    0     0
ram4        0      0         0         0        0       0        0          0    0     0
ram5        0      0         0         0        0       0        0          0    0     0
ram6        0      0         0         0        0       0        0          0    0     0
ram7        0      0         0         0        0       0        0          0    0     0
ram8        0      0         0         0        0       0        0          0    0     0
ram9        0      0         0         0        0       0        0          0    0     0
ram10       0      0         0         0        0       0        0          0    0     0
ram11       0      0         0         0        0       0        0          0    0     0
ram12       0      0         0         0        0       0        0          0    0     0
ram13       0      0         0         0        0       0        0          0    0     0
ram14       0      0         0         0        0       0        0          0    0     0
ram15       0      0         0         0        0       0        0          0    0     0
hdc     55278  62611  12423468  14122068   284273  127680  3329180   44306342    0  9170
hdd     22246   4615    999212    777089    86526 6531679 52845920 1606956424    0 12056
dm-0   116332      0  12419066  35397657   416117       0  3328936  339026889    0  9169
dm-1      113      0       904      2199       29       0      232      22845    0     3
hda         0      0         0         0        0       0        0          0    0     0
md0         0      0         0         0        0       0        0          0    0     0
dm-2    25977      0    997514   1013661  6620817       0 52966536  357126262   15 12053
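A couple of things jump out at me from the vmstat figures, if I am reading the columns right. First, almost all the write traffic is going through dm-2 (VolGroup01, i.e. /u01): roughly 6.6 million writes merged down to about 87k actual operations on hdd, so the installer appears to be hammering /u01 with small writes. Second, hdc and hdd are the master and slave on the same IDE channel, so they share one cable and cannot transfer data at the same time, which cannot help when root and /u01 are both busy. Since these are plain IDE drives, I will also check that DMA has not been left off on hdd and take a quick raw read timing (both are read-only, so safe on a live box):

[root@ebiz1 ~]# hdparm -d /dev/hdd     # hoping for "using_dma = 1 (on)"
[root@ebiz1 ~]# hdparm -tT /dev/hdd    # cached vs buffered read timings

If using_dma comes back 0, then hdparm -d1 /dev/hdd would be the thing to test next.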
 
