disk performance


 
Posted 04-20-2008

Hello,

I have an AIX p570 system with SAN disks. I am running a write performance test on a logical volume (LV) spread over four disks. During the test I ran filemon to trace the disk activity; its output is at the end of this message. I see my LV throughput at about 100 MB/s, with two of the four disks at 48 and 50 MB/s each. But when I check the "Detailed Logical Volume Stats" section, the average write time (msec) for the LV is 165.017, which looks like very bad performance, even though a throughput of 100 MB/s is not bad. When I check the stats of the disks used by the LV, I see these values:

hdisk21: write times (msec): avg 2.472
hdisk19: write times (msec): avg 2.141
hdisk20: write times (msec): avg 3.103
hdisk18: write times (msec): avg 2.941

The disk performance looks good, so I don't understand why the logical volume shows a 165.017 msec average write time while its member disks average only about 2.9 msec.

Can somebody help me understand this?
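For what it's worth, here is a quick Little's-law estimate using the figures from the filemon output below. The interpretation is an assumption on my part, not something filemon states: the LV-level write time can include time a request spends queued above the disks before it is issued, so the gap between 165 msec and ~2.5 msec would mean a large amount of write data in flight at the LV layer.

```python
# Back-of-the-envelope check using Little's law (L = lambda * W):
# data outstanding at the LV layer ~= throughput * average write time.
# The numbers are copied from the filemon output below.

lv_throughput_kb_s = 100692.7   # /dev/lvu03 throughput (KB/s)
lv_avg_write_ms = 165.017       # LV-level avg write time (msec)
disk_avg_write_ms = 2.472       # hdisk21 avg write time (msec)

# KB/s * seconds = KB in flight; divide by 1024 for MB.
outstanding_mb = lv_throughput_kb_s * (lv_avg_write_ms / 1000.0) / 1024.0
print(f"~{outstanding_mb:.1f} MB of writes in flight at the LV layer")
print(f"LV latency is ~{lv_avg_write_ms / disk_avg_write_ms:.0f}x "
      f"the disk service time")
```

Roughly 16 MB of writes would be pending at any instant, which is far more than the disks themselves are holding given their ~2.5 msec service times.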


filemon output:

Mon Apr 7 14:00:18 2008
System: AIX corbeau2 Node: 5 Machine:

Cpu utilization: 87.7%

Most Active Logical Volumes
------------------------------------------------------------------------
util #rblk #wblk KB/s volume description
------------------------------------------------------------------------
0.94 1648 4002960 100692.7 /dev/lvu03 /u03
0.76 97080 81184 4482.3 /dev/lvu01 /u01
0.09 2200 592 70.2 /dev/hd6 paging
0.08 5264 0 132.4 /dev/hd2 /usr
0.03 224 3440 92.1 /dev/lvu02 /u02
0.00 8 168 4.4 /dev/lvu99 /u99
0.00 0 56 1.4 /dev/lvxperf /home/xperf
0.00 16 16 0.8 /dev/hd3 /tmp
0.00 0 64 1.6 /dev/hd8 jfs2log
0.00 56 0 1.4 /dev/hd4 /
0.00 56 16 1.8 /dev/lvtivoli /usr/local/Tivoli
0.00 112 0 2.8 /dev/hd10opt /opt
0.00 16 8 0.6 /dev/hd1 /home
0.00 0 16 0.4 /dev/hd9var /var

Most Active Physical Volumes
------------------------------------------------------------------------
util #rblk #wblk KB/s volume description
------------------------------------------------------------------------
0.88 0 1942264 48836.7 /dev/hdisk21 Hitachi Disk Array (Fibre)
0.80 0 1990240 50043.0 /dev/hdisk19 Hitachi Disk Array (Fibre)
0.32 7184 1840 226.9 /dev/hdisk7 Hitachi Disk Array (Fibre)
0.26 7096 1864 225.3 /dev/hdisk3 Hitachi Disk Array (Fibre)
0.26 41720 39544 2043.3 /dev/hdisk6 Hitachi Disk Array (Fibre)
0.22 40824 35968 1930.9 /dev/hdisk2 Hitachi Disk Array (Fibre)
0.17 6104 774 172.9 /dev/hdisk0 N/A
0.05 1616 774 60.1 /dev/hdisk1 N/A
0.04 944 12744 344.2 /dev/hdisk18 Hitachi Disk Array (Fibre)
0.04 704 11880 316.4 /dev/hdisk20 Hitachi Disk Array (Fibre)
0.02 200 1408 40.4 /dev/hdisk5 Hitachi Disk Array (Fibre)
0.01 24 1680 42.8 /dev/hdisk9 Hitachi Disk Array (Fibre)
0.00 0 152 3.8 /dev/hdisk17 Hitachi Disk Array (Fibre)
0.00 0 320 8.0 /dev/hdisk4 Hitachi Disk Array (Fibre)
0.00 0 208 5.2 /dev/hdisk8 Hitachi Disk Array (Fibre)
0.00 8 8 0.4 /dev/hdisk14 Hitachi Disk Array (Fibre)
0.00 0 8 0.2 /dev/hdisk13 Hitachi Disk Array (Fibre)




------------------------------------------------------------------------
Detailed Logical Volume Stats (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/lvu03 description: /u03
reads: 108 (0 errs)
read sizes (blks): avg 15.3 min 8 max 32 sdev 3.0
read times (msec): avg 13.097 min 0.549 max 59.028 sdev 9.786
read sequences: 108
read seq. lengths: avg 15.3 min 8 max 32 sdev 3.0
writes: 28970 (0 errs)
write sizes (blks): avg 138.2 min 8 max 256 sdev 120.0
write times (msec): avg 165.017 min 0.193 max 984.655 sdev 204.232
write sequences: 622
write seq. lengths: avg 6435.6 min 8 max 373752 sdev 22977.0
seeks: 730 (2.5%)
seek dist (blks): init 26105968,
avg 25171381.9 min 8 max 61346840 sdev 22164062.4
time to next req(msec): avg 0.671 min 0.000 max 395.235 sdev 4.141
throughput: 100692.7 KB/sec
utilization: 0.94

------------------------------------------------------------------------
Detailed Physical Volume Stats (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/hdisk21 description: Hitachi Disk Array (Fibre)
writes: 13968 (0 errs)
write sizes (blks): avg 139.1 min 8 max 256 sdev 120.0
write times (msec): avg 2.472 min 0.180 max 83.713 sdev 3.145
write sequences: 724
write seq. lengths: avg 2682.7 min 8 max 5120 sdev 835.2
seeks: 724 (5.2%)
seek dist (blks): init 15442432,
avg 10645.7 min 8 max 2879776 sdev 151047.3
seek dist (%tot blks):init 29.44188,
avg 0.02030 min 0.00002 max 5.49046 sdev 0.28798
time to next req(msec): avg 1.359 min 0.001 max 852.818 sdev 9.000
throughput: 48836.7 KB/sec
utilization: 0.88

VOLUME: /dev/hdisk19 description: Hitachi Disk Array (Fibre)
writes: 14335 (0 errs)
write sizes (blks): avg 138.8 min 8 max 256 sdev 120.0
write times (msec): avg 2.141 min 0.185 max 46.215 sdev 2.416
write sequences: 755
write seq. lengths: avg 2636.1 min 8 max 5120 sdev 869.9
seeks: 755 (5.3%)
seek dist (blks): init 20762912,
avg 29970.3 min 8 max 9420592 sdev 415219.3
seek dist (%tot blks):init 39.58568,
avg 0.05714 min 0.00002 max 17.96090 sdev 0.79164
time to next req(msec): avg 1.357 min 0.001 max 752.214 sdev 8.334
throughput: 50043.0 KB/sec
utilization: 0.80


VOLUME: /dev/hdisk18 description: Hitachi Disk Array (Fibre)
reads: 62 (0 errs)
read sizes (blks): avg 15.2 min 8 max 32 sdev 3.4
read times (msec): avg 11.522 min 0.536 max 59.010 sdev 9.029
read sequences: 62
read seq. lengths: avg 15.2 min 8 max 32 sdev 3.4
writes: 133 (0 errs)
write sizes (blks): avg 95.8 min 8 max 256 sdev 112.7
write times (msec): avg 2.941 min 0.376 max 14.409 sdev 2.888
write sequences: 96
write seq. lengths: avg 132.8 min 16 max 1792 sdev 403.6
seeks: 158 (81.0%)
seek dist (blks): init 34316440,
avg 11553363.9 min 224 max 44481264 sdev 9664640.5
seek dist (%tot blks):init 65.42626,
avg 22.02715 min 0.00043 max 84.80608 sdev 18.42619
time to next req(msec): avg 100.068 min 0.003 max 2513.966 sdev 315.817
throughput: 344.2 KB/sec
utilization: 0.04

VOLUME: /dev/hdisk20 description: Hitachi Disk Array (Fibre)
reads: 46 (0 errs)
read sizes (blks): avg 15.3 min 8 max 16 sdev 2.3
read times (msec): avg 15.112 min 0.795 max 50.735 sdev 10.322
read sequences: 46
read seq. lengths: avg 15.3 min 8 max 16 sdev 2.3
writes: 134 (0 errs)
write sizes (blks): avg 88.7 min 16 max 256 sdev 109.3
write times (msec): avg 3.103 min 0.316 max 14.647 sdev 3.158
write sequences: 102
write seq. lengths: avg 116.5 min 16 max 1792 sdev 372.0
seeks: 148 (82.2%)
seek dist (blks): init 5138800,
avg 10379053.0 min 64 max 38712400 sdev 8700956.2
seek dist (%tot blks):init 9.79742,
avg 19.78826 min 0.00012 max 73.80741 sdev 16.58887
time to next req(msec): avg 89.540 min 0.002 max 4046.617 sdev 392.143
throughput: 316.4 KB/sec
utilization: 0.04
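As a sanity check (my own arithmetic, copied from the physical-volume table above), the two busy disks account for nearly all of the LV's throughput, which suggests the writes are effectively landing on just hdisk19 and hdisk21 during this test:

```python
# Compare the combined write throughput of the two busy member disks
# against the LV-level figure (all values from the filemon report).
disk_kb_s = {"hdisk21": 48836.7, "hdisk19": 50043.0,
             "hdisk18": 344.2, "hdisk20": 316.4}
lv_kb_s = 100692.7  # /dev/lvu03 (includes a small amount of reads)

busy = disk_kb_s["hdisk21"] + disk_kb_s["hdisk19"]
print(f"hdisk19+hdisk21: {busy:.1f} KB/s "
      f"({100 * busy / lv_kb_s:.0f}% of the LV's {lv_kb_s} KB/s)")
```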