AIX lpar bad disk I/O performance - 4k per IO limitation ?


 
# 1  
Old 08-12-2016
AIX lpar bad disk I/O performance - 4k per IO limitation ?

Hi Guys,

I have a freshly installed VIO 2.2.3.70 on a p710 with 3 physical SAS disks, rootvg on hdisk0,
and 3 VIO clients through vscsi (AIX7.1tl4, AIX6.1tl9, RHEL6.5ppc); each LPAR has its rootvg installed on an LV in datavg (hdisk2), mapped to vhost0,1,2.

There is no VG on hdisk1; I use it for my investigation.
hdisk1 is mapped to all 3 LPARs (AIX7.1tl4, AIX6.1tl9, RHEL6.5ppc) through the same vhost0,1,2.

From the VIO server, "dd if=/dev/hdisk1 of=/dev/null bs=1M" with "iostat -DlT 1" running in parallel gives a throughput of 93.2MB/s (AS EXPECTED).
From lpar1 AIX7.1tl4, "dd if=/dev/hdisk1 of=/dev/null bs=1M" with "iostat -DlT 1" running in parallel on the VIO server gives a throughput of 10MB/s (TOO LOW!).
From lpar2 AIX6.1tl9, "dd if=/dev/hdisk1 of=/dev/null bs=1M" with "iostat -DlT 1" running in parallel on the VIO server gives a throughput of 10MB/s (TOO LOW!).
From lpar3 RHEL6.5ppc, "dd if=/dev/sdb of=/dev/null bs=1M" with "iostat -DlT 1" running in parallel on the VIO server gives a throughput of 113MB/s (AS EXPECTED).

Of course, the dd commands are run separately each time. There is clearly bad performance on the AIX LPARs; I tried modifying a few parameters such as queue_depth or max_transfer, with the same result (see the sketch below for how I checked and changed them).
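
For reference, this is how the two attributes can be inspected and changed on a client LPAR; hdisk1 and the values shown here are examples only, not recommendations:
Code:
# show the current values on the client disk
lsattr -El hdisk1 -a queue_depth -a max_transfer
# change them; the disk must be closed, or use -P to apply at the next boot
chdev -l hdisk1 -a queue_depth=32 -a max_transfer=0x100000 -P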

See below the iostat outputs on the VIO for each test.
If I compare the bps and tps values (the arithmetic is worked out below):
on the VIO, where the disk is local, 22761 I/O requests give 93MB/s (about 4k per I/O);
on the VIO clients running AIX, ~2500 I/O requests give 10MB/s (about 4k per I/O);
on the VIO client running RHEL, 1730 I/O requests give 113MB/s (about 64k per I/O).
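
For the record, the average I/O size is simply bps / tps (taking iostat's "M" as 10^6 bytes):
Code:
echo "93200000 / 22761" | bc    # VIO local:   ~4095 bytes  (~4 KB)
echo "10300000 / 2506"  | bc    # AIX client:  ~4110 bytes  (~4 KB)
echo "113400000 / 1730" | bc    # RHEL client: ~65549 bytes (~64 KB)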

It seems the I/O size is limited to 4k on AIX, and that could be the thing to modify...
Do you have any idea which parameter I could change to get acceptable performance on the AIX LPARs?

Please note that there is no filesystem or LVM in the picture; I access the hdisk directly through vscsi, that's it. So neither filesystem tuning nor LVM tuning is relevant in my case.

Code:
Disks:                      xfers                                read                                write                                  queue                    time
--------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                  %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                  act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
dd on VIO server:
hdisk1           97.0  93.2M 22761.0  93.2M   0.0  22761.0   0.1    0.1    5.8     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   1.0   0.0  19:19:29

dd on AIX7.1tl4 :
hdisk2          100.0  10.3M 2506.0  10.3M   0.0  2506.0   0.1    0.1   32.4     0    0   0.0   0.0    1.7   30.5     0    0   0.0    0.0    0.0    0.0   1.0   0.0  19:30:21

dd on AIX6.1tl9 :
hdisk1           80.0  10.5M 2559.0  10.5M   0.0  2559.0   0.1    0.1    5.9     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   1.0   0.0  19:35:43

dd on RHEL6.5ppc :
hdisk1           99.0 113.4M 1730.0 113.4M   0.0  1730.0   0.9    0.3    8.6     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.1    0.0   3.0   0.0  19:33:25

---------- Post updated at 10:43 PM ---------- Previous update was at 09:34 PM ----------

I think I got my answer: I was simply running "dd if=/dev/hdiskX ..." when I should have run "dd if=/dev/rhdiskX ...".
Using the raw (character) device instead of the block device increased the iostat result significantly! On AIX, reads of the block special file go through the buffer cache in 4 KB pages, which would explain the 4k-per-I/O pattern above, while the raw device passes the 1M request size straight down.
On Linux I was already using the block device, yet I didn't see such a large difference with the raw device.
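
Concretely, the corrected AIX test is just the same dd against the character device:
Code:
dd if=/dev/rhdisk1 of=/dev/null bs=1M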

To dd the raw device on Linux I did:
Code:
[root@lpar3 ~]# raw /dev/raw/raw1 /dev/sdb
/dev/raw/raw1:  bound to major 8, minor 16
[root@lpar3 ~]# dd if=/dev/raw/raw1 of=/dev/null bs=1M
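
As an aside (untested here, and assuming a GNU coreutils dd), opening the block device with O_DIRECT should give a similar cache-bypassing effect without binding a raw device, which may be handy since the raw driver is deprecated on newer kernels:
Code:
dd if=/dev/sdb of=/dev/null bs=1M iflag=direct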

I now have similar results in all 4 scenarios (with RHEL shown using both the block and the raw device):
Code:
Disks:                      xfers                                read                                write                                  queue                    time
--------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                  %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                  act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
dd on VIO server:
hdisk1           98.0 172.0M 164.0 172.0M   0.0  164.0   6.0    4.3   13.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  21:25:12

dd on AIX7.1tl4 :
hdisk1          100.0 163.8M 1250.0 163.8M   0.0  1250.0   3.0    0.1   17.7     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   6.0   0.0  21:28:54

dd on AIX6.1tl9 :
hdisk1          100.0 164.4M 627.0 164.4M   0.0  627.0   3.3    0.2   16.7     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   2.0   0.0  21:29:57

dd on RHEL6.5ppc :
hdisk1          100.0 118.0M 1801.0 118.0M   0.0  1801.0   0.9    0.3    6.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.1    0.0   3.0   0.0  21:32:07

dd on RHEL6.5ppc, using RAW :
hdisk1           95.0 171.9M 2623.0 171.9M   0.0  2623.0   3.1    0.3   14.3     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.2    0.0   7.0 164.0  21:34:02

Thanks for your help guys.
# 2  
Old 08-13-2016
Thank you for posting a follow-up with the solution. I have marked the thread as solved.

bakunin