Poor Performance of server


 
# 1  
Old 02-07-2012

Hi,

I am a newly registered user here on this UNIX forum, and a new system administrator for AIX 6.1. One of our servers performs poorly every time our application (FINACLE) runs many processes/instances.

I use nmon or topas to monitor server utilization. I checked the CPU Idle% and it is high; however, the DISK Busy% is constantly high (during really poor performance, the DISK Busy% is at 100% most of the time). I also noticed that the FILE/TTY Readch and Writech values are constantly high. See the topas snapshot below:
Code:
CPU User% Kern% Wait% Idle% Physc Entc
ALL 0.7 0.4 5.5 93.5 0.10 1.7
 
Disk Busy% KBPS TPS KB-Read KB-Writ
Total 100.0 10.8K 219.0 0.0 10.8K
 
FileSystem KBPS TPS KB-Read KB-Writ
Total 6.5K 648.6 3.8K 2.7K
 
Name PID CPU% PgSp Owner
oracle 1311046 0.6 10.6 oracle
vmmd 458766 0.2 1.2 root
aioserve 10682432 0.0 0.4 uatadm2
topas 25755848 0.0 8.9 bankadm
tnslsnr 25493548 0.0 20.4 oracle
oracle 5439982 0.0 14.7 oracle
 
EVENTS/QUEUES          FILE/TTY
Cswitch      585       Readch   3915.9K
Syscall     2055       Writech  2759.0K
Reads        630       Rawin          0
Writes        90       Ttyout      1628
Forks          1       Igets          0
Execs          0       Namei         99
Runqueue     1.1       Dirblk         0
Waitqueue    0.5
                       Memory
PAGING                 Real,MB    43776
Faults      1639       % Comp        47
Steals         0       % Noncomp     51
PgspIn         0       % Client      51
PgspOut        0
PageIn         0       PAGING SPACE
PageOut      691       Size,MB    12288
Sios         700       % Used         1
                       % Free        99
NFS (calls/sec)
SerV2          0       WPAR Activ     0
CliV2          0       WPAR Total     0
SerV3          0
CliV3          0



Here are our server specs:
Code:
System Model: IBM,8205-E6B
Machine Serial Number: 0678F8P
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 6
Processor Clock Speed: 3720 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 CARD_DB
Memory Size: 43776 MB
Good Memory Size: 43776 MB
Platform Firmware level: AL720_082
Firmware Version: IBM,AL720_082
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: CARDDB
IP Address: 10.10.10.100
Sub Netmask: 255.255.255.0
Gateway: 10.10.10.10
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 12288MB
Percent Used: 1%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 546 458 109..48..83..109..109
hdisk1 active 546 390 29..60..83..109..109
==============================================================================
oravg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk8 active 4228 68 00..00..00..00..68
==============================================================================

Every time this happens, we try to kill the CPU-consuming processes, but the DISK Busy% stays high. If we reboot the server, performance becomes okay again, but we can't do that in production. Any suggestions on how to optimize this? Is it our architecture (having only one hard disk for our data)? Is bottlenecking taking place here? What can we do to optimize our server? Should we make any upgrades, for example increasing physical memory?

Thank you very much. I hope you can help since I am not a UNIX expert.

# 2  
Old 02-08-2012
Killing processes to free resources is not a good idea. You might shoot something you still need.

Yes, from the look of it you have a severe bottleneck with your 1 hdisk. Is this hdisk a physical disk or a LUN from SAN storage?
Do you use asynchronous I/O (AIO), and have you tuned it? Oracle will most probably benefit from it, as well as from additional disks.

nmon/topas has a page that displays AIO stats; I think it was Shift+A, not sure though - it's easy to try out anyway.

You could post the output of
Code:
iostat -A 2 10
# and
vmstat -wt 2 10
# and
lsattr -El aio0

(run the first two commands while there is traffic on your box) and use code tags when doing so, thanks.
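If it helps, you can also record nmon data across a busy window and analyze it afterwards - a hedged example, the interval and sample count are just my guesses, adjust as needed:
Code:
# one sample every 30 seconds, 120 samples (about an hour); -f writes a
# hostname_date_time.nmon file in the current directory for later analysis
nmon -f -s 30 -c 120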
# 3  
Old 02-08-2012
Also post Oracle's filesystemio_options setting, the Oracle version, and something about your disk layout - are your filesystems set up with minimum or maximum distribution, which block size, and so on?
The output of the mount command will help, and you should definitely mount your Oracle filesystems with the noatime option (and, if you have a dedicated dump device, with rbrw); a sketch follows below.
If you don't want to use SETALL in filesystemio_options, you might want to consider mounting the filesystems containing Oracle data and redo logs with cio. Also tell us how many volume groups with how many disks you have, and similar things.
In many cases a hot disk is easily avoidable by changing your filesystems' logical volumes from minimum to maximum distribution and reorganizing the volume group, as sketched below.
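A hedged sketch, assuming an LV named datalv in a volume group datavg (placeholder names) with more than one disk - maximum distribution cannot help on a single-PV VG:
Code:
# check the current inter-physical-volume policy (minimum or maximum)
lslv datalv | grep INTER-POLICY
# switch the LV to maximum distribution across the PVs in its VG
chlv -e x datalv
# re-spread the partitions that are already allocated
reorgvg datavg datalv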
On top of what zaxxon already asked for, I would also be interested in the vmstat -v and vmstat -s outputs.
Please gather all the data while the system is busy and slow - not during an idle timeframe, or the data won't help.
# 4  
Old 02-09-2012
Thanks zaxxon & zxmaus,
I didn't know where to begin before this thread was opened.

For iostat -A 2 10, vmstat -wt 2 10, vmstat -v and vmstat -s, I will post snapshots of these once the issue occurs again.

For lsattr -El aio0, I didn't get anything, so I tried lsattr -El sys0 (I hope it will do).
--> See attachment - lsattr sys0.jpg

For "Do you use asynchronous I/O (AIO) and have it tuned?"
--> I have no idea for this since I am new here and I came here in the middle of the application roll-out to production. I wish I had a clue. No knowledge on the history of the servers here.
However i checked the I/O stat in nmon and here it is:
-->
Code:
Total AIO processes=  72 Actually in use=   0  CPU used=   1.1%
         All time peak=  90     Recent peak=   7      Peak=   3.4%

For "Is this hdisk a physical disk or a LUN from SAN storage?":
--> I'm not entirely sure if it's a LUN from SAN, but here's what I gathered from prtconf/lsdev:
Code:
hdisk8     Available 05-00-00    SAS RAID 10 Disk Array
from lspv hdisk8
PHYSICAL VOLUME:    hdisk8                   VOLUME GROUP:     oravg
PV IDENTIFIER:      00f678f86bb5b458 VG IDENTIFIER     00f678f800004c00000001326bb5b750
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            256 megabyte(s)          LOGICAL VOLUMES:  8
TOTAL PPs:          4228 (1082368 megabytes) VG DESCRIPTORS:   2
FREE PPs:           68 (17408 megabytes)     HOT SPARE:        no
USED PPs:           4160 (1064960 megabytes) MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..68
USED DISTRIBUTION:  846..846..845..845..778
MIRROR POOL:        None

For oracle version:
-->Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit

For disk layout/fs setup:

-->In summary, we have three hdisks: rootvg resides on two hdisks, and the applications (oravg) reside on hdisk8.
Below are the details:
Code:
lspv
hdisk0          00f678f866fa237c                    rootvg          active
hdisk1          00f678f86b3707b7                    rootvg          active
hdisk8          00f678f86bb5b458                    oravg           active

lsvg -l oravg
oravg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
fslv00              jfs2       210     210     1    open/syncd    /u01
fslv01              jfs2       391     391     1    open/syncd    /bankadm
fslv02              jfs2       200     200     1    open/syncd    /smeadm
fslv03              jfs2       200     200     1    open/syncd    /infosys
fslv04              jfs2       1758    1758    1    open/syncd    /uatadm1
fslv05              jfs2       600     600     1    open/syncd    /uatadm2
fslv06              jfs2       800     800     1    open/syncd    /DB_Backups

There are also several Oracle database instances in oravg. Here they are:
Code:
/u01/oracle/oracle/dbs
# ls -lrt *.ora
-rwxr-xr-x    1 oracle   oinstall       8385 Sep 12 1998  init.ora
-rw-r--r--    1 oracle   oinstall      12920 May 03 2001  initdw.ora
-rw-r-----    1 oracle   oinstall        922 Sep 16 09:54 initorcl.ora
-rw-r-----    1 oracle   oinstall       3584 Sep 16 10:15 spfileorcl.ora
-rw-rw-r--    1 oracle   oinstall       5149 Oct 17 02:07 initU1SISDB.ora
-rw-rw-r--    1 oracle   oinstall       5176 Oct 17 02:15 initUASISDB.ora
-rw-rw-r--    1 oracle   oinstall       5172 Feb 05 00:40 initBANKDB.ora
-rw-rw-r--    1 oracle   oinstall       5161 Feb 05 00:40 initSMEDB.ora
-rw-rw-r--    1 oracle   oinstall       5172 Feb 05 00:40 initUAT1DB.ora
-rw-rw-r--    1 oracle   oinstall       5174 Feb 05 00:41 initUAT2DB.ora

For filesystemio_options:
--> I have no idea where to locate this. Is it executed or set in a configuration file?

for "...min or max distribution, blocksize ...
output of mount command will help and definitely mounting your oracle filesystems with noatime option and if you have a dedicated dump device with rbrw..."
--> I am totally alost with the min/max tuning. no idea for this yet.


Again, thanks very much for the help. It's greatly appreciated.

# 5  
Old 02-09-2012
If the attachment is not viewable, here's the lsattr -El sys0 output:
Code:
SW_dist_intr    false              Enable SW distribution of interrupts              True
autorestart     true               Automatically REBOOT OS after a crash             True
boottype        disk               N/A                                               False
capacity_inc    0.01               Processor capacity increment                      False
capped          true               Partition is capped                               False
conslogin       enable             System Console Login                              False
cpuguard        enable             CPU Guard                                         True
dedicated       false              Partition is dedicated                            False
enhanced_RBAC   true               Enhanced RBAC Mode                                True
ent_capacity    6.00               Entitled processor capacity                       False
frequency       6400000000         System Bus Frequency                              False
fullcore        false              Enable full CORE dump                             True
fwversion       IBM,AL720_082      Firmware version and revision levels              False
ghostdev        0                  Recreate devices in ODM on system change          True
id_to_partition 0X80000B9662900002 Partition ID                                      False
id_to_system    0X80000B9662900000 System ID                                         False
iostat          false              Continuously maintain DISK I/O history            True
keylock         normal             State of system keylock at boot time              False
log_pg_dealloc  true               Log predictive memory page deallocation events    True
max_capacity    12.00              Maximum potential processor capacity              False
max_logname     9                  Maximum login name length at boot time            True
maxbuf          20                 Maximum number of pages in block I/O BUFFER CACHE True
maxmbuf         0                  Maximum Kbytes of real memory allowed for MBUFS   True
maxpout         8193               HIGH water mark for pending write I/Os per file   True
maxuproc        2048               Maximum number of PROCESSES allowed per user      True
min_capacity    3.00               Minimum potential processor capacity              False
minpout         4096               LOW water mark for pending write I/Os per file    True
modelname       IBM,8205-E6B       Machine name                                      False
ncargs          256                ARG/ENV list size in 4K byte blocks               True
nfs4_acl_compat secure             NFS4 ACL Compatibility Mode                       True
pre430core      false              Use pre-430 style CORE dump                       True
pre520tune      disable            Pre-520 tuning compatibility mode                 True
realmem         44826624           Amount of usable physical memory in Kbytes        False
rtasversion     1                  Open Firmware RTAS version                        False
sed_config      select             Stack Execution Disable (SED) Mode                True
systemid        IBM,020678F8P      Hardware system identifier                        False
variable_weight 0                  Variable processor capacity weight                False


# 6  
Old 02-09-2012
From the data you provided so far, you have one RAID 10 raidset of SAS disks (so internal storage), roughly 1 TB in total, presented to the system as a single disk - for 6 DBs and everything else running on the system apart from root. This just asks for problems, since you access all of your storage through a single serial path.

Even worse, all your filesystems share the same JFS2 log. If I assume correctly that your filesystems are not mounted with the noatime option, that means every single read (which includes things as simple as ls) and every single write on 8 different filesystems competes for access to that log - which by nature makes the log the hotspot of the entire system. You can confirm this with filemon, as sketched below.
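A hedged sketch of confirming the hotspot with filemon (run it during the slow period; the output file name is arbitrary):
Code:
# trace logical-volume activity for about a minute, then stop the trace
filemon -o /tmp/fmon.out -O lv
sleep 60
trcstop
# the "Most Active Logical Volumes" section shows whether the log LV leads the list
grep -p "Most Active Logical Volumes" /tmp/fmon.out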

Still waiting for the vmstat outputs, but I bet that your system has only the default filesystem tuning and is running out of buffers most of the time.
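If the counters confirm blocked I/Os, the usual knobs look roughly like this - a hedged sketch only, the values are illustrative and should be sized from your own vmstat -v and lvmo counters:
Code:
# raise the LVM pbufs for the busy volume group (takes effect immediately)
lvmo -v oravg -o pv_pbuf_count=1024
# let JFS2 preallocate more dynamic bufstructs (persistent; default is 16)
ioo -p -o j2_dynamicBufferPreallocation=32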

Can you post lvmo -a -v oravg output please to confirm?

Regarding AIO - don't worry. On AIX 6.1 you find it with the ioo -a | grep aio command, and AIX will turn it on automatically if Oracle or any other application wants to use it.

filesystemio_options is a parameter set within Oracle (ask your DBA) and can be set to NONE (the default, I think, in your Oracle version), ASYNCH, or SETALL. The SETALL option lets Oracle decide to use CIO together with async I/O, but it won't let you access open database files outside of the database itself other than with RMAN - which might be a problem if you don't do RMAN backups. A sketch of how a DBA would check it follows below.
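For example, a hedged sketch (run as the oracle user with ORACLE_SID and ORACLE_HOME set; the change itself is your DBA's call and needs an instance restart):
Code:
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER filesystemio_options
-- to change it (takes effect after an instance restart):
-- ALTER SYSTEM SET filesystemio_options=SETALL SCOPE=SPFILE;
EXIT
EOF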

Please run a simple mount on the box to allow us to see if you are using any mount options on the filesystems.

So far:

- consider giving each of your oravg filesystems its very own log device; a sketch follows below
- consider another storage solution and a different filesystem layout if possible, since 6 DBs in the same filesystem - even if that filesystem has its own log - are still not such a great idea. If that is not possible, then your disk will naturally stay busy, since you only have the one.
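A hedged sketch of the per-filesystem log, using /u01 from your lsvg output (the log LV name is a placeholder, and the filesystem must be unmounted while the log is switched):
Code:
# create and format a new JFS2 log LV inside oravg
mklv -t jfs2log -y u01loglv oravg 1
echo y | logform /dev/u01loglv
# point /u01 at its own log and remount it
umount /u01
chfs -a log=/dev/u01loglv /u01
mount /u01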

Regards
zxmaus
# 7  
Old 02-10-2012
For
Code:
lvmo -a -v oravg
 
vgname = oravg
pv_pbuf_count = 512
total_vg_pbufs = 512
max_vg_pbufs = 16384
pervg_blocked_io_count = 2848
pv_min_pbuf = 512
max_vg_pbuf_count = 0
global_blocked_io_count = 2848
 
for ioo -a | grep aio
 
 
aio_active = 1
aio_maxreqs = 65536
aio_maxservers = 30
aio_minservers = 3
aio_server_inactivity = 300
posix_aio_active = 0
posix_aio_maxreqs = 65536
posix_aio_maxservers = 30
posix_aio_minservers = 3
posix_aio_server_inactivity = 300

For mount:


Code:
node mounted         mounted over          vfs    date         options
---- --------------- --------------------- ------ ------------ ---------------
     /dev/hd4        /                     jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/hd2        /usr                  jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/hd9var     /var                  jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/hd3        /tmp                  jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/hd1        /home                 jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/hd11admin  /admin                jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /proc           /proc                 procfs Feb 02 07:31 rw
     /dev/hd10opt    /opt                  jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/livedump   /var/adm/ras/livedump jfs2   Feb 02 07:31 rw,log=/dev/hd8
     /dev/fslv00     /u01                  jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv01     /bankadm              jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv02     /smeadm               jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv03     /infosys              jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv04     /uatadm1              jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv05     /uatadm2              jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv06     /DB_Backups           jfs2   Feb 02 07:31 rw,log=/dev/loglv00
     /dev/fslv07     /REPORTS              jfs2   Feb 02 07:31 rw,log=/dev/hd8

Hi, here it is - the server is going insane again.

For
Code:
iostat -A 2 10
 
System configuration: lcpu=24 drives=4 ent=6.00 paths=3 vdisks=0 maxserver=720
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     842.8  0.0    18     0     130             5.7   1.1   84.7      8.6   0.6   10.6
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          90.0     11755.1     873.2         64      6896
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     354.8  0.0    25     0     130             5.7   2.4   90.0      1.8   0.7   12.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          77.0     4971.9     364.1         32      5812
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     40.1  0.0    13     0     130             6.5  34.0   59.1      0.4   2.6   42.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          39.8     514.4      45.0         24      2592
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     182.0  0.0    41     0     130            15.2  61.4   20.7      2.8   5.2   86.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          86.0     3339.4     215.7        120      7992
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     376.2  0.0    46     0     130             8.3   1.5   81.5      8.7   1.0   16.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          87.5     5979.9     361.2        124      9080
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     421.0  0.0    16     0     130             8.0   1.3   83.6      7.1   0.9   14.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          85.5     7500.7     465.6        316      7416
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     733.6  0.0    16     0     130            10.3   2.0   80.2      7.5   1.2   19.6
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          89.5     12812.5     807.1        372      9216
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     809.0  0.0    14     0     130             9.0   2.5   79.8      8.7   1.1   18.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           5.5     113.1      29.6          0        84
hdisk1           5.0     113.1      29.6          0        84
hdisk8          99.0     13659.7     1177.8        516      9632
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     870.7  0.0    30     0     130             8.4   1.9   79.8      9.9   1.0   16.5
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           2.5     118.7      28.0          0        89
hdisk1           2.5     118.7      28.0          0        89
hdisk8         100.0     13466.7     1176.0        316      9784
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     110.7  0.0    25     0     130             9.2   1.9   87.8      1.2   1.1   17.7
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          96.0     1717.1     125.7        408      9956
cd0              0.0       0.0       0.0          0         0

For
Code:
vmstat -wt 2 10
 
System configuration: lcpu=24 mem=43776MB ent=6.00
 kthr          memory                         page                       faults                 cpu             time
------- --------------------- ------------------------------------ ------------------ ----------------------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa    pc    ec hr mi se
  3   5    5332179     267293     0     0     0     0      0     0   872  67095 12478  6  2 87  5  0.72  12.0 08:17:38
  3   5    5333619     265694     0     0     0     0      0     0   407  36802  8098  6  2 88  4  0.76  12.7 08:17:40
  4   3    5241734     357241     0     0     0     0      0     0   347  24027  4011  5  4 89  2  0.71  11.9 08:17:42
 14   1    5240262     358637     0     0     0     0      0     0    81  12622  1707  7 47 46  1  3.33  55.6 08:17:44
 12   5    5334239     264497     0     0     0     0      0     0   353  58126  8237 13 47 34  5  4.26  71.0 08:17:46
  5   3    5334903     263760     0     0     0     0      0     0   869  56980 14657  7  2 83  8  0.83  13.8 08:17:48
  3   2    5335191     263413     0     0     0     0      0     0   661  51832 12989  5  1 84 10  0.58   9.7 08:17:50
  3   2    5335417     263121     0     0     0     0      0     0     0      0     0  4  1 86  9  0.51   8.5 08:17:52
  2   2    5335725     262748     0     0     0     0      0     0   170  14532  3256  5  1 92  1  0.65  10.8 08:17:54
  1   2    5335968     262442     0     0     0     0      0     0   574  39562  9331  4  2 84 10  0.55   9.2 08:17:56


For
Code:
vmstat -v
 

             11206656 memory pages
             10828768 lruable pages
               248616 free pages
                    3 memory pools
              1343188 pinned pages
                 80.0 maxpin percentage
                  3.0 minperm percentage
                 90.0 maxperm percentage
                 51.4 numperm percentage
              5573666 file pages
                  0.0 compressed percentage
                    0 compressed pages
                 51.4 numclient percentage
                 90.0 maxclient percentage
              5573666 client pages
                    0 remote pageouts scheduled
                    0 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2484 filesystem I/Os blocked with no fsbuf
                    0 client filesystem I/Os blocked with no fsbuf
                17400 external pager filesystem I/Os blocked with no fsbuf
                 48.0 percentage of memory used for computational pages


For
Code:
vmstat -s
 

           8287091950 total address trans. faults
            145986799 page ins
            252876407 page outs
                    0 paging space page ins
                    0 paging space page outs
                    0 total reclaims
           6351948830 zero filled pages faults
            372268119 executable filled pages faults
            308073096 pages examined by clock
                    0 revolutions of the clock hand
            174117315 pages freed by the clock
              9608977 backtracks
              2540378 free frame waits
                    0 extend XPT waits
             11749011 pending I/O waits
            398863545 start I/Os
             59679605 iodones
           9815133595 cpu context switches
             81257292 device interrupts
            823962209 software interrupts
            445828327 decrementer interrupts
                63094 mpc-sent interrupts
                63094 mpc-receive interrupts
               287741 phantom interrupts
                    0 traps
          12314453621 syscalls

