02-10-2012
For lvmo:
Code :
lvmo -a -v oravg
vgname = oravg
pv_pbuf_count = 512
total_vg_pbufs = 512
max_vg_pbufs = 16384
pervg_blocked_io_count = 2848
pv_min_pbuf = 512
max_vg_pbuf_count = 0
global_blocked_io_count = 2848
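The lvmo output shows pervg_blocked_io_count = 2848 with only 512 pbufs allocated per PV, i.e. I/Os in this VG have repeatedly been blocked waiting for a pbuf. A minimal sketch of raising the per-PV pbuf count with lvmo (1024 is an illustrative value, not sizing advice for this box):

```shell
# Raise the pbuf count for each PV in oravg (illustrative value, assuming
# pervg_blocked_io_count keeps growing under load):
lvmo -v oravg -o pv_pbuf_count=1024

# Re-check whether the blocked-I/O counters still climb afterwards:
lvmo -a -v oravg
```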
For ioo:
Code :
ioo -a | grep aio
aio_active = 1
aio_maxreqs = 65536
aio_maxservers = 30
aio_minservers = 3
aio_server_inactivity = 300
posix_aio_active = 0
posix_aio_maxreqs = 65536
posix_aio_maxservers = 30
posix_aio_minservers = 3
posix_aio_server_inactivity = 300
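aio_maxservers = 30 is a per-logical-CPU limit (the iostat header below reports maxserver=720, which matches 30 x 24 lcpu), and the maxreqs high-water mark in iostat peaks around 130, far below aio_maxreqs = 65536. If the AIO servers ever do become the bottleneck, a hedged sketch of raising the ceiling on AIX 6.1+ (where the legacy AIO tunables live under ioo; 64 is an illustrative value):

```shell
# Per-lcpu ceiling on AIO kernel threads (illustrative value, not a
# recommendation for this workload); -p makes it persistent across reboot.
ioo -p -o aio_maxservers=64

# Confirm the legacy AIO subsystem is active:
ioo -o aio_active
```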
For mount:
Code :
node mounted mounted over vfs date options
-------- --------------- --------------- ------ ------------ ---------------
/dev/hd4 / jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd2 /usr jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd9var /var jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd3 /tmp jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd1 /home jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd11admin /admin jfs2 Feb 02 07:31 rw,log=/dev/hd8
/proc /proc procfs Feb 02 07:31 rw
/dev/hd10opt /opt jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/livedump /var/adm/ras/livedump jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/fslv00 /u01 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv01 /bankadm jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv02 /smeadm jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv03 /infosys jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv04 /uatadm1 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv05 /uatadm2 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv06 /DB_Backups jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv07 /REPORTS jfs2 Feb 02 07:31 rw,log=/dev/hd8
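All the database filesystems are mounted plain rw. Assuming /u01 holds only Oracle datafiles (an assumption — cio must not be set on filesystems with binaries, archive logs, or mixed content), JFS2 concurrent I/O lets the database bypass the inode write lock and the file cache:

```shell
# Enable concurrent I/O on the datafile filesystem (assumes /u01 contains
# only Oracle datafiles -- verify before applying):
chfs -a options=rw,cio /u01

# Takes effect at the next mount:
umount /u01 && mount /u01
```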
Hi, here it is. The servers are going insane again.
For iostat:
Code :
iostat -A 2 10
System configuration: lcpu=24 drives=4 ent=6.00 paths=3 vdisks=0 maxserver=720
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
842.8 0.0 18 0 130 5.7 1.1 84.7 8.6 0.6 10.6
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 90.0 11755.1 873.2 64 6896
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
354.8 0.0 25 0 130 5.7 2.4 90.0 1.8 0.7 12.2
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 77.0 4971.9 364.1 32 5812
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
40.1 0.0 13 0 130 6.5 34.0 59.1 0.4 2.6 42.8
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 39.8 514.4 45.0 24 2592
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
182.0 0.0 41 0 130 15.2 61.4 20.7 2.8 5.2 86.8
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 86.0 3339.4 215.7 120 7992
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
376.2 0.0 46 0 130 8.3 1.5 81.5 8.7 1.0 16.2
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 87.5 5979.9 361.2 124 9080
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
421.0 0.0 16 0 130 8.0 1.3 83.6 7.1 0.9 14.8
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 85.5 7500.7 465.6 316 7416
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
733.6 0.0 16 0 130 10.3 2.0 80.2 7.5 1.2 19.6
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 89.5 12812.5 807.1 372 9216
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
809.0 0.0 14 0 130 9.0 2.5 79.8 8.7 1.1 18.2
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 5.5 113.1 29.6 0 84
hdisk1 5.0 113.1 29.6 0 84
hdisk8 99.0 13659.7 1177.8 516 9632
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
870.7 0.0 30 0 130 8.4 1.9 79.8 9.9 1.0 16.5
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 2.5 118.7 28.0 0 89
hdisk1 2.5 118.7 28.0 0 89
hdisk8 100.0 13466.7 1176.0 316 9784
cd0 0.0 0.0 0.0 0 0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
110.7 0.0 25 0 130 9.2 1.9 87.8 1.2 1.1 17.7
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk8 96.0 1717.1 125.7 408 9956
cd0 0.0 0.0 0.0 0 0
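Note that in every interval hdisk8 runs at 77–100% tm_act while hdisk0/hdisk1 (presumably the rootvg pair) sit idle: all database I/O funnels through a single disk. To see per-disk service times and queue depths rather than just throughput, the extended iostat report helps (flags per the AIX iostat man page):

```shell
# -D adds per-disk service-time and queue statistics, -T timestamps each
# interval, -l prints one line per disk:
iostat -DTl 2 10
```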
For vmstat:
Code :
vmstat -wt 2 10
System configuration: lcpu=24 mem=43776MB ent=6.00
kthr memory page faults cpu time
------- --------------------- ------------------------------------ ------------------ ----------------------- --------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hr mi se
3 5 5332179 267293 0 0 0 0 0 0 872 67095 12478 6 2 87 5 0.72 12.0 08:17:38
3 5 5333619 265694 0 0 0 0 0 0 407 36802 8098 6 2 88 4 0.76 12.7 08:17:40
4 3 5241734 357241 0 0 0 0 0 0 347 24027 4011 5 4 89 2 0.71 11.9 08:17:42
14 1 5240262 358637 0 0 0 0 0 0 81 12622 1707 7 47 46 1 3.33 55.6 08:17:44
12 5 5334239 264497 0 0 0 0 0 0 353 58126 8237 13 47 34 5 4.26 71.0 08:17:46
5 3 5334903 263760 0 0 0 0 0 0 869 56980 14657 7 2 83 8 0.83 13.8 08:17:48
3 2 5335191 263413 0 0 0 0 0 0 661 51832 12989 5 1 84 10 0.58 9.7 08:17:50
3 2 5335417 263121 0 0 0 0 0 0 0 0 0 4 1 86 9 0.51 8.5 08:17:52
2 2 5335725 262748 0 0 0 0 0 0 170 14532 3256 5 1 92 1 0.65 10.8 08:17:54
1 2 5335968 262442 0 0 0 0 0 0 574 39562 9331 4 2 84 10 0.55 9.2 08:17:56
For vmstat -v:
Code :
vmstat -v
11206656 memory pages
10828768 lruable pages
248616 free pages
3 memory pools
1343188 pinned pages
80.0 maxpin percentage
3.0 minperm percentage
90.0 maxperm percentage
51.4 numperm percentage
5573666 file pages
0.0 compressed percentage
0 compressed pages
51.4 numclient percentage
90.0 maxclient percentage
5573666 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2484 filesystem I/Os blocked with no fsbuf
0 client filesystem I/Os blocked with no fsbuf
17400 external pager filesystem I/Os blocked with no fsbuf
48.0 percentage of memory used for computational pages
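The line "17400 external pager filesystem I/Os blocked with no fsbuf" points at JFS2 (external pager) buffer starvation, consistent with the blocked-I/O counters in the lvmo output above. A hedged sketch of the usual ioo knob (32 is an illustrative value; verify the current default on this TL with `ioo -L j2_dynamicBufferPreallocation` before changing anything):

```shell
# Let JFS2 preallocate more dynamic fsbufs (value is illustrative, in 16 KB
# slabs); -p makes the change persistent:
ioo -p -o j2_dynamicBufferPreallocation=32

# Watch whether the blocked-with-no-fsbuf counters keep climbing:
vmstat -v | grep fsbuf
```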
For vmstat -s:
Code :
vmstat -s
8287091950 total address trans. faults
145986799 page ins
252876407 page outs
0 paging space page ins
0 paging space page outs
0 total reclaims
6351948830 zero filled pages faults
372268119 executable filled pages faults
308073096 pages examined by clock
0 revolutions of the clock hand
174117315 pages freed by the clock
9608977 backtracks
2540378 free frame waits
0 extend XPT waits
11749011 pending I/O waits
398863545 start I/Os
59679605 iodones
9815133595 cpu context switches
81257292 device interrupts
823962209 software interrupts
445828327 decrementer interrupts
63094 mpc-sent interrupts
63094 mpc-receive interrupts
287741 phantom interrupts
0 traps
12314453621 syscalls