01-12-2017
Sorry for not replying earlier; starting a new project kept me busy the last few days.
There are a few things that don't quite add up IMHO:
First, the initial vmstat output says ~200GB of memory, but the avm column shows only ~21 million pages, which (at 4KB per page) is ~80GB. Where does the difference come from? Please post the output of lsattr -El mem0 to verify how much (real) memory you actually have.
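As a rough sketch of that arithmetic (assuming the default 4 KB page size; the avm value is the one read from your posted vmstat output):

```shell
# Sanity check: convert vmstat's avm column (counted in 4 KB pages) to GB.
avm_pages=21000000                          # value read from the avm column
avm_gb=$(( avm_pages * 4 / 1024 / 1024 ))   # pages -> KB -> MB -> GB
echo "avm corresponds to roughly ${avm_gb} GB"
```

That comes out to roughly 80GB, nowhere near 200GB, which is why the lsattr -El mem0 output (it reports the real memory size in MB) matters.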
Second, you said you have 15 cores configured, but the vmstat output shows 16. I presume that was just a typo on your part, but please confirm.
Third, the ps outputs you posted later suggest that different DB instances are running (fininddb and finabrodb). How many database instances are running simultaneously?
Fourth, I don't understand why so many archiver processes show up in the ps outputs. What exactly is/are the DB(s) doing (in terms of how many requests, and of what typical size), and how many logs (of which size) are typically produced per time unit? Are there any dumps being taken, exports running, or the like?
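To quantify the archive log volume, a query like this against the standard v$archived_log view would answer that question (a sketch only; it assumes a SYSDBA-capable login on the DB server):

```shell
# Sketch: count archived logs per day and their total size in MB.
sqlplus -s / as sysdba <<'EOF'
SELECT trunc(completion_time)                        AS day,
       count(*)                                      AS logs,
       round(sum(blocks * block_size) / 1024 / 1024) AS mb
FROM   v$archived_log
GROUP  BY trunc(completion_time)
ORDER  BY 1;
EOF
```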
In light of the further information I am of the same opinion as Scrutinizer: you are perhaps a victim of double caching. The high number of pending I/Os and of filesystem I/Os blocked with no pbuf further supports this assumption. If (see above; this is why that information is important) you have only one DB instance, 80GB of RAM and nothing else running on the system, increase the SGA to ~60-70GB and see how that works. If you have set FILESYSTEMIO_OPTIONS=SETALL, as suggested by agent.kgb, Oracle should open its DB files with concurrent I/O even if the filesystem is not mounted with the CIO option. Concurrent I/O bypasses the OS caching of filesystem operations, but I presume you haven't activated it yet; otherwise we should not see this picture of two different caching layers blocking each other, even if the SGA is too small (as it probably is right now).
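For reference, the two changes would look roughly like this (a sketch assuming a single spfile-managed instance; the 64G value is just an example in the ~60-70GB range, and both parameters take effect only after an instance restart):

```shell
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
ALTER SYSTEM SET sga_max_size = 64G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 64G SCOPE = SPFILE;
EOF
```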
Finally, a suggestion: when you use vmstat on such systems, use the "-w" (wide) option. This way you get a neatly aligned table as output and it is easier to assess the picture.
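For example (interval and count are arbitrary here):

```shell
# Wide output, one sample every 5 seconds, 6 samples:
vmstat -w 5 6
# Adding -I switches to the I/O-oriented view (fi/fo and p columns),
# which is useful when chasing blocked filesystem I/Os:
vmstat -Iw 5 6
```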
I hope this helps.
bakunin