I have no idea why harmad has grown to 250MB. On the other hand, 250MB is not that big compared to your machine's total memory. I'd watch it closely but wouldn't be too concerned for the moment.
Your layout is somewhat astonishing insofar as the system has relatively many CPUs compared to the size of its main memory. As a rule of thumb, a modern processor core can efficiently work on programs fitting in roughly 1-1.5GB of memory. Of course this says nothing definite about your system; rules of thumb don't necessarily cover a specific case.
Looking at your vmstat output, your system is close to being memory-bound. The "avm" and "fre" columns are in memory pages, and "fre" shows roughly 160MB of memory to be free - not too much, considering the overall size of the system. For further investigation issue
(only as root) and look at the results (compare with this thread); maybe there is a memory shortage on your system.
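The command itself was lost from this copy of the post; the usual AIX report of global memory usage, and presumably what was meant here, is `svmon -G`:

```shell
# Assumption: the elided root-only command is the AIX global memory
# report. Figures are in 4KB memory pages unless a unit is requested.
svmon -G
```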
Further places to investigate would be the tuning of the system: analyse (and/or post) the output of
to find out which tuning parameters are in effect. The values of the tunables are also stored in the files "/etc/tunables/lastboot" and "/etc/tunables/nextboot". Another place to investigate is
which might tell you about I/O-problems (see # of "filesystem I/Os blocked with no fsbuf").
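The two commands referenced in the last two sentences were likewise dropped from this copy. Judging from the context (VMM tunables, fsbuf counters), they were most likely `vmo` and `vmstat -v`; this is a hedged reconstruction, not a quote from the original post:

```shell
# Assumption: the tuning command meant above is vmo (AIX 5.3 and later;
# older releases used vmtune instead).
vmo -a               # list all VMM tunables currently in effect
vmo -L minperm%      # -L also shows defaults, ranges and reboot needs

# The "filesystem I/Os blocked with no fsbuf" counter mentioned above
# appears in the extended vmstat statistics:
vmstat -v
```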
Also something to worry about is the blocked queue (column "b"), which is non-zero. This correlates with some (light) paging activity and some I/O wait (rightmost column "wa"). Generally, this column being non-zero means some process could run but has to wait for some resource, usually for memory to be freed.
As your "id" column is most times quite high you seem to have no CPU problems at all and your "iostat" output shows relatively low I/O bandwidth. I'd not aggregate over adapters but look at the disk statistics instead, you might want to look at the output of
to identify possible hotspots. On the other hand, the data you presented don't suggest any I/O problems at all.
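Again, the concrete command did not survive; a plausible guess, given that per-disk rather than per-adapter statistics are wanted, is iostat run against the individual hdisks:

```shell
# Assumption: per-disk statistics rather than adapter aggregates.
iostat -d hdisk0 hdisk1 5 10   # named disks, 5s interval, 10 samples
iostat -D 5 3                  # extended per-disk view (AIX 5.3+)
```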
Another point is shared memory: "ps -o vsz" will tell you only about memory which belongs to one process, but will neglect shared memory. You might want to issue a
and investigate possible shared memory pools which might consume large amounts of memory.
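The elided command here is almost certainly `ipcs`, the standard tool for listing System V IPC resources, restricted to shared memory segments:

```shell
# Assumption: the command meant above is ipcs; -m limits the listing to
# shared memory segments, -b adds the segment sizes.
ipcs -mb
```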