09-08-2009
Your server is part of an HACMP cluster that uses heartbeats to detect that all nodes are alive. The error message tells you that one of those heartbeats is not working properly because of high (CPU) load. You said that you use 70% of the server's paging space. If this paging space is on rootvg and the rootvg disks are 100% busy during paging activity, that is the likely cause for heartbeat packets getting lost. So the second relevant part of the error message is the hint about excessive disk I/O, which is caused by high memory contention
You don't need to worry as long as at least one heartbeat keeps on working. However, make sure that the paging space is not being used frequently! Your DB has grown over time and you may need to get more RAM for all the cluster nodes
10 More Discussions You Might Find Interesting
1. AIX
Hi, I'm a new member here.
My company just bought p570 with 8 LPAR (previously we have p650 with 4 LPAR).
Did anyone have procedure how to setup NIM server (NIM LPAR) and how to install other new LPAR to use the NIM server (as client).
Appreciate your help and thank you very much.
Rgds,
David (0 Replies)
Discussion started by: dshg
2. AIX
Hi-
I'm using NIM functionality (AIX 5.3) to back up all AIX servers, but some servers are in the DMZ and many TCP/IP ports (nfs, ping, etc...) would have to be open and... it's really a security risk!
Has anyone experience with NIM backups through a firewall?
Which TCP/IP ports should be open?
Could we... (1 Reply)
Discussion started by: nymus7
3. AIX
I am trying to migrate a NIM server from one machine to another. My plan is to:
do a mksysb on the NIM server
restore the NIM server's mksysb to a client through a NIM installation
shut down the NIM server
start the newly installed client as the NIM server
Has anyone done this before? Can anyone give me some suggestions? (1 Reply)
Discussion started by: yanzhang
4. AIX
Can anyone help.....
How do I do a migration through a NIM server? (4 Replies)
Discussion started by: AIXlearner
5. AIX
Hi...
I need help configuring a NIM client on a NIM server.
Can I define an AIX 5.3.4.0 client on an AIX 5.3.7.0 NIM server? While configuring the NIM client I get a message that the images are not the same. I need to confirm whether both versions have to be the same or not.
Thanks.. (5 Replies)
Discussion started by: sumathi.k
6. AIX
Hi All,
Please excuse the possibly naive question but I'm trying to clone/install a new AIX 5.3 LPAR on a p570 from a mksysb image file using nim. Has anyone done this before and if so, what would the exact command look like?
Does it even remotely resemble something like
nim -o... (1 Reply)
Discussion started by: combustables
7. AIX
Guys,
We are planning to upgrade one of our NIM servers from AIX 5.3 to 6.1...
Since the server itself is a NIM server we can't perform the upgrade via NIM, so I'm choosing to do a CD install. The install method would be an upgrade installation.
Is there anything special that I need to consider before... (5 Replies)
Discussion started by: kkeng808
8. AIX
Could you please let me know if it is possible to have the NIM server running on one volume group and other applications like Oracle running on another volume group? Do we need a dedicated server just for the AIX NIM server? I am new to AIX and am planning to install a NIM server on a test server. which... (3 Replies)
Discussion started by: saikiran_1984
9. AIX
Hello all,
I have installed AIX 6.2 and installed Sysback 6.1 over NIM, configured it via NIM and the Sysback smitty menu (with SPOT and lpp_source created), and made the TSM configuration for that. I take the image backup (installation image) successfully, but when I want to restore this image, the boot cycle... (5 Replies)
Discussion started by: nancy_ghawanmeh
10. AIX
Using nimadm:
nimadm -j nimadmvg -c sap024 -s spot_6100 -l lpp_6100 -d "hdisk1" -Y
Initializing the NIM master.
Initializing NIM client sap024.
0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT
spot_6100 (6.1.3.4) does not match the NIM master's level (7.1.1.2).... (2 Replies)
Discussion started by: sciacca75
LEARN ABOUT REDHAT
mlockall
MLOCKALL(2) Linux Programmer's Manual MLOCKALL(2)
NAME
mlockall - disable paging for calling process
SYNOPSIS
#include <sys/mman.h>
int mlockall(int flags);
DESCRIPTION
mlockall disables paging for all pages mapped into the address space of the calling process. This includes the pages of the code, data and
stack segment, as well as shared libraries, user space kernel data, shared memory and memory mapped files. All mapped pages are guaranteed
to be resident in RAM when the mlockall system call returns successfully and they are guaranteed to stay in RAM until the pages are
unlocked again by munlock or munlockall or until the process terminates or starts another program with exec. Child processes do not
inherit page locks across a fork.
Memory locking has two main applications: real-time algorithms and high-security data processing. Real-time applications require determin-
istic timing, and, like scheduling, paging is one major cause of unexpected program execution delays. Real-time applications will usually
also switch to a real-time scheduler with sched_setscheduler. Cryptographic security software often handles critical bytes like passwords
or secret keys as data structures. As a result of paging, these secrets could be transferred onto a persistent swap store medium, where they
might be accessible to the enemy long after the security software has erased the secrets in RAM and terminated. For security applications,
only small parts of memory have to be locked, for which mlock is available.
The flags parameter can be constructed from the bitwise OR of the following constants:
MCL_CURRENT Lock all pages which are currently mapped into the address space of the process.
MCL_FUTURE Lock all pages which will become mapped into the address space of the process in the future. These could be for instance new
pages required by a growing heap and stack as well as new memory mapped files or shared memory regions.
If MCL_FUTURE has been specified and the number of locked pages exceeds the upper limit of allowed locked pages, then the system call which
caused the new mapping will fail with ENOMEM. If these new pages have been mapped by the growing stack, then the kernel will deny
stack expansion and send a SIGSEGV.
Real-time processes should reserve enough locked stack pages before entering the time-critical section, so that no page fault can be caused
by function calls. This can be achieved by calling a function which has a sufficiently large automatic variable and which writes to the
memory occupied by this large array in order to touch these stack pages. This way, enough pages will be mapped for the stack and can be
locked into RAM. The dummy writes ensure that not even copy-on-write page faults can occur in the critical section.
Memory locks do not stack, i.e., pages which have been locked several times by calls to mlockall or mlock will be unlocked by a single call
to munlockall. Pages which are mapped to several locations or by several processes stay locked into RAM as long as they are locked at
least at one location or by at least one process.
On POSIX systems on which mlockall and munlockall are available, _POSIX_MEMLOCK is defined in <unistd.h>.
RETURN VALUE
On success, mlockall returns zero. On error, -1 is returned, errno is set appropriately.
ERRORS
ENOMEM The process tried to exceed the maximum number of allowed locked pages.
EPERM The calling process does not have appropriate privileges. Only root processes are allowed to lock pages.
EINVAL Unknown flags were specified.
CONFORMING TO
POSIX.1b, SVr4. SVr4 documents an additional EAGAIN error code.
SEE ALSO
munlockall(2), mlock(2), munlock(2)
Linux 1.3.43 1995-11-26 MLOCKALL(2)