Hi friends,
I know that when the system crashes, the memory image is
put into /var/adm/crash.
But if the system hangs and I have access to the console of
that machine, how can I take the crash dump manually?
Thanks. (2 Replies)
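On AIX there are commands for forcing a dump by hand; a hedged sketch of the usual sequence (verify the behavior against your AIX level before relying on it):

```shell
sysdumpdev -l     # list the primary/secondary dump devices and the copy directory
sysdumpstart -p   # force a system dump to the primary dump device (the system halts)
# After the subsequent reboot, the dump image is copied to the configured
# copy directory, where it can be examined.
```

If the machine is too far gone to accept commands, the console-level dump/reset facility for your hardware is the fallback.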
Hi Guys,
I have two nodes clustered. Each node runs AIX 5.2, and they are clustered with HACMP 5.2. The cluster mode is Active/Passive, which means one node is the active node and holds all resource groups, while the 2nd node is standby.
Last Monday I noted that all resource groups have been... (2 Replies)
Hi.
I have started heartbeat on two Red Hat servers, using eth0.
Before I start heartbeat, the two servers can ping each other.
Once I start heartbeat, both servers become active, as each logs warnings that the other node is dead.
They also can no longer ping each other. After stopping... (1 Reply)
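When both heartbeat nodes declare each other dead, the usual suspects are the interface and broadcast settings in ha.cf or a firewall blocking heartbeat's UDP port (694 by default). A minimal ha.cf sketch for a two-node eth0 broadcast setup (node names are placeholders):

```
# /etc/ha.d/ha.cf -- illustrative two-node broadcast configuration
keepalive 2          # seconds between heartbeats
warntime 10          # late-heartbeat warning
deadtime 30          # declare the peer dead after 30s of silence
initdead 120         # extra grace period at startup
udpport 694          # must not be blocked by iptables on either node
bcast eth0           # heartbeat medium
auto_failback on
node node1.example.com   # must match `uname -n` on each host
node node2.example.com
```

The `node` entries must match `uname -n` exactly on each host, or the peers will never recognize each other's heartbeats.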
Hi Guys,
I have to design a multi-node HACMP cluster and am not sure whether the design I have in mind makes sense.
I have to make an environment that currently resides on 5 nodes more resilient, but I have the constraint of only having 4 frames. In addition, the business doesn't want to pay for... (7 Replies)
Hi
I had an active/passive cluster. Node A went down and all resource groups moved to Node B.
Now we have brought Node A back up. What is the procedure to bring everything back to Node A?
Node A #lssrc -a | grep cl
clcomdES clcomdES 323782 active
clstrmgrES cluster... (9 Replies)
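Once the cluster is healthy again, resource groups can be moved back through smitty (HACMP Resource Group and Application Management) or on the command line; a hedged sketch with placeholder names (check the exact path and flags on your HACMP level):

```shell
# Move resource group RG_NAME back online on nodeA (both names are placeholders)
/usr/es/sbin/cluster/utilities/clRGmove -g RG_NAME -n nodeA -m
# Then verify the resource group placement
/usr/es/sbin/cluster/utilities/clRGinfo
```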
Hi Experts,
I have configured an HP-UX Serviceguard cluster, and it dumps a crash every time I reboot a cluster node. Can anyone please help me prevent these unnecessary crash dumps when rebooting an SG cluster node?
Thanks in advance.
Vaishey (2 Replies)
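Serviceguard deliberately crashes (TOCs) a node whose cluster daemon disappears without a clean halt, so the usual fix is to stop cluster services before rebooting; a hedged sketch:

```shell
cmhaltnode -f $(hostname)   # halt cluster services on this node; -f halts/moves packages first
shutdown -r now             # the reboot is then not treated as a node failure
```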
MacPro (2013) 12-Core, 64GB RAM (today's crash):
panic(cpu 2 caller 0xffffff7f8b333ad5): userspace watchdog timeout: no successful checkins from com.apple.WindowServer in 120 seconds
service: com.apple.logd, total successful checkins since load (318824 seconds ago): 31883, last successful... (3 Replies)
LEARN ABOUT OSF1
filterlog
filterlog(8)                 System Manager's Manual                filterlog(8)

NAME
filterlog - Logs and reports system Correctable Read Data (CRD) memory errors on specific systems.
SYNOPSIS
/usr/sbin/filterlog [-l] [-d crdlog] [-d crdlifetime] [-s crdlength #] [-s crdcount #] [-h]
OPTIONS
-l              Logs or filters an entry from stdin. This option is used by the binary event-log daemon, binlogd.
-d crdlog       Dumps the contents of the CRD log file in text format. Errors shown in the file are those recorded by the system since the last boot.
-d crdlifetime  Dumps the CRD lifetime log information in text format.
-s crdlength #  Used to set the CRD interval time in minutes. The default is 24 hours.
-s crdcount #   Used to set the CRD interval count. The default is 50.
-h              Prints a list of the command options defined in this reference page.
DESCRIPTION
This utility ensures that only genuine memory errors are reported to system log files on certain system types. The filterlog command is
called directly by the binary event-log daemon, binlogd, to filter system CRD (memory hardware) errors. CRD errors are logged according to
user-definable parameters. If it is determined that a genuine memory problem exists, an entry is passed to binlogd. The error is then
written to the system's error log file, and the user can be notified through DECevent.
The filterlog utility uses two variables to determine if a CRD error signifies a genuine memory hardware problem -- crdlength and crdcount.
If crdcount number of errors occur during crdlength minutes, an entry is passed to binlogd. The default settings are 50 errors in 24 hours
(1440 minutes).
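The crdlength/crdcount rule amounts to a simple windowed count: report only if enough errors land inside the window. A minimal shell sketch of the idea (timestamps and thresholds are invented for illustration, not real CRD log data):

```shell
# Illustrative model of filterlog's thresholding rule.
crdlength=1440   # window in minutes (default: 24 hours)
crdcount=50      # error-count threshold (default: 50)

now=100000                    # pretend "current time", in minutes
errors="98600 98700 99900"    # pretend CRD error timestamps, in minutes

in_window=0
for t in $errors; do
  # count only errors that fall inside the last $crdlength minutes
  if [ $((now - t)) -le "$crdlength" ]; then
    in_window=$((in_window + 1))
  fi
done

if [ "$in_window" -ge "$crdcount" ]; then
  echo "genuine memory problem: pass entry to binlogd"
else
  echo "below threshold: suppress"
fi
```

With these made-up numbers all three errors fall inside the window, but 3 is well under the default threshold of 50, so the entry is suppressed.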
The filterlog utility reads the /etc/binlog.conf file to determine what log file should be used to record any events. The default log file
is /usr/adm/binary.crdlog.
SEE ALSO
Commands: binlogd(8), dia(8) (DECevent)
filterlog(8)