08-10-2008
Quote:
Originally Posted by
jlliagre
I'm afraid this statement is misleading.
Solaris still requires a single dump slice large enough for kernel crash dump data to be recorded, and there is not much point in not using a swap area for it.
Even though this data has been compressed since Solaris 8, I wouldn't recommend using less than the RAM size as the swap size, as that is the only way to guarantee a crash dump will fit.
Most folks seem to say that 4GB is large enough for any Solaris core dump. In addition, you can specify to limit the size of the core dump.
Since I use Linux, I don't recall ever needing a core dump, and I normally find them a waste of disk space on most systems. Better to have a kernel that does not dump cores than to have massive core dumps.
In other words, I don't think that core dumps should be much of a factor in thinking about swap.
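For what it's worth, Solaris lets you inspect and limit what goes into a crash dump with dumpadm, which directly affects how big a dump slice you need. A sketch, assuming root on Solaris 10 (the device name is a placeholder):

```shell
# Show the current crash-dump configuration (dump device, savecore directory)
dumpadm

# Dump kernel pages only, which is usually far smaller than a
# full-memory dump
dumpadm -c kernel

# Use a dedicated slice instead of the swap device
# (c0t0d0s4 is an illustrative device name)
dumpadm -d /dev/dsk/c0t0d0s4
```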
10 More Discussions You Might Find Interesting
1. Programming
#include <stdio.h>

void main()
{
    int Index = 1;
    char *Type = NULL;
    Type = (char *)Index;
    printf("%s", Type);
}
Getting coredump (5 Replies)
Discussion started by: vijaysabari
2. Solaris
We have SunOS running on SPARC:
SunOS ciniwnpr67 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Fire-V440
Having Physical RAM :
Sol10box # prtconf | grep Mem
Memory size: 8192 Megabytes
My Top Output is :
130 processes: 129 sleeping, 1 on cpu
CPU states: 98.8% idle, 0.2% user, 1.0%... (27 Replies)
Discussion started by: rajwinder
3. AIX
Hi,
I am using zerofault in AIX to find memory leaks for my server.
zf -c <forked-server>
zf -l 30 <server> <arguments>
Then after some time (about 5 mins) it terminates with a core dump, saying the server exited abnormally.
I could not understand the core file generated; it is something like shown below... (0 Replies)
Discussion started by: vivek.gkp
4. Solaris
Hi all
Got myself in a pickle here, chasing my own tail, and am confused. I'm trying to work out memory / swap on my Solaris 10 server, which I'm running zones on.
Server A has 32Gb of raw memory, ZFS across the root /mirror drives.
# prtdiag -v | grep mem = Memory size: 32768 Megabytes
#... (1 Reply)
Discussion started by: sbk1972
5. Solaris
We have a SPARC system running Solaris 9 with 16 GB of physical memory. We have allocated 32 GB of swap space (2 times physical memory). But when we use the df -h command, it shows the following output, and the swap size appears as more than our allocated space
# df -h
Filesystem size used... (2 Replies)
Discussion started by: cyberdemon
6. Shell Programming and Scripting
In order to find the user memory consumption I used the command: prstat -s cpu -a -n 10
But now I want to automate it and write the output to a file.
How can I write only the user name and percentage of consumption to an output file? (2 Replies)
Discussion started by: engineer
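One way to automate this is to take a single prstat sample and pull the per-user summary fields out with awk. A sketch, assuming Solaris prstat's -a output format (the log path and field positions are assumptions):

```shell
#!/bin/sh
# Append user name and CPU% from prstat's per-user summary to a log file.
# The trailing "1 1" makes prstat take one sample and exit instead of looping.
OUT=/var/tmp/cpu_by_user.log   # illustrative path

prstat -s cpu -a -n 10 1 1 |
    awk 'seen && /^ *[0-9]/ { print $2, $NF }   # username, CPU%
         /NPROC/            { seen = 1 }        # per-user section starts here
        ' >> "$OUT"
```

Running it from cron at a fixed interval would give a rolling record of per-user CPU consumption.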
7. Solaris
hi friends, we are relocating our DC and need to plan out electrical power for the new DC.
Are there ways I could find the actual power consumption of my current servers, instead of relying on the product specs? (2 Replies)
Discussion started by: Exposure
8. Solaris
Hi Experts,
I have an M4000 server with 132 GB of physical memory. 4 sparse zones are running under this server, running multiple applications. I am not getting any pointer as to where swap space is getting consumed. Almost 97% of swap space is being used. I checked all /tmp (of zones as well),... (7 Replies)
Discussion started by: solaris_1977
9. Solaris
I have a customer that is getting grid alerts that swap is over 95% utilized. When I do swap -l on the machine I get the following results.
$ swap -l
swapfile dev swaplo blocks free
/swap/swapfile - 16 6291440 6291440
/swap/swapfile2 - 16 8191984... (18 Replies)
Discussion started by: Michael.McGraw
10. Solaris
Hi all,
Q1) Due to an application requirement, I need more swap space.
Currently my swap is on a 32GB partition.
I have another partition with 100GB, but it already has a UFS filesystem on it.
Can I just swap -d /dev/dsk/current32gb and swap -a /dev/dsk/ufs100gb ?
Will... (17 Replies)
Discussion started by: javanoob
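The usual answer to a question like that is to add the new device before deleting the old one, so swap never drops below what the system needs. A sketch, reusing the poster's placeholder device names and assuming the UFS data on the 100GB slice is expendable (swapping to it destroys the filesystem):

```shell
# The 100GB slice must not be mounted before it becomes swap.
umount /dev/dsk/ufs100gb

# Add the new swap device first, so total swap never shrinks mid-change...
swap -a /dev/dsk/ufs100gb

# ...then delete the old one.
swap -d /dev/dsk/current32gb

# Verify, and remember to update the swap entry in /etc/vfstab so the
# change survives a reboot.
swap -l
```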
LEARN ABOUT FREEBSD
crashinfo
CRASHINFO(8) BSD System Manager's Manual CRASHINFO(8)
NAME
crashinfo -- analyze a core dump of the operating system
SYNOPSIS
crashinfo [-d crashdir] [-n dumpnr] [-k kernel] [core]
DESCRIPTION
The crashinfo utility analyzes a core dump saved by savecore(8). It generates a text file containing the analysis in the same directory as
the core dump. For a given core dump file named vmcore.XX the generated text file will be named core.txt.XX.
By default, crashinfo analyzes the most recent core dump in the core dump directory. A specific core dump may be specified via either the
core or dumpnr arguments. Once crashinfo has located a core dump, it analyzes the core dump to determine the exact version of the kernel
that generated the core. It then looks for a matching kernel file under each of the subdirectories in /boot. The location of the kernel
file can also be explicitly provided via the kernel argument.
Once crashinfo has located a core dump and kernel, it uses several utilities to analyze the core including dmesg(8), fstat(1), iostat(8),
ipcs(1), kgdb(1), netstat(1), nfsstat(1), ps(1), pstat(8), and vmstat(8).
The options are as follows:
-d crashdir
Specify an alternate core dump directory. The default crash dump directory is /var/crash.
-n dumpnr
Use the core dump saved in vmcore.dumpnr instead of the latest core in the core dump directory.
-k kernel
Specify an explicit kernel file.
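A brief usage sketch (the dump number and kernel path are illustrative, not from the manual):

```shell
# Analyze the most recent crash dump in the default directory (/var/crash):
crashinfo

# Analyze dump number 2, naming the kernel file explicitly; the analysis
# is written alongside the dump as /var/crash/core.txt.2:
crashinfo -n 2 -k /boot/kernel.old/kernel
```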
SEE ALSO
textdump(4), savecore(8)
HISTORY
The crashinfo utility appeared in FreeBSD 6.4.
BSD
June 28, 2008 BSD