07-07-2003
Abort core dumped!!!!
Hi All,
I am working on Solaris 8. I have an application running on one partition (the installation was done there, i.e. /export/home), and its output goes to a partition on another disk attached to the same machine. After a certain period of time I get the error "Abort core dumped". When I examined the core file, the message read "swapfile only valid if -k".
What could be the problem? Could it be that the partition on which the files are being created is not mounted with the proper options? (Earlier I ran the application with the same amount of data, but then the output went to the same partition; I had also unmounted the partition that now stores the output and mounted it back again.)
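For anyone trying to narrow this down: a minimal sketch, assuming a POSIX shell, that reproduces an "Abort - core dumped" on purpose so the resulting core can be compared with the application's. The Solaris-only inspection commands are shown as comments, since they are not available everywhere.

```shell
# Deliberately trigger an abort-style core and confirm the exit status.
ulimit -c unlimited               # allow a core file to be written
sh -c 'kill -ABRT $$' 2>/dev/null # deliver SIGABRT, as abort(3C) would
echo "exit status: $?"            # 128 + 6 (SIGABRT) = 134

# Then, on Solaris, inspect the core itself:
#   file core      - names the binary that dumped the core
#   pstack core    - prints the stack trace of the faulting thread
```

A process killed by SIGABRT always exits with status 134, so that status in the application's logs is another sign the error came from abort() rather than from the filesystem layer.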
Thanks in advance
DUMP(1M)
NAME
dump - incremental file system dump
SYNOPSIS
dump [ key [ argument ... ] filesystem ]
DESCRIPTION
Dump copies to magnetic tape all files changed after a certain date in the filesystem. The key specifies the date and other options about
the dump. Key consists of characters from the set 0123456789fusd.
f Place the dump on the next argument file instead of the tape.
u If the dump completes successfully, write the date of the beginning of the dump on file `/etc/ddate'. This file records a separate
date for each filesystem and each dump level.
0-9 This number is the `dump level'. All files modified since the last date stored in the file `/etc/ddate' for the same filesystem at
lesser levels will be dumped. If no date is determined by the level, the beginning of time is assumed; thus the option 0 causes the
entire filesystem to be dumped.
s The size of the dump tape is specified in feet. The number of feet is taken from the next argument. When the specified size is
reached, the dump will wait for reels to be changed. The default size is 2300 feet.
d The density of the tape, expressed in BPI, is taken from the next argument. This is used in calculating the amount of tape used per
write. The default is 1600.
If no arguments are given, the key is assumed to be 9u and a default file system is dumped to the default tape.
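The defaults above can be sanity-checked with a little arithmetic: a full tape holds about 78000 512-byte blocks, blocked 20, at 1600 BPI. A quick sketch in shell arithmetic (the 0.75-inch inter-record gap is our assumption, typical for 1600 BPI tape, not a figure from this manual):

```shell
# Inches of recorded data: 78000 blocks * 512 bytes / 1600 bytes-per-inch
data_in=$((78000 * 512 / 1600))   # 24960 inches
# Inter-record gaps: 78000 / 20 = 3900 records, 0.75 inch each
gaps_in=$((78000 / 20 * 3 / 4))   # 2925 inches
echo "$(( (data_in + gaps_in) / 12 )) feet"
```

This comes out at 2323 feet, agreeing well with the 2300-foot default for the s key.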
Now a short suggestion on how to perform dumps. Start with a full level 0 dump
dump 0u
Next, periodic level 9 dumps should be made on an exponential progression of tapes. (Sometimes called Tower of Hanoi - 1 2 1 3 1 2 1 4 ...
tape 1 used every other time, tape 2 used every fourth, tape 3 used every eighth, etc.)
dump 9u
When the level 9 incremental approaches a full tape (about 78000 blocks at 1600 BPI blocked 20), a level 1 dump should be made.
dump 1u
After this, the exponential series should progress as if uninterrupted. These level 9 dumps are based on the level 1 dump, which is based on
the level 0 full dump. This progression of dump levels can be carried as far as desired.
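The tape rotation described above (tape 1 every other time, tape 2 every fourth, tape 3 every eighth, ...) means the tape for the n-th level 9 dump is one plus the number of times n is divisible by 2. A small sketch, assuming a POSIX shell (the function name `tape` is ours, for illustration):

```shell
# Tape number for the n-th level 9 dump in the Tower of Hanoi rotation:
# count how many times n halves evenly, then add one.
tape() {
  n=$1; t=1
  while [ $((n % 2)) -eq 0 ]; do
    n=$((n / 2)); t=$((t + 1))
  done
  echo "$t"
}

for i in 1 2 3 4 5 6 7 8; do printf '%s ' "$(tape $i)"; done; echo
# prints: 1 2 1 3 1 2 1 4
```

Each tape is thus reused half as often as the one before it, so older tapes accumulate progressively older increments.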
FILES
default filesystem and tape vary with installation.
/etc/ddate: record dump dates of filesystem/level.
SEE ALSO
restor(1), dump(5), dumpdir(1)
DIAGNOSTICS
If the dump requires more than one tape, it will ask you to change tapes. Reply with a new-line when this has been done.
BUGS
Sizes are based on 1600 BPI blocked tape. The raw magtape device has to be used to approach these densities. Read errors on the filesystem are ignored. Write errors on the magtape are usually fatal.