Unable to dump due to limited space?
Post 100061 by RTM on Thursday 23rd of February 2006 10:35:27 AM
Please post the version of the OS.

Your "Cannot dump 524190 pages to dumpdev hd (1/41)" errror is just lack of space to create the dump - only 128000 was available - that isn't your problem with your system.

Your problem is associated with this error:
Panic: HTFS: Bad directory ino 2 (offset 0) on HTFS hd (1/169)

That is what caused your system to crash (and attempt to create a crash dump).

This info on booting single user and filesystem checks may help.
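
A minimal sketch of that procedure, assuming SCO OpenServer 5; the device names below are placeholders, and the -ofull option (a full HTFS check) should be verified against your fsck(ADM) man page:

    # Boot to single-user (maintenance) mode: at the Boot: prompt press
    # Enter, then give the root password instead of Ctrl-D when asked.

    # Find the device node that matches the major/minor pair from the
    # panic message (1/169 in this case):
    ls -l /dev | grep '1, 169'

    # Check the damaged filesystem while it is unmounted.
    # /dev/u is a placeholder; substitute the node found above.
    umount /dev/u
    fsck -ofull /dev/u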
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Unable to catch the output after core dump and bus error

I have a weird situation in which the binary dumps core and gives a bus error. But before dumping core and throwing the bus error, it gives some output. Unfortunately I can't grep the output before the core dump: db2bfd -b test.bnd maxSect 15 Bus Error (core dumped) But if I do ... (4 Replies)
Discussion started by: rakeshou
4 Replies
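
One way to keep the text that appears before the crash is to capture the whole session rather than grep the terminal; a sketch (filenames are examples, and stdio buffering can swallow redirected output, which is why script(1) is shown as the fallback):

    # Simple redirect: stdout and stderr go to a file you can grep later.
    db2bfd -b test.bnd > db2bfd.out 2>&1
    grep maxSect db2bfd.out

    # If the redirected output is lost because the program buffers it,
    # record the whole terminal session with script(1) instead:
    script db2bfd.out
    db2bfd -b test.bnd
    exit
    grep maxSect db2bfd.out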

2. HP-UX

HPVM Unable to create more guests due to lack of RAM

Hi All, There are a few threads regarding this subject of being unable to create more guests due to lack of RAM, so I am aware how the sum works: add 8.5% to whatever is allocated, be that the host or guest. But I'm not sure if I have a hardware issue with memory or whether what I see is correct, as I am... (3 Replies)
Discussion started by: EricF
3 Replies

3. Solaris

Solaris file system unable to use available space

Hi, The Solaris filesystem /u01 shows available space as 100 GB, and used space as 6 GB. The problem is that when I am trying to install some software or copy some files into this filesystem /u01, I am unable to copy or install due to lack of space. Of course the software... (31 Replies)
Discussion started by: iris1
31 Replies
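
When df shows free blocks but writes still fail, exhausted inodes are a common cause; a hedged check for Solaris UFS (the /u01 mount point is taken from the post):

    # Block usage:
    df -k /u01
    # Inode usage (UFS-specific option on Solaris):
    df -o i /u01
    # If the inode columns show (almost) nothing free, the filesystem
    # is out of inodes even though free blocks remain.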

4. Red Hat

Unable to free space due to inode in use by database

Hi, I am having a similar issue where the filesystem shows 100% full even after deleting the files. I understood the issue after going through this chain, but I cannot restart the processes since they belong to an Oracle database. Is there a way, such as mounting the filesystem with specific options, to avoid this issue? How... (0 Replies)
Discussion started by: prashant185
0 Replies
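
Space held by files that were deleted while still open can often be identified, and sometimes reclaimed, without restarting the database; a sketch assuming lsof is available on the Red Hat box (the PID and fd number are examples):

    # Open files whose last directory entry has been removed:
    lsof +L1 | grep '(deleted)'

    # For a file such as a log that the process no longer needs, the
    # blocks can be released by truncating it through /proc.
    # 1234 = PID and 7 = fd number reported by lsof (examples only;
    # never do this to an Oracle datafile that is still in use):
    : > /proc/1234/fd/7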

5. OS X (Apple)

Compiling fails due to space in path to home folder

I seem to have issues compiling software and I think I've narrowed it down to something having to do with having a space in the path name to my Home folder (which contains "Macintosh HD"). The reason I think this is shown here: $ echo $HOME /Volumes/Macintosh HD/Users/Tom $ cd $HOME -sh:... (7 Replies)
Discussion started by: tdgrant1
7 Replies
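
The usual fix is to quote the expansion so the space in "Macintosh HD" is not split into two words; a small illustration:

    # Unquoted, the shell splits the path at the space and cd gets
    # two arguments:
    cd $HOME

    # Quoted, the whole path is passed as a single argument:
    cd "$HOME"

Build systems are another matter: many makefiles and configure scripts do not quote paths internally, so compiling under a directory whose path contains a space can fail even when your own commands are quoted correctly.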

6. Shell Programming and Scripting

Unable to read the first space of a record in while loop

I have a loop like while read i do echo "$i" . . . done < tms.txt The tms.txt contains data like 2008-02-03 00:00:00 <space>00:00:00 . . . 2010-02-03 10:54:32 (2 Replies)
Discussion started by: machomaddy
2 Replies
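
read strips leading IFS whitespace by default; clearing IFS just for the read (and adding -r so backslashes are not interpreted) preserves the leading spaces. A small sketch using the loop from the post:

    # Keep leading whitespace and literal backslashes in each record:
    while IFS= read -r i
    do
        printf '%s\n' "$i"
    done < tms.txt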

7. Red Hat

Unable to copy files due to many files in directory

I have a directory that has billions of files inside it. I tried to copy some files for a specific date, but it never responds for a long time and does not give any result. I tried everything with the find command and also with xargs; even this command find . -mtime -2 -print | xargs ls -d did not... (2 Replies)
Discussion started by: before4
2 Replies
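
With that many entries, anything that expands or lists the whole directory (shell globs, plain ls, or xargs feeding ls) tends to stall; letting find hand the matching files straight to cp avoids building the full list. A sketch assuming GNU find and cp on Red Hat (the destination directory is an example):

    # Copy files modified in the last 2 days without listing the
    # whole directory first:
    find . -maxdepth 1 -type f -mtime -2 -exec cp -t /data/copy-dest {} +

    # Portable fallback if cp -t is not available (slower, one cp per file):
    find . -maxdepth 1 -type f -mtime -2 -exec cp {} /data/copy-dest \;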

8. HP-UX

Unable to create a tar file due to link

Hi, I am trying to tar a directory structure but am unable to due to a symbolic link. Please help. indomt@behpux $ tar -cvf test.tar /home/indomt a /home/indomt symbolic link to /dxdv/03/ap1dm1 Thanks (1 Reply)
Discussion started by: nag_sathi
1 Replies
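
tar is archiving the symbolic link itself rather than the tree it points to. Two hedged options, depending on what is wanted (the h key letter follows symlinks on many System V tars, including HP-UX's, but check tar(1) on your system):

    # Follow the symbolic link and archive the files behind it:
    tar -cvhf test.tar /home/indomt

    # Or simply archive the real target directory:
    tar -cvf test.tar /dxdv/03/ap1dm1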

9. HP-UX

Unable to get full FS space after mounting

Hi, I am unable to get the full FS space, as /home is 100% utilized and after deleting unwanted files, it's still 100%. After checking the du -sk * | sort -n output and converting it to MB, the total size comes out to be only 351 MB, however the lvol is 3 GB. I don't know where all the space... (2 Replies)
Discussion started by: Kits
2 Replies
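
A gap between what du adds up and what df reports is usually space held by files that were deleted while a process still had them open; it is not freed until the process closes them. A sketch for HP-UX (lsof shown only if it happens to be installed):

    # Processes with files open on /home; stopping or restarting the
    # ones holding large deleted files returns the space:
    fuser -cu /home

    # If lsof is installed, it lists exactly which files each process
    # holds open on that filesystem:
    lsof /home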

10. Ubuntu

Unable to add space

My / (root) directory is on filesystem /dev/sda1 with 19G of space. I want to add some more space to the /home directory but am unable to; while running the command below I get the following message: $ sudo mkfs -t ext4 /dev/sda2 mke2fs 1.42.9 (4-Feb-2014) mkfs.ext4: inode_size (128) * inodes_count (0) too... (4 Replies)
Discussion started by: megh
4 Replies
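
mkfs reporting inodes_count (0) usually means the target block device has zero size, i.e. /dev/sda2 has not actually been created as a partition yet. A hedged sketch of the usual sequence (destructive if you pick the wrong device, so double-check names; moving /home also needs the data copied and /etc/fstab updated afterwards):

    # 1. See whether sda2 exists and how big it is:
    lsblk /dev/sda          # or: sudo fdisk -l /dev/sda

    # 2. If it does not exist, create it from free space first:
    sudo fdisk /dev/sda     # n = new partition, w = write table

    # 3. Only then make the filesystem and mount it:
    sudo mkfs -t ext4 /dev/sda2
    sudo mount /dev/sda2 /mnt
    # ...copy /home across, add an /etc/fstab entry, remount as /home.
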
savecore(1M)						  System Administration Commands					      savecore(1M)

NAME
     savecore - save a crash dump of the operating system

SYNOPSIS
     /usr/bin/savecore [-Lvd] [-f dumpfile] [directory]

DESCRIPTION
     The savecore utility saves a crash dump of the kernel (assuming that one was made) and writes a reboot message in the shutdown log. It is invoked by the dumpadm service each time the system boots.

     savecore saves the crash dump data in the file directory/vmcore.n and the kernel's namelist in directory/unix.n. The trailing .n in the pathnames is replaced by a number which grows every time savecore is run in that directory.

     Before writing out a crash dump, savecore reads a number from the file directory/minfree. This is the minimum number of kilobytes that must remain free on the file system containing directory. If, after saving the crash dump, the file system containing directory would have less free space than the number of kilobytes specified in minfree, the crash dump is not saved. If the minfree file does not exist, savecore assumes a minfree value of 1 megabyte.

     The savecore utility also logs a reboot message using facility LOG_AUTH (see syslog(3C)). If the system crashed as a result of a panic, savecore logs the panic string too.

OPTIONS
     The following options are supported:

     -d            Disregard dump header valid flag. Force savecore to attempt to save a crash dump even if the header information stored on the dump device indicates the dump has already been saved.

     -f dumpfile   Attempt to save a crash dump from the specified file instead of from the system's current dump device. This option may be useful if the information stored on the dump device has been copied to an on-disk file by means of the dd(1M) command.

     -L            Save a crash dump of the live running Solaris system, without actually rebooting or altering the system in any way. This option forces savecore to save a live snapshot of the system to the dump device, and then immediately to retrieve the data and to write it out to a new set of crash dump files in the specified directory. Live system crash dumps can only be performed if you have configured your system to have a dedicated dump device using dumpadm(1M). savecore -L does not suspend the system, so the contents of memory continue to change while the dump is saved. This means that live crash dumps are not fully self-consistent.

     -v            Verbose. Enables verbose error messages from savecore.

OPERANDS
     The following operands are supported:

     directory     Save the crash dump files to the specified directory. If directory is not specified, savecore saves the crash dump files to the default savecore directory, configured by dumpadm(1M).

FILES
     directory/vmcore.n
     directory/unix.n
     directory/bounds
     directory/minfree
     /var/crash/'uname -n'     default crash dump directory

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWcsu                      |
     +-----------------------------+-----------------------------+

SEE ALSO
     adb(1), mdb(1), svcs(1), dd(1M), dumpadm(1M), svcadm(1M), syslog(3C), attributes(5), smf(5)

NOTES
     The system crash dump service is managed by the service management facility, smf(5), under the service identifier:

          svc:/system/dumpadm:default

     Administrative actions on this service, such as enabling, disabling, or requesting restart, can be performed using svcadm(1M). The service's status can be queried using the svcs(1) command.

     If the dump device is also being used as a swap device, you must run savecore very soon after booting, before the swap space containing the crash dump is overwritten by programs currently running.

SunOS 5.10                        25 Sep 2004                       savecore(1M)
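
For the Solaris savecore described above, both the dump directory and the minfree reserve can be adjusted when the default location is too small; a short sketch (the path and the 100000 KB figure are examples):

    # Show the current dump configuration:
    dumpadm

    # Point savecore at a filesystem with more room:
    dumpadm -s /var/crash/bigger-fs

    # Require about 100 MB to stay free after a dump is saved:
    echo 100000 > /var/crash/bigger-fs/minfree

    # Retrieve a pending dump by hand into that directory:
    savecore /var/crash/bigger-fs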