SCO: Unable to dump due to limited space?
Post 100061 by RTM, Thursday 23 February 2006, 10:35 AM
Please post the version of the OS.

Your "Cannot dump 524190 pages to dumpdev hd (1/41)" errror is just lack of space to create the dump - only 128000 was available - that isn't your problem with your system.

Your problem is associated with this error:
Panic: HTFS: Bad directory ino 2 (offset 0) on HTFS hd (1/169)

That is what caused your system to crash (and attempt to create a crash dump).

This info on booting single user and running filesystem checks may help; a rough outline is sketched below.
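
In case that link is not handy, here is a minimal sketch of the procedure, assuming SCO OpenServer 5 and that the damaged filesystem hd (1/169) is mounted from /dev/root; the device name and fsck flags may differ on your release, so check the fsck documentation first:

# 1. Bring the machine up in single user (maintenance) mode; for the root
#    filesystem you may need to boot from the emergency boot/root floppy
#    set so the filesystem can be checked while unmounted.

# 2. Run a full structural check on the damaged HTFS filesystem
#    (/dev/root is an assumption; substitute the device for hd (1/169)):
fsck -ofull /dev/root

# 3. If fsck cannot repair inode 2 (the root directory of that filesystem),
#    plan on restoring the filesystem from backup.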
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Unable to catch the output after core dump and bus error

I have a weird situation in which the binary dumps core and gives a bus error. But before dumping the core and throwing the bus error, it gives some output. Unfortunately I can't grep the output before the core dump. db2bfd -b test.bnd maxSect 15 Bus Error (core dumped) But if I do ... (4 Replies)
Discussion started by: rakeshou
4 Replies

2. HP-UX

HPVM Unable to create more guests due to lack of RAM

Hi All, There are few threads regarding this subject of being unable to create more guests due to lack of RAM. So I am aware how the sum works.. add 8.5% to whatever is allocated, be that the host or guest. But I'm not sure if I have a hardware issue with memory or what I see is correct as I am... (3 Replies)
Discussion started by: EricF
3 Replies

3. Solaris

Solaris file system unable to use available space

Hi, The Solaris filesystem /u01 shows available space as 100GB, and used space as 6 GB. The problem is when I am trying to install some software or copy some files in this file system /u01, I am unable to copy or install in this file system due to lack of space. Of course the software... (31 Replies)
Discussion started by: iris1
31 Replies

4. Red Hat

Unable to free space due to inode in use by database

Hi, I am having a similar issue: the filesystem shows 100% even after deleting the files. I understood the issue after going through this chain. But I cannot restart the processes, as they are an Oracle database. Is there a way, such as mounting the filesystem with specific options, to avoid this issue? How... (0 Replies)
Discussion started by: prashant185
0 Replies

5. OS X (Apple)

Compiling fails due to space in path to home folder

I seem to have issues compiling software and I think I've narrowed it down to something having to do with having a space in the path name to my Home folder (which contains "Macintosh HD"). The reason I think this is shown here: $ echo $HOME /Volumes/Macintosh HD/Users/Tom $ cd $HOME -sh:... (7 Replies)
Discussion started by: tdgrant1
7 Replies

6. Shell Programming and Scripting

Unable to read the first space of a record in while loop

I have a loop like while read i do echo "$i" . . . done < tms.txt The tms.txt contains data like 2008-02-03 00:00:00 <space>00:00:00 . . . 2010-02-03 10:54:32 (2 Replies)
Discussion started by: machomaddy
2 Replies

7. Red Hat

Unable to copy files due to many files in directory

I have a directory that has some billion files inside. I tried to copy some files for a specific date but it always did not respond for a long time and did not give any result. I tried everything with the find command and also with xargs. Even this command find . -mtime -2 -print | xargs ls -d did not... (2 Replies)
Discussion started by: before4
2 Replies

8. HP-UX

Unable to create a tar file due to link

Hi, I am trying to tar a directory structure, but am unable to do so due to a symbolic link. Please help. indomt@behpux $ tar -cvf test.tar /home/indomt a /home/indomt symbolic link to /dxdv/03/ap1dm1 Thanks (1 Reply)
Discussion started by: nag_sathi
1 Replies

9. HP-UX

Unable to get full FS space after mounting

Hi, I am unable to get the full FS space, as /home is 100% utilized and after deleting unwanted files, it's still 100%. After checking the du -sk * | sort -n output and converting it to MBs, the total size comes out to be 351 MB only; however, the lvol is of 3GB. I don't know where all the space... (2 Replies)
Discussion started by: Kits
2 Replies

10. Ubuntu

Unable to add space

My / root directory has file system /dev/sda1 with 19G of space. I want to add some more space to the /home directory but am unable to do it; while running the below command I get the below message: $ sudo mkfs -t ext4 /dev/sda2 mke2fs 1.42.9 (4-Feb-2014) mkfs.ext4: inode_size (128) * inodes_count (0) too... (4 Replies)
Discussion started by: megh
4 Replies
savecore(8)						      System Manager's Manual						       savecore(8)

Name
       savecore - save a core dump of the operating system

Syntax
       /etc/savecore [ options ] dirname [ system ] [ corename ]

Description
       The savecore command is meant to be called near the end of the system startup file.  The command saves the core dump of the system
       (assuming one was made) and writes a reboot message in the shutdown log.

       The command checks the core dump to be certain it corresponds with the current running ULTRIX.  If it does, it saves the core image in
       the file dirname/vmcore.n and saves the namelist in the file dirname/vmunix.n.  The trailing .n in the pathnames is replaced by a number
       which increments each time savecore is run in that directory.
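
       For illustration only (the invocation below and the directory name /usr/adm/crash are assumptions, not taken from this manual page), a
       typical call from the startup file, and the files it might leave behind after two crashes, would look like:

              /etc/savecore /usr/adm/crash

              /usr/adm/crash/vmcore.0   /usr/adm/crash/vmunix.0
              /usr/adm/crash/vmcore.1   /usr/adm/crash/vmunix.1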

       After saving the core and namelist images, savecore will save the error logger buffer into a predetermined file.  The error logger buffer
       contains information about why the crash occurred.  After savecore completes, the error logger daemon will extract the error logger file
       and translate its contents into a form familiar to the uerf program.

       Before savecore writes out a core image, it reads a number from the file dirname/minfree.  If there are fewer free blocks on the
       filesystem that contains dirname than the number obtained from the file, a core dump is not saved.  If the file does not exist, savecore
       always writes out the core file (assuming that a core dump was taken).
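
       As a sketch (again assuming /usr/adm/crash as dirname), requiring at least 10000 free blocks on that filesystem before a dump is saved
       would be set up with:

              echo 10000 > /usr/adm/crash/minfree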

       The savecore command also writes a reboot message in the shutdown log.  If the system crashed as a result of a panic, savecore also
       records the panic string in the shutdown log.

       For partial crash dumps, savecore creates a sparse core image file in dirname/vmcore.n.  If this sparse core image file is copied or
       moved to another location, the file expands to its true size, which can take too much file system space.  Hence, to copy or move sparse
       core image files, you must use the dd(1) command, which has a conversion option to create sparse output files.
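
       As an illustration only: the exact conversion option depends on the dd implementation (GNU dd, for example, spells it conv=sparse), and
       the paths below are hypothetical:

              dd if=/usr/adm/crash/vmcore.0 of=/mnt/save/vmcore.0 conv=sparse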

Options
       -c   Clears the core dump.  This option is useful when the core dump is corrupted in a way that will not allow savecore to save it
	    safely.  Use this option with caution, because once it clears the core dump, the core dump cannot be retrieved.

       -d dumpdev dumplo
	    Specifies the dump device and the dump offset when running on a system image other than the currently running system image.  The
	    program assumes that the running system image is /vmunix and reads the dump device and dump device offset from it.  If the dump
	    device and dump device offset are different in the system image that crashed, the -d option provides the correct dump device and
	    dump device offset.

       -e   Saves only the error logger buffer into a file.  If used, core or namelist images are not saved.

       -f corename
	    Takes the named corefile as the file from which to extract the crash dump data instead of the default dump device.  This option
	    is used only for diskless workstations.

       If the core dump was from a system other than /vmunix, the name of that system must be supplied as system.  Otherwise, the program
       assumes that the running image is /vmunix.

       After successful completion, the core dump is cleared.  Also, a message is written to the shutdown log which tells whether the dump
       succeeded or failed.

Files
       Shut down log

       Current running ULTRIX system

See Also
       dd(1), uerf(8)

																       savecore(8)