02-23-2006
Please post the version of the OS.
Your "Cannot dump 524190 pages to dumpdev hd (1/41)" error just means there was not enough space to create the dump - only 128000 pages were available - so that is not the underlying problem with your system.
Your problem is associated with this error:
Panic: HTFS: Bad directory ino 2 (offset 0) on HTFS hd (1/169)
That is what caused your system to crash (and attempt to create a crash dump).
This info on booting single user and filesystem checks may help.
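On SCO OpenServer the usual recovery for a corrupt HTFS directory is to boot to single-user (maintenance) mode and run a full filesystem check. The commands below are a sketch only - the device name must come from your own panic message (left elided here), and the exact fsck options vary by release:

```
: at the Boot: prompt, press Enter, then give the root password
: when asked for maintenance mode; then run a full check on the
: damaged HTFS filesystem, for example:
fsck -ofull /dev/...
```

Run the check on an unmounted (or root, single-user) filesystem only; a full check on a damaged HTFS directory may prompt before repairing or removing entries.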
cr_read(3) Library Functions Manual cr_read(3)
NAME
cr_read - read from crash dump
SYNOPSIS
DESCRIPTION
The cr_read() function attempts to read the memory area defined by mem_page and num_pages into the buffer pointed to by buf from the crash dump
opened using crash_cb.
The read starts at the position in the crash dump associated with the physical memory offset given by mem_page. If the physical memory page
mem_page does not exist in the crash dump, cr_read() sets *num_pages to 0 and returns 0.
No data transfer will occur past a page of memory that does not exist in the crash dump. If the starting position, mem_page, plus the read
length, *num_pages, goes past an area of memory that does not exist in the crash dump, cr_read() sets *num_pages to the number of consecutive pages
(starting at mem_page) actually read.
RETURN VALUE
cr_read() returns zero on success. Other possible return values are described in libcrash(5).
EXAMPLES
Assuming a process opened a crash dump, the following call to cr_read(3) reads the first pages from the crash dump into the buffer pointed
to by mybuf:
WARNINGS
cr_read() may return fewer pages than requested due to implementation details. Always check the number of pages returned. If fewer pages than
requested were returned, issue a new request starting at the first page not returned. Only if that new request reads zero pages (or returns an error)
can you be sure that the page was not dumped.
AUTHOR
cr_read() was developed by HP.
SEE ALSO
cr_open(3), libcrash(5).