99% performance wa, slow server - Post 302562641 by draiphod on Friday 7th of October 2011, 01:47:12 PM
The problem is solved. It was caused by a crashed (corrupted) filesystem. Thank you.
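For anyone else hitting this symptom (the "wa" column in top pegged near 99%), a minimal sketch of the usual checks, assuming a Linux host with the sysstat package installed and a hypothetical /dev/sda1 as the suspect filesystem:

    # Confirm the CPU time is going to I/O wait, not user/system load.
    top -b -n 1 | head -5        # a %wa value near 99 points at the disks
    iostat -x 2 3                # sustained high await/%util on one device

    # Look for filesystem or disk errors in the kernel log.
    dmesg | tail -50

    # Check and repair the filesystem -- unmount it first; fsck -y answers
    # yes to every repair prompt, so back up anything valuable beforehand.
    umount /dev/sda1
    fsck -y /dev/sda1
    mount /dev/sda1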
 

8 More Discussions You Might Find Interesting

1. Post Here to Contact Site Administrators and Moderators

Help! Slow Performance

Is the performance now very, very slow (pages take a very long time to load)? Or is it just me? Neo (6 Replies)
Discussion started by: Neo

2. Shell Programming and Scripting

egrep is very slow : How to improve performance

We have an egrep search in a while loop. egrep -w "$key" ${PICKUP_DIR}/new_update >> ${PICKUP_DIR}/update_record_new. ${PICKUP_DIR}/new_update is a 210 MB file. In each iteration, the egrep on average takes around 50-60 seconds to search. There's nothing significant in the loop other... (7 Replies)
Discussion started by: hidnana
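The thread is truncated here, but one common rework is worth noting: running egrep once per key re-reads the 210 MB file on every pass. Collecting the keys into a file and scanning once is usually dramatically faster. A sketch, where keys.txt is a hypothetical file holding one key per line:

    # One pass over the big file instead of one pass per key.
    grep -F -w -f keys.txt "${PICKUP_DIR}/new_update" \
        >> "${PICKUP_DIR}/update_record_new"

This turns N full reads of the file into a single read; -F treats each key as a fixed string and -w keeps the whole-word semantics of egrep -w.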

3. Filesystems, Disks and Memory

Slow Copy(CP) performance

Hi all, we have got issues with copying a 2.6 GB file from one folder to another. Well, this is not the first issue we are having on the box currently; I will try to explain everything we have done over the past 2 days. We got a message 2 days back saying that our Production is 98%... (3 Replies)
Discussion started by: b_sri

4. UNIX for Dummies Questions & Answers

Slow copy/performance... between volumes

Hi guys, we are seeing weird issues on my SUSE Linux 10 box: it has Lotus 8.5, with one filesystem for the OS and another for the Lotus database. The issue is that when the Lotus service starts, wait (wa) in top is very high, about 25%, and in general CPU usage is very high. We found that when this happens, if we... (0 Replies)
Discussion started by: kopper

5. Solaris

Hard disk write performance very slow

Dear All, I have a hard disk in Solaris on which the write performance is too slow. The CPU and RAM are absolutely fine. What might be the reason? Kindly explain. Rj (9 Replies)
Discussion started by: jegaraman

6. Shell Programming and Scripting

Slow performance filtering file

Please, I need help tuning my script. It works, but it's too slow. The code reads an activity log file with 50,000-100,000 lines and filters error messages from it. The data in the actlog file looks similar to this: 02/08/2011 00:25:01,ANR2034E QUERY MOUNT: No match found using this criteria.... (5 Replies)
Discussion started by: Miila
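At this size (50,000-100,000 lines) a single pass with awk or grep is normally enough; per-line loops that spawn external commands are what make such scripts crawl. A minimal sketch, assuming error messages are the ones whose ANR code ends in E (as ANR2034E in the sample does) and actlog is the input file:

    # Field 2 (after the comma) starts with the message code, e.g. ANR2034E.
    awk -F, '$2 ~ /^ANR[0-9]+E/' actlog > errors.out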

7. Solaris

Solaris 11.1 Slow Network Performance

I have identical M5000 machines that need to transfer very large amounts of data between them. These are fully loaded machines, and I've already checked I/O, memory usage, etc. I get poor network performance even when the machines are idle or copying via loopback. The 10 Gb NICs are... (7 Replies)
Discussion started by: christr

8. Filesystems, Disks and Memory

Slow copy (cp) performance when overwriting files

I have a lot of binary files I need to copy to a folder. The folder is already filled with files of the same name. Copying on top of the old files takes MUCH longer than if I were to delete the old files then copy the new files to the now-empty folder. This result is specific to one system -... (3 Replies)
Discussion started by: ces55
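One plausible explanation, depending on the filesystem, is that overwriting forces cp to open and truncate each existing file, which can be slower than creating a fresh one. With GNU coreutils, cp can unlink each destination first, mimicking the delete-then-copy behaviour in one step (src/* and dest/ are placeholder paths):

    # Unlink each existing destination file instead of overwriting in place.
    cp --remove-destination src/* dest/

Where GNU cp is not available, an explicit rm of the destination files before the copy is the equivalent experiment.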
expand_dump(8)						      System Manager's Manual						    expand_dump(8)

NAME
       expand_dump - Produces a non-compressed kernel crash dump file

SYNOPSIS
       /usr/sbin/expand_dump input-file output-file

DESCRIPTION
       By default, kernel crash dump files (vmzcore.#) are compressed during
       the crash dump. Compressed core files can be examined by the latest
       versions of debugging tools that have been recompiled to support
       compressed crash dump files. However, not all debugging tools may be
       upgraded on a given system, or you may want to examine a crash dump
       from a remote system using an older version of a tool. The
       expand_dump utility produces a file that can be read by tools that
       have not been upgraded to support compressed crash dump files. This
       non-compressed version can also be read by any upgraded tool.

       This utility can only be used with compressed crash dump files and
       does not support any other form of compressed file. You cannot use
       other decompression tools such as compress, gzip, or zip on a
       compressed crash dump file.

       Note that the non-compressed file will require significantly more
       disk storage space, as compression ratios of up to 60:1 are possible.
       Check the available disk space before running expand_dump and
       estimate the size of the non-compressed file as follows:

       1.  Run tests by halting your system and forcing a crash as described
           in the Kernel Debugging manual.

       2.  Use an upgraded debugger to determine the value of the variable
           dumpsize, then multiply this value by the 8 KB page size to
           approximate the required disk space of the non-compressed crash
           dump.

       Alternatively, run expand_dump and pipe the output file to /dev/null,
       noting the size of the file that is printed when expand_dump
       completes its task.
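As a worked version of that estimate, assuming the debugger reported a hypothetical dumpsize of 524288 (in units of 8 KB pages):

    # dumpsize (8 KB pages) x page size = bytes needed for the expanded dump
    dumpsize=524288                     # illustrative value read from dbx
    echo $((dumpsize * 8192))           # 4294967296 bytes, i.e. about 4 GB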
RETURN VALUES
       The exit status indicates one of the following conditions:

       -  Successful completion of the decompression.
       -  The user did not supply the correct number of command-line
          arguments.
       -  The input file could not be read.
       -  The input file is not a compressed dump, or is corrupted.
       -  The output file could not be created or opened for writing and
          truncated.
       -  There was a problem writing to the output file (probably a full
          disk).
       -  The input file is not formatted consistently; it is probably
          corrupted.
       -  The input file could not be correctly decompressed; it is probably
          corrupted.

EXAMPLES
       expand_dump vmzcore.4 vmcore.4

SEE ALSO
       Commands: dbx(1), kdbx(8), ladebug(1), savecore(8)

       Kernel Debugging

       System Administration