99% performance wa, slow server. Post 302562641 by draiphod on Friday 7th of October 2011, 01:47:12 PM
The problem is solved. It was a file system crash. Thank you.
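For anyone landing here with the same symptom (CPU time stuck in I/O wait), a minimal sketch of how such a diagnosis might be confirmed; the device and mount point below are placeholders, and fsck should only be run on an unmounted filesystem:

    # Check whether the box is really spending its time in I/O wait (%iowait column)
    iostat -x 5 3
    # Look for disk or filesystem errors the kernel has already logged
    dmesg | grep -iE 'i/o error|ext[34]|xfs'
    # Check the suspect filesystem; /data and /dev/sda1 are placeholders
    umount /data
    fsck -y /dev/sda1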
 

8 More Discussions You Might Find Interesting

1. Post Here to Contact Site Administrators and Moderators

Help! Slow Performance

Is the performance now very, very slow (pages take a very long time to load)? Or is it just me? Neo (6 Replies)
Discussion started by: Neo

2. Shell Programming and Scripting

egrep is very slow : How to improve performance

We have an egrep search in a while loop: egrep -w "$key" ${PICKUP_DIR}/new_update >> ${PICKUP_DIR}/update_record_new. ${PICKUP_DIR}/new_update is a 210 MB file. In each iteration, the egrep takes around 50-60 seconds on average to search. There's nothing significant in the loop other... (7 Replies)
Discussion started by: hidnana
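A common fix for the pattern described above is to replace the per-key egrep loop with a single pass that reads every key from a file; a minimal sketch, assuming the keys the loop iterates over have been collected into a file (keys.txt is a hypothetical name):

    # -F: fixed-string matching (faster than regex)
    # -w: match whole words, as the original egrep -w did
    # -f: read one pattern per line from keys.txt
    grep -Fw -f keys.txt "${PICKUP_DIR}/new_update" >> "${PICKUP_DIR}/update_record_new"

One pass over the 210 MB file then replaces one pass per key, which is usually the dominant cost.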

3. Filesystems, Disks and Memory

Slow Copy(CP) performance

Hi all, we have issues copying a 2.6 GB file from one folder to another. Well, this is not the first issue we are having on the box currently; I will try to explain everything we have done over the past 2 days. We got a message 2 days back saying that our Production is 98%... (3 Replies)
Discussion started by: b_sri
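With a nearly full production filesystem, the first question is usually whether the copy is starved for space or for I/O; a hedged sketch of the initial checks (the destination path is a placeholder):

    df -h /destination     # free blocks on the target filesystem
    df -i /destination     # free inodes, which can be exhausted before blocks are
    iostat -x 5 3          # per-device utilisation and wait times while the copy runs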

4. UNIX for Dummies Questions & Answers

Slow copy/performance... between volumes

Hi guys, we are seeing weird issues on my SUSE Linux 10 box; it has Lotus 8.5, with one filesystem for the OS and another for the Lotus database. The issue is that when the Lotus service starts, wait in top is very high (about 25%) and in general CPU usage is very high. We found that when this happens, if we... (0 Replies)
Discussion started by: kopper

5. Solaris

Hard disk write performance very slow

Dear all, I have a hard disk on Solaris on which the write performance is too slow. The CPU and RAM are absolutely fine. What might be the reason? Kindly explain. Rj (9 Replies)
Discussion started by: jegaraman
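A hedged sketch of how raw write throughput is often measured on Solaris before blaming the disk or the application; the output path is a placeholder, and the test file should be removed afterwards:

    # Per-disk busy percentage and service times (Solaris)
    iostat -xn 5 3
    # Rough sequential write test: push 1 GB through the filesystem
    time dd if=/dev/zero of=/export/home/ddtest.bin bs=1024k count=1024
    rm /export/home/ddtest.bin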

6. Shell Programming and Scripting

Slow performance filtering file

Please, I need help tuning my script. It works, but it's too slow. The code reads an activity log file with 50,000-100,000 lines and filters error messages from it. The data in the actlog file look similar to this: 02/08/2011 00:25:01,ANR2034E QUERY MOUNT: No match found using this criteria.... (5 Replies)
Discussion started by: Miila
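For a log of that size, a single pass with awk or grep is usually fast enough; a minimal sketch, assuming (as in TSM-style logs) that message IDs ending in E, such as ANR2034E, mark errors, and with hypothetical file names:

    # Field 2 (after the timestamp and comma) begins with the message ID; keep error IDs only
    awk -F',' '$2 ~ /^ANR[0-9]+E/' actlog > actlog.errors
    # Equivalent with grep: any line carrying an ANRnnnnE code
    grep -E 'ANR[0-9]{4}E' actlog > actlog.errors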

7. Solaris

Solaris 11.1 Slow Network Performance

I have identical M5000 machines that need to transfer very large amounts of data between them. These are fully loaded machines, and I've already checked I/O, memory usage, etc... I get poor network performance even when the machines are idle or copying via loopback. The 10 GB NICs are... (7 Replies)
Discussion started by: christr
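On Solaris 11 the usual first checks are whether the links really negotiated 10 Gb and whether TCP buffer limits cap a single stream; a hedged sketch (exact property names can differ between Solaris 11 updates):

    # Confirm link state and negotiated speed
    dladm show-phys
    # Show TCP buffer limits that bound single-connection throughput
    ipadm show-prop -p max_buf,send_buf,recv_buf tcp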

8. Filesystems, Disks and Memory

Slow copy (cp) performance when overwriting files

I have a lot of binary files I need to copy to a folder. The folder is already filled with files of the same name. Copying on top of the old files takes MUCH longer than if I were to delete the old files then copy the new files to the now-empty folder. This result is specific to one system -... (3 Replies)
Discussion started by: ces55
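A hedged workaround sketch for the overwrite slowdown described above: unlink the old file first so the copy writes a fresh file rather than rewriting an existing one (GNU cp has a flag for this; paths and file names are placeholders):

    # GNU coreutils: remove each existing destination file before copying
    cp --remove-destination newfiles/*.bin /data/dest/
    # Portable equivalent for a single file
    rm -f /data/dest/file.bin && cp newfiles/file.bin /data/dest/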
ABRT-ACTION-ANALYZ(1)						    ABRT Manual 					     ABRT-ACTION-ANALYZ(1)

NAME
       abrt-action-analyze-backtrace - Analyzes C/C++ backtrace, generates duplication hash, backtrace rating, and identifies crash function in problem directory DIR.

SYNOPSIS
       abrt-action-analyze-backtrace [-v] [-d DIR]

DESCRIPTION
       The tool reads a file named backtrace from the problem directory, generates a duplication hash and a backtrace rating, and identifies the crash function. Then it saves this data as new elements duphash, rating and crash_function in this problem directory.

   Integration with libreport events
       abrt-action-analyze-backtrace can be used as a secondary analyzer, after a backtrace has been generated. The data generated by abrt-action-analyze-backtrace is useful for reporting the crash to bug databases: rating makes it possible to prevent reporting of bugs with low quality (non-informative) backtraces, and the duplication hash is used to find already filed bugs about similar crashes.

       Example usage in report_event.conf:

           EVENT=analyze analyzer=CCpp
                   abrt-action-generate-backtrace || exit $?
                   abrt-action-analyze-backtrace

OPTIONS
       -d DIR
           Path to problem directory.

       -v  Be more verbose. Can be given multiple times.

AUTHORS
       o ABRT team

SEE ALSO
       abrt-action-generate-backtrace

abrt 2.1.11                                                06/18/2014                                             ABRT-ACTION-ANALYZ(1)
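A hedged example of running the analyzer by hand against a problem directory; the directory name below is hypothetical, and abrt-action-generate-backtrace is assumed to accept the same -d DIR option described for the analyzer:

    # Generate the backtrace, then derive duphash, rating and crash_function in place
    abrt-action-generate-backtrace -d /var/spool/abrt/ccpp-2014-06-18-12:00:00-1234
    abrt-action-analyze-backtrace -v -d /var/spool/abrt/ccpp-2014-06-18-12:00:00-1234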