[Solved] High CPU utilization

# 1
Old 11-12-2012

Hi,

I am observing a few processes taking high CPU, and when I gathered some more details about them, this is what I see:


Code:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9452 xmp       25   0 16736 1224  860 R 100.0  0.0 903:54.18 ffmpeg -i -
 9777 xmp       25   0 16736 1224  860 R 100.0  0.0 904:05.66 ffmpeg -i -

[xmp@sbymipxmp16 ~]$ ps -ef |grep ffmpeg
xmp       9452  8487 89 Nov11 ?        15:04:32 ffmpeg -i -
xmp       9777  8487 90 Nov11 ?        15:04:43 ffmpeg -i -
xmp      31020 29948  0 13:15 pts/3    00:00:00 grep ffmpeg
[xmp@sbymipxmp16 ~]$ pwdx 9452
9452: /var/xmp/proc/TCP-ROUTER.sbymipxmp16.4
[xmp@sbymipxmp16 ~]$ pwdx 9777
9777: /var/xmp/proc/TCP-ROUTER.sbymipxmp16.4
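
Before killing anything I want to find out what the parent process (PID 8487 in the ps output above) actually is, and where these ffmpeg processes get their input from, since "ffmpeg -i -" means ffmpeg is reading from stdin. This is only a sketch of what I plan to run next, I have not tried it yet:

Code:
# Show the parent process that spawned both ffmpeg instances
ps -fp 8487

# List the open file descriptors of one of the ffmpeg processes;
# fd 0 (stdin) should point at whatever is being piped into it
ls -l /proc/9452/fd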

What does the ? (in the TTY column) mean in the ps -ef output? And can we safely kill these processes?
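
If they do turn out to be runaways that are safe to stop, my plan (again, just a sketch) would be to try a graceful SIGTERM first and only escalate to SIGKILL if they ignore it:

Code:
# Ask both processes to terminate cleanly (kill sends SIGTERM by default)
kill 9452 9777

# Only if they are still running after a few seconds, force-kill them
kill -9 9452 9777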

-Siddhesh
