Emergency UNIX and Linux Support — Performance investigation, very high runq-sz %runocc
Post 302508849 by achenle on Tuesday, 29 March 2011, 09:31 AM
Forget "top". It's inaccurate enough on a Solaris box with even just a few CPUs.

What do "prstat -a" and "vmstat 2 20" show when the machine is slow?

How about "iostat -sndxz 2 20"?

FWIW, there appears to be plenty of CPU available.
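For anyone following along, the suggested commands can be captured in one pass. This is a sketch assuming a Solaris host (the thread's platform); `sar -q` is added here as an assumption on my part, since it is the usual source of the runq-sz and %runocc columns in the thread title, and output filenames are arbitrary:

```shell
# Capture the diagnostics suggested above in parallel (Solaris syntax).
prstat -a 2 5 > prstat.out &       # per-process and per-user CPU usage
vmstat 2 20 > vmstat.out &         # run queue (r column), paging, CPU idle
iostat -sndxz 2 20 > iostat.out &  # per-device I/O service times, %busy
sar -q 2 20 > sarq.out             # runq-sz and %runocc directly
wait                               # let all samplers finish (~40 seconds)
```

In `vmstat`, a run queue (`r`) persistently larger than the CPU count while `id` (idle) stays high usually points away from CPU saturation and toward scheduling or I/O waits, which is why the iostat output matters here.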
PMDUPCONTEXT(3)                                              Library Functions Manual                                              PMDUPCONTEXT(3)

NAME
       pmDupContext - duplicate a PMAPI context

C SYNOPSIS
       #include <pcp/pmapi.h>

       int pmDupContext(void);

       cc ... -lpcp

DESCRIPTION
       An application using the Performance Metrics Application Programming Interface (PMAPI) may manipulate several concurrent contexts,
       each associated with a source of performance metrics, e.g. pmcd(1) on some host, or an archive log of performance metrics as created
       by pmlogger(1).

       Calling pmDupContext will replicate the current PMAPI context, returning a handle for the new context that may be used with
       subsequent calls to pmUseContext(3). Once created, the duplicated context and the original context have independent existence, and so
       their instance profiles and collection time (relevant only for archive contexts) may be independently varied. The newly replicated
       context becomes the current context.

SEE ALSO
       PMAPI(3), pmNewContext(3) and pmUseContext(3).

Performance Co-Pilot                                                                                                               PMDUPCONTEXT(3)
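The duplicate-then-switch pattern the DESCRIPTION outlines can be sketched as follows. This is a minimal illustration, not PCP's official example; it assumes libpcp is installed, a pmcd(1) is reachable on "localhost", and the standard pmNewContext/pmUseContext/pmErrStr calls from pmapi.h:

```c
/* Sketch of duplicating a PMAPI context; build with: cc dup.c -lpcp */
#include <stdio.h>
#include <pcp/pmapi.h>

int main(void)
{
    int orig, dup, sts;

    /* Establish a context talking to pmcd(1) on the local host. */
    if ((orig = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0) {
        fprintf(stderr, "pmNewContext: %s\n", pmErrStr(orig));
        return 1;
    }

    /* Replicate the current context; the duplicate becomes current. */
    if ((dup = pmDupContext()) < 0) {
        fprintf(stderr, "pmDupContext: %s\n", pmErrStr(dup));
        return 1;
    }

    /* The two handles now exist independently (separate instance
     * profiles, separate collection time for archive contexts).
     * Switch back to the original when needed. */
    if ((sts = pmUseContext(orig)) < 0) {
        fprintf(stderr, "pmUseContext: %s\n", pmErrStr(sts));
        return 1;
    }
    return 0;
}
```

Because the duplicate becomes current immediately, code that wants to keep working against the original context must call pmUseContext with the original handle, as above.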
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.