06-25-2009
I think one of the biggest mistakes people make when testing is to time the test from start to finish and not pay attention to anything in between. For example, I always use netperf to measure network performance, or dt to measure disk, as well as other tools, BUT I always run collectl in another window to see what's happening to my CPU, disk, network, memory, and other subsystems while the test is in progress. If you get a bad end-to-end number and the intermediate numbers are very erratic, it may just be that a system or a network switch is misconfigured. Looking at the elapsed time or the average load will never give you a true picture.
In fact, if you run collectl with a monitoring interval of 0.1 seconds or even less while doing disk tests, you can actually watch the cache fill, as the tests run faster at first.
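A minimal sketch of that two-window setup, assuming collectl and netperf are installed; the server address 192.168.1.10 is a placeholder:

# Window 1: sample CPU, disk, network and memory ten times a second
# (-i sets the interval in seconds, -s picks the subsystems)
collectl -i .1 -scdnm

# Window 2: run the end-to-end benchmark as usual
# (-l sets the test length in seconds)
netperf -H 192.168.1.10 -l 60

Watching the collectl columns while netperf runs is what exposes the erratic intermediate behavior that an end-to-end average hides.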
-mark
7 More Discussions You Might Find Interesting
1. AIX
I'm doing performance testing for an application that runs on AIX.
But I don't know which memory performance parameters need to be collected. So far I only know a few:
1. page in
2. page out
3. fre
They are all collected by the "vmstat" command.
I want to know, except for the above... (2 Replies)
Discussion started by: adasong
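For the vmstat question above, a quick hedged sketch of watching those counters on AIX; the 2-second interval is just an example, and svmon is a complementary AIX tool not mentioned in the thread:

# Report virtual-memory statistics every 2 seconds:
# pi/po = pages in/out from paging space, fre = free-list size
vmstat 2

# One-shot global view of real memory and paging space (AIX)
svmon -G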
2. Web Development
What is a good approach for a performance-testing tool suite for web applications? I am specifically interested in tools that execute a certain set of tasks well, as opposed to tuning high-traffic sites. In other words, a profiler would be a good thing to have, although I understand these tools are... (4 Replies)
Discussion started by: figaro
3. IP Networking
Hello all!
I need to performance-test an MPLS switch, and I was thinking of using iperf to accomplish the task.
I had in mind using a Linux box with a Gigabit interface connected to an L2 switch over an 802.1Q trunk. On that interface I would create 20 VLANs with 20 different IP subnets.
... (0 Replies)
Discussion started by: ppucci
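A rough sketch of the VLAN setup described above; the interface name eth0, the VLAN IDs, and the addressing are all assumptions:

# Create 20 tagged VLANs on eth0, one /24 subnet per VLAN
for id in $(seq 101 120); do
    ip link add link eth0 name eth0.$id type vlan id $id
    ip addr add 10.0.$id.1/24 dev eth0.$id
    ip link set eth0.$id up
done

# Then drive traffic across each VLAN, with the far end
# running 'iperf -s':
iperf -c 10.0.101.2 -t 30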
4. Emergency UNIX and Linux Support
Hello all!
I need to performance-test an MPLS switch, and I was thinking of using iperf to accomplish the task.
I had in mind using a Linux box with a Gigabit interface connected to an L2 switch over an 802.1Q trunk. On that interface I would create 20 VLANs with 20 different IP subnets.
... (2 Replies)
Discussion started by: ppucci
5. UNIX for Dummies Questions & Answers
Hi Everyone,
My company is involved in performance testing, and now they want to run a couple of trainings on executing those tests on servers based on Unix systems.
And I have to provide a draft of the content for those trainings.
I think this kind of training... (2 Replies)
Discussion started by: Bartuss
6. Shell Programming and Scripting
Hi Experts,
I am new to shell. How can I extract logs (web, app, database) using shell in performance testing?
I need code for web server logs, app server logs, and DB logs.
Thanks in advance
Sree (3 Replies)
Discussion started by: sree vasu
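For the log-extraction question above, one hedged illustration: pulling a one-hour test window out of an Apache-style access log with awk. The log path and the timestamps are placeholders, and the string comparison on field 4 only works within a single day:

# Keep access-log entries between two timestamps
awk '$4 >= "[25/Jun/2009:10:00:00" && $4 <= "[25/Jun/2009:11:00:00"' \
    /var/log/httpd/access_log > test_window.log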
7. AIX
Hi All,
I have 5 servers (3 DataStage servers and 2 database servers running HACMP and DPF).
One of the main core DataStage servers has a few unsolved issues, which I will post as the following questions:
1. Why does the file cache in memory seem constantly... (3 Replies)
Discussion started by: ckwan
LEARN ABOUT CENTOS
gtester
GTESTER(1) User Commands GTESTER(1)
NAME
gtester - test running utility
SYNOPSIS
gtester [OPTION...] [testprogram]
DESCRIPTION
gtester is a utility to run unit tests that have been written using the GLib test framework.
When called with the -o option, gtester writes an XML report of the test results, which can be converted into HTML using the gtester-report
utility.
OPTIONS
-h, --help
print help and exit
-v, --version
print version information and exit
--g-fatal-warnings
make warnings fatal
-k, --keep-going
continue running after tests failed
-l
list paths of available test cases
-m=MODE
run test cases in MODE, which can be one of:
perf
run performance tests
slow, thorough
run slow tests, or repeat non-deterministic tests more often
quick
do not run slow or performance tests, or do extra repeats of non-deterministic tests (default)
undefined
run test cases that deliberately provoke checks or assertion failures, if implemented (default)
no-undefined
do not run test cases that deliberately provoke checks or assertion failures
-p=TESTPATH
only run test cases matching TESTPATH
-s=TESTPATH
skip test cases matching TESTPATH
--seed=SEEDSTRING
run all test cases with random number seed SEEDSTRING
-o=LOGFILE
write the test log to LOGFILE
-q, --quiet
suppress per test binary output
--verbose
report success per testcase
SEE ALSO
gtester-report(1)
GLib GTESTER(1)
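A small usage sketch tying the options above together; the program name test-suite and its source file are hypothetical:

# Build a GLib-based test program (names are placeholders)
gcc -o test-suite test-suite.c $(pkg-config --cflags --libs glib-2.0)

# Run it, continue past failures (-k), log results as XML (-o)
gtester -k -o results.xml ./test-suite

# Convert the XML log into an HTML report
gtester-report results.xml > results.html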