Benchmark::Timer(3pm) User Contributed Perl Documentation Benchmark::Timer(3pm)
NAME
Benchmark::Timer - Benchmarking with statistical confidence
SYNOPSIS
           # Non-statistical usage
           use Benchmark::Timer;
           $t = Benchmark::Timer->new(skip => 1);
           for(1 .. 1000) {
               $t->start('tag');
               &long_running_operation();
               $t->stop('tag');
           }
           print $t->report;

           # --------------------------------------------------------------------

           # Statistical usage
           use Benchmark::Timer;
           $t = Benchmark::Timer->new(skip => 1, confidence => 97.5, error => 2);
           while($t->need_more_samples('tag')) {
               $t->start('tag');
               &long_running_operation();
               $t->stop('tag');
           }
           print $t->report;
DESCRIPTION
The Benchmark::Timer class allows you to time portions of code conveniently, as well as benchmark code by allowing timings of repeated
trials. It is perfect for when you need more precise information about the running time of portions of your code than the Benchmark module
will give you, but don't want to go all out and profile your code.
The methodology is simple; create a Benchmark::Timer object, and wrap portions of code that you want to benchmark with "start()" and
"stop()" method calls. You can supply a tag to those methods if you plan to time multiple portions of code. If you provide error and
confidence values, you can also use "need_more_samples()" to determine, statistically, whether you need to collect more data.
After you have run your code, you can obtain information about the running time by calling the "results()" method, or get a descriptive
benchmark report by calling "report()". If you run your code over multiple trials, the average time is reported. This is wonderful for
benchmarking time-critical portions of code in a rigorous way. You can also optionally choose to skip any number of initial trials to cut
down on initial case irregularities.
METHODS
In all of the following methods, $tag refers to the user-supplied name of the code being timed. Unless otherwise specified, $tag defaults
to the tag of the last call to "start()", or "_default" if "start()" was not previously called with a tag.
$t = Benchmark::Timer->new( [options] );
Constructor for the Benchmark::Timer object; returns a reference to a timer object. Takes the following named arguments:
skip
The number of trials (if any) to skip before recording timing information.
minimum
The minimum number of trials to run.
error
A percentage between 0 and 100 which indicates how much error you are willing to tolerate in the average time measured by the
benchmark. For example, a value of 1 means that you want the reported average time to be within 1% of the real average time.
"need_more_samples()" will use this value to determine when it is okay to stop collecting data.
If you specify an error you must also specify a confidence.
confidence
A percentage between 0 and 100 which indicates how confident you want to be in the error measured by the benchmark. For example, a
value of 97.5 means that you want to be 97.5% confident that the real average time is within the error margin you have specified.
"need_more_samples()" will use this value to compute the estimated error for the collected data, so that it can determine when it
is okay to stop.
If you specify a confidence you must also specify an error.
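       A minimal sketch combining these options (the tag name, trial counts, and
       the timed body are illustrative, not from this documentation):

       ```perl
       use strict;
       use warnings;
       use Benchmark::Timer;

       # Skip the first 2 trials, run at least 10, and keep sampling until
       # the average is within 5% of the true mean with 95% confidence.
       my $t = Benchmark::Timer->new(
           skip       => 2,
           minimum    => 10,
           error      => 5,
           confidence => 95,
       );

       while ($t->need_more_samples('sort')) {
           $t->start('sort');
           my @sorted = sort { $a <=> $b } map { rand } 1 .. 1000;
           $t->stop('sort');
       }
       print $t->report('sort');
       ```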
$t->reset;
Reset the timer object to the pristine state it started in. Erase all memory of tags and any previously accumulated timings. Returns
a reference to the timer object. It takes the same arguments the constructor takes.
$t->start($tag);
Record the current time so that when "stop()" is called, we can calculate an elapsed time.
$t->stop($tag);
Record timing information. If $tag is supplied, it must correspond to one given to a previously called "start()" call. It returns the
elapsed time in milliseconds. "stop()" croaks if the timer gets out of sync (e.g. the number of "start()"s does not match the number
of "stop()"s.)
$t->need_more_samples($tag);
Compute the estimated error in the average of the data collected thus far, and return true if that error exceeds the user-specified
error. If a $tag is supplied, it must correspond to one given to a previously called "start()" call.
This routine assumes that the data are normally distributed.
$t->report($tag);
Returns a string containing a simple report on the collected timings for $tag. This report contains the number of trials run, the
total time taken, and, if more than one trial was run, the average time needed to run one trial and error information. "report()" will
complain (via a warning) if a tag is still active.
$t->reports;
In a scalar context, returns a string containing a simple report on the collected timings for all tags. The report is a concatenation
       of the individual tag reports, in the original tag order. In a list context, returns a hash keyed by tag and containing reports for
each tag. The return value is actually an array, so that the original tag order is preserved if you assign to an array instead of a
hash. "reports()" will complain (via a warning) if a tag is still active.
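       A short sketch of the scalar-versus-list distinction (the tag names are
       made up for illustration):

       ```perl
       use strict;
       use warnings;
       use Benchmark::Timer;

       my $t = Benchmark::Timer->new;
       for my $tag (qw(parse render)) {       # hypothetical tag names
           $t->start($tag);
           select(undef, undef, undef, 0.01); # ~10ms of work to time
           $t->stop($tag);
       }

       my $all     = $t->reports;  # scalar context: one concatenated string
       my %by_tag  = $t->reports;  # list context: tag => report pairs
       my @ordered = $t->reports;  # assign to an array to keep tag order
       ```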
$t->result($tag);
Return the time it took for $tag to elapse, or the mean time it took for $tag to elapse once, if $tag was used to time code more than
once. "result()" will complain (via a warning) if a tag is still active.
$t->results;
Returns the timing data as a hash keyed on tags where each value is the time it took to run that code, or the average time it took, if
that code ran more than once. In scalar context it returns a reference to that hash. The return value is actually an array, so that the
original tag order is preserved if you assign to an array instead of a hash.
$t->data($tag), $t->data;
These methods are useful if you want to recover the full internal timing data to roll your own reports.
If called with a $tag, returns the raw timing data for that $tag as an array (or a reference to an array if called in scalar context).
This is useful for feeding to something like the Statistics::Descriptive package.
If called with no arguments, returns the raw timing data as a hash keyed on tags, where the values of the hash are lists of timings for
that code. In scalar context, it returns a reference to that hash. As with "results()", the data is internally represented as an array
so you can recover the original tag order by assigning to an array instead of a hash.
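       For example, a hand-rolled mean over the raw timings might look like this
       (the tag and the timed loop body are illustrative; Statistics::Descriptive
       would accept the same array directly):

       ```perl
       use strict;
       use warnings;
       use Benchmark::Timer;

       my $t = Benchmark::Timer->new;
       for (1 .. 5) {
           $t->start('loop');
           my $x = 0;
           $x += $_ for 1 .. 10_000;   # some work to time
           $t->stop('loop');
       }

       # Raw per-trial timings for one tag, in trial order.
       my @timings = $t->data('loop');

       # Roll your own statistic, e.g. the mean.
       my $mean = 0;
       $mean += $_ for @timings;
       $mean /= @timings;
       printf "mean of %d trials: %g\n", scalar @timings, $mean;
       ```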
BUGS
Benchmarking is an inherently futile activity, fraught with uncertainty not dissimilar to that experienced in quantum mechanics. But things
are a little better if you apply statistics.
LICENSE
This code is distributed under the GNU General Public License (GPL). See the file LICENSE in the distribution,
http://www.opensource.org/gpl-license.html, and http://www.opensource.org/.
AUTHOR
The original code (written before April 20, 2001) was written by Andrew Ho <andrew@zeuscat.com>, and is copyright (c) 2000-2001 Andrew Ho.
Versions up to 0.5 are distributed under the same terms as Perl.
Maintenance of this module is now being done by David Coppit <david@coppit.org>.
SEE ALSO
Benchmark, Time::HiRes, Time::Stopwatch, Statistics::Descriptive
perl v5.10.1 2009-12-03 Benchmark::Timer(3pm)