Intel P4 2.8 GHz, 1 GB 800 MHz Rambus (RDRAM)

CPU: Intel P4 2.8 GHz, 512 KB cache - 5505.02 BogoMIPS
RAM: 1 GB 800 MHz Rambus (RDRAM)
Mobo: Intel 440
Disks: 2x Maxtor 80 GB, 1x Seagate 40 GB, 1x Maxtor 15 GB - all ReiserFS
Kernel: 2.6.4-rc1-mm1, self-compiled
Distro: Gentoo Linux 3.3.2-r5
Load: VMware, MySQL, Apache 2, MyDNS, Kylix 3 (compiling), zde, and a bunch of normal everyday apps in use

BYTE UNIX Benchmarks (Version 3.11)
System -- Linux avalon 2.6.4-rc1-mm1 #2 Tue Mar 2 02:13:26 EST 2004 i686 Intel(R) Pentium(R) 4 CPU 2.80GHz GenuineIntel GNU/Linux
Start Benchmark Run: Wed Mar 10 19:06:16 EST 2004
1 interactive user.
Dhrystone 2 without register variables 2354135.8 lps (10 secs, 6 samples)
Dhrystone 2 using register variables 2449676.2 lps (10 secs, 6 samples)
Arithmetic Test (type = arithoh) 8603100.6 lps (10 secs, 6 samples)
Arithmetic Test (type = register) 379019.8 lps (10 secs, 6 samples)
Arithmetic Test (type = short) 319691.0 lps (10 secs, 6 samples)
Arithmetic Test (type = int) 381003.4 lps (10 secs, 6 samples)
Arithmetic Test (type = long) 376588.6 lps (10 secs, 6 samples)
Arithmetic Test (type = float) 368390.1 lps (10 secs, 6 samples)
Arithmetic Test (type = double) 369366.9 lps (10 secs, 6 samples)
System Call Overhead Test 243579.0 lps (10 secs, 6 samples)
Pipe Throughput Test 396683.6 lps (10 secs, 6 samples)
Pipe-based Context Switching Test 114526.1 lps (10 secs, 6 samples)
Process Creation Test 6689.4 lps (10 secs, 6 samples)
Execl Throughput Test 1677.0 lps (9 secs, 6 samples)
File Read (10 seconds) 1060401.0 KBps (10 secs, 6 samples)
File Write (10 seconds) 254833.0 KBps (10 secs, 6 samples)
File Copy (10 seconds) 35295.0 KBps (10 secs, 6 samples)
File Read (30 seconds) 1010585.0 KBps (30 secs, 6 samples)
File Write (30 seconds) 245489.0 KBps (30 secs, 6 samples)
File Copy (30 seconds) 25137.0 KBps (30 secs, 6 samples)
C Compiler Test 562.8 lpm (60 secs, 3 samples)
Shell scripts (1 concurrent) 2466.3 lpm (60 secs, 3 samples)
Shell scripts (2 concurrent) 1325.5 lpm (60 secs, 3 samples)
Shell scripts (4 concurrent) 687.3 lpm (60 secs, 3 samples)
Shell scripts (8 concurrent) 274.4 lpm (60 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places 76568.3 lpm (60 secs, 6 samples)
Recursion Test--Tower of Hanoi 36708.6 lps (10 secs, 6 samples)


INDEX VALUES

TEST                                       BASELINE     RESULT   INDEX

Arithmetic Test (type = double)              2541.7   369366.9   145.3
Dhrystone 2 without register variables      22366.3  2354135.8   105.3
Execl Throughput Test                          16.5     1677.0   101.6
File Copy (30 seconds)                        179.0    25137.0   140.4
Pipe-based Context Switching Test            1318.5   114526.1    86.9
Shell scripts (8 concurrent)                    4.0      274.4    68.6
                                                              =========
SUM of 6 items                                                   648.1
AVERAGE                                                          108.0
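
For anyone curious how the index column is derived: each INDEX is simply the raw RESULT divided by its BASELINE, and the overall score is the average of the six indexed tests (the figures in the table confirm this). A quick, purely illustrative Python sketch using the numbers from this run:

Code:

# Recompute the BYTE UNIX Benchmarks index values from this run.
# Each index is RESULT / BASELINE; the overall score is the mean of
# the six indexed tests. All figures are copied from the table above.

results = {
    # test name:                              (baseline, result)
    "Arithmetic Test (type = double)":        (2541.7,   369366.9),
    "Dhrystone 2 without register variables": (22366.3,  2354135.8),
    "Execl Throughput Test":                  (16.5,     1677.0),
    "File Copy (30 seconds)":                 (179.0,    25137.0),
    "Pipe-based Context Switching Test":      (1318.5,   114526.1),
    "Shell scripts (8 concurrent)":           (4.0,      274.4),
}

indexes = {name: result / baseline
           for name, (baseline, result) in results.items()}

for name, idx in indexes.items():
    print(f"{name:42s} {idx:7.1f}")

total = sum(indexes.values())
print(f"{'SUM of 6 items':42s} {total:7.1f}")            # 648.1
print(f"{'AVERAGE':42s} {total / len(indexes):7.1f}")    # 108.0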