My test hardware is the following:

Code:
inxi -v2 -C -D -M -R

System:    Host: pvetest Kernel: 5.3.10-1-pve x86_64 bits: 64 Console: tty 1 Distro: Debian GNU/Linux 10 (buster) 
Machine:   Type: Desktop Mobo: Intel model: DQ67SW v: AAG12527-309 serial: BQSW133004FE BIOS: Intel 
           v: SWQ6710H.86A.0067.2014.0313.1347 date: 03/13/2014 
CPU:       Topology: Quad Core model: Intel Core i7-2600 bits: 64 type: MT MCP L2 cache: 8192 KiB 
           Speed: 1687 MHz min/max: 1600/3800 MHz Core speeds (MHz): 1: 2690 2: 3287 3: 3659 4: 3682 5: 1887 6: 3648 7: 3658 
           8: 2228 
Network:   Device-1: Intel 82579LM Gigabit Network driver: e1000e 
Drives:    Local Storage: total: 3.97 TiB used: 12.73 GiB (0.3%) 
           ID-1: /dev/sda model: N/A size: 930.99 GiB 
           ID-2: /dev/sdb model: 1 size: 930.99 GiB 
           ID-3: /dev/sdc model: 2 size: 930.99 GiB 
           ID-4: /dev/sdd model: 3 size: 930.99 GiB 
           ID-5: /dev/sde vendor: Intel model: SSDSC2MH120A2 size: 111.79 GiB 
           ID-6: /dev/sdf vendor: Samsung model: SSD 850 EVO M.2 250GB size: 232.89 GiB 
RAID:      Hardware-1: Intel SATA Controller [RAID mode] driver: ahci 
           Hardware-2: Adaptec AAC-RAID driver: aacraid

The hard disks are SAS drives attached to the Adaptec RAID controller as single disks. One Intel SSD holds the OS filesystem; the other SSD, the Samsung 850 EVO M.2, is attached via a PCIe M.2 adapter card. An additional M.2 SSD will be attached later for tests with SSD caching.

For the tests I will use fio (the flexible I/O tester), currently one of the most popular storage benchmarking tools.
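
To get a first baseline I will run single-pattern jobs directly against the raw devices. A minimal sketch of such a run (not my final job definition; /dev/DISK is a placeholder for one of the stable device names described below, and writing to a raw device destroys its contents):

Code:
# Sequential read baseline: 60 s, direct I/O, 1M blocks, queue depth 32
fio --name=seqread --filename=/dev/DISK --direct=1 --rw=read \
    --bs=1M --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --group_reporting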

My production scenario will be web hosting, so the workload will be roughly 25% writes and 75% reads. I will probably test that mix later, after the basic read/write tests.
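
As a sketch of what that mix could look like (the parameters here are assumptions, not the final test definition), fio can generate it with a random read/write job and rwmixread=75:

Code:
# 75% random reads / 25% random writes, 4k blocks -- illustrative only,
# /dev/DISK is again a placeholder for a stable device name
fio --name=webhosting-mix --filename=/dev/DISK --direct=1 \
    --rw=randrw --rwmixread=75 --bs=4k --ioengine=libaio \
    --iodepth=16 --numjobs=4 --runtime=120 --time_based --group_reporting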

First of all I make sure that the device names I use are fixed, so that my tests do not overwrite the wrong disks. This can happen under Linux because storage devices have no fixed naming: the order may differ at every reboot, and on this machine it actually does, as I have noticed.

So I check the serial numbers and map the device file names to unique names, which I will then use throughout the tests.
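
The idea is roughly the following (a sketch only, with made-up names; the actual script is linked below):

Code:
# Persistent, serial-based names already exist under /dev/disk/by-id/
ls -l /dev/disk/by-id/ | grep -v part

# Or show model and serial per device directly
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# Then create an unambiguous name for use in the benchmark scripts,
# e.g. (the by-id name and the target path are placeholders):
ln -s /dev/disk/by-id/scsi-SERIAL_OF_DISK_1 /root/bench/hdd1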

Regarding partitions: I avoid them and use whole disks instead, as that keeps the procedure simpler.

The git repository for the scripts is here:

GitHub - megabert/storage-benchmarks: Storage Benchmark Scripts

The script for creating the device names is here:

storage-benchmarks/mk_dev_names at master · megabert/storage-benchmarks · GitHub
