Filesystem Benchmarks for HDDs and SSDs - Post 303045051 by stomp on Wednesday, 11th of March 2020, 08:14 AM
My test hardware is the following:

Code:
inxi -v2 -C -D -M -R

System:    Host: pvetest Kernel: 5.3.10-1-pve x86_64 bits: 64 Console: tty 1 Distro: Debian GNU/Linux 10 (buster) 
Machine:   Type: Desktop Mobo: Intel model: DQ67SW v: AAG12527-309 serial: BQSW133004FE BIOS: Intel 
           v: SWQ6710H.86A.0067.2014.0313.1347 date: 03/13/2014 
CPU:       Topology: Quad Core model: Intel Core i7-2600 bits: 64 type: MT MCP L2 cache: 8192 KiB 
           Speed: 1687 MHz min/max: 1600/3800 MHz Core speeds (MHz): 1: 2690 2: 3287 3: 3659 4: 3682 5: 1887 6: 3648 7: 3658 
           8: 2228 
Network:   Device-1: Intel 82579LM Gigabit Network driver: e1000e 
Drives:    Local Storage: total: 3.97 TiB used: 12.73 GiB (0.3%) 
           ID-1: /dev/sda model: N/A size: 930.99 GiB 
           ID-2: /dev/sdb model: 1 size: 930.99 GiB 
           ID-3: /dev/sdc model: 2 size: 930.99 GiB 
           ID-4: /dev/sdd model: 3 size: 930.99 GiB 
           ID-5: /dev/sde vendor: Intel model: SSDSC2MH120A2 size: 111.79 GiB 
           ID-6: /dev/sdf vendor: Samsung model: SSD 850 EVO M.2 250GB size: 232.89 GiB 
RAID:      Hardware-1: Intel SATA Controller [RAID mode] driver: ahci 
           Hardware-2: Adaptec AAC-RAID driver: aacraid

The hard disks are SAS drives, attached to the Adaptec RAID controller as single disks. One Intel SSD holds the OS filesystem; the other SSD is attached via a PCIe M.2 adapter. An additional M.2 SSD will be attached later for tests with SSD caching.

For the tests I will use fio - the flexible I/O tester - currently one of the most popular storage benchmarking tools.
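To give an idea of what such a test looks like, here is a minimal sketch of a 4k random-read job. The device path is a placeholder and the tuning values (block size, queue depth, runtime) are illustrative choices of mine, not my final test settings:

Code:
# 4k random reads with direct I/O (bypasses the page cache), 60 seconds
# Replace /dev/sdX with the disk under test - double-check the device first!
fio --name=randread \
    --filename=/dev/sdX \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based
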

My production scenario is web hosting, so the workload will be roughly 75% reads and 25% writes. I will probably test that mix later, after the basic read/write tests.
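fio can express such a mix directly via its rwmixread option. A sketch of what that job might look like - again, the device path and tuning values are placeholders, not my final settings:

Code:
# 75% random reads / 25% random writes, matching the web hosting scenario
# WARNING: this writes to the device and destroys any data on it
fio --name=webhosting-mix \
    --filename=/dev/sdX \
    --rw=randrw --rwmixread=75 \
    --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=16 \
    --numjobs=4 --group_reporting \
    --runtime=60 --time_based
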

First, I'm making sure the device names I use are fixed, so that my tests cannot overwrite the wrong disk. This can happen under Linux because there is no fixed device naming for storage devices: the ordering may differ on every reboot - and it actually does, as I have noticed.

So I check the serial numbers and map the device file names to unique names, which I will use from then on.
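My actual script is linked further below; just to illustrate the idea, a minimal sketch could look like this (the serial number and the alias name are made up for the example; udev's /dev/disk/by-id links provide much the same thing out of the box):

Code:
# Show the serial number of every whole disk
lsblk -d -n -o NAME,SERIAL,SIZE,MODEL

# Find the device that currently carries a known serial
# and give it a stable alias for the benchmark scripts
SERIAL="S2R5NX0HA12345X"              # hypothetical serial number
DEV=$(lsblk -d -n -o NAME,SERIAL | awk -v s="$SERIAL" '$2 == s {print $1}')
ln -sf "/dev/$DEV" /dev/bench_ssd1    # hypothetical alias name
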

Regarding partitions: I try to avoid them and use whole disks instead, as that keeps the procedure simpler.

The git repository for the scripts is here:

GitHub - megabert/storage-benchmarks: Storage Benchmark Scripts
https://github.com/megabert/storage-benchmarks

The script for creating the device names is this:

https://github.com/megabert/storage-benchmarks/blob/master/mk_dev_names
