Filesystem Benchmarks for HDDs and SSDs

Hi,

I'm interested in storage benchmarks for various configurations in order to figure out what's best for a virtualization environment. The virtualization environment will be Proxmox, which is currently my choice as the most manageable virtualization platform with a rich feature set.

I want to look at the following configuration options, which may have an impact on performance:
  • filesystem
  • lvm
  • thin provisioning
  • transparent compression
  • multi disk technology (technology, RAID level)
  • ssd caching
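
As a first idea of how such configurations can be measured: below is a minimal fio sanity check for 4k random writes. This is only a sketch; the target path, file size and queue depth are placeholder assumptions, not my final benchmark parameters.

    # 4k random writes with direct I/O (page cache bypassed), 60 s steady run
    # /mnt/testvol/fio.dat is a placeholder path on the filesystem under test
    fio --name=randwrite --filename=/mnt/testvol/fio.dat \
        --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based \
        --ioengine=libaio --iodepth=16 --direct=1 --group_reporting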

thin provisioning

Thin provisioning is a method of presenting virtually unlimited space while providing physical space only in the amount that is actually used. So you can define multiple TB of disk capacity while only having a 250 GB SSD at the back. If that backend device fills up, you add more storage when you need it. It's especially helpful in the age of SSDs, because they are still considerably more expensive, so you do not want to spend thousands of $ before you in fact need to. Furthermore, there are big differences between SSD products: SSDs for desktop use may be quite cheap, but server SSDs that are heavily written to are much more expensive.
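
As a minimal sketch of how this looks with LVM (the volume group vg0 and the pool/volume names are made-up examples):

    # create a 200 GB thin pool inside the existing volume group vg0
    lvcreate --type thin-pool -L 200G -n pool0 vg0

    # create a 2 TB thin volume; only blocks actually written consume pool space
    lvcreate --type thin -V 2T --thinpool vg0/pool0 -n vm-disk-1

    # keep an eye on the Data% column so the pool can be grown in time
    lvs vg0

If the pool runs full, you add a disk to the volume group and lvextend the pool.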

price example
  • normal consumer SSD: 500 GB M.2 SSDs start from 80 € (Total Lifetime Write Capacity: 300 TB = 600 full writes)
  • datacenter SSD: a 375 GB Intel Optane SSD DC P4800X PCIe costs about 1200 € (Total Lifetime Write Capacity: 20.5 PB ≈ 54,700 full writes)

filesystem and lvm

Many filesystems have interesting features that are helpful beyond pure performance, and also problems one would rather avoid:
  • PRO: zfs and btrfs have checksums and self-healing against data corruption.
  • PRO: zfs and lvm provide methods for thin provisioning.
  • PRO: ext4 is easy to use; a simple fire-and-forget filesystem.
  • PRO: btrfs offers enormous flexibility.
  • PRO: lvm has the flexibility to change configurations without downtime.
  • CON: ext3 has quite long filesystem check times.
  • ...
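
To illustrate the self-healing point: with zfs, a scrub reads every block, verifies its checksum, and repairs it from redundancy where possible. A sketch, assuming a pool named tank:

    # read and verify every block; silently repair from mirror/raidz copies
    zpool scrub tank

    # show scrub progress and per-device read/write/checksum error counters
    zpool status tank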

transparent compression

Transparent compression is a layer that reduces the amount of data written to/read from the raw disk and thus may increase speed at the cost of CPU power.
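
With zfs, for example, compression is a per-dataset property. A sketch; the dataset name tank/vmdata is an assumption:

    # enable lz4, which is cheap on CPU and often a net performance win
    zfs set compression=lz4 tank/vmdata

    # see how well the stored data actually compresses
    zfs get compressratio tank/vmdata

Only newly written data is compressed; existing blocks stay as they are.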

multi disk technology (technology, RAID level)

There are different multi-disk technologies available: Linux software RAID, LVM, btrfs RAID and zfs RAID. They combine the speed of multiple devices and add redundancy, so device failures can be tolerated without data loss.
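
Two hedged examples of creating a two-disk mirror, one per technology; /dev/sdb and /dev/sdc are placeholder devices:

    # Linux software RAID: RAID 1 across two disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # zfs: a pool with the same mirror layout (plus checksums on top)
    zpool create tank mirror /dev/sdb /dev/sdc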

ssd caching

SSD caching can accelerate slower HDDs, either by putting frequently used data onto the fast SSD as a read cache, or by writing data to the SSD first and syncing it to the slower hard disks in the background. The latter does not sacrifice data safety, because data written to the SSD is already persistent.
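
In zfs these two roles are the cache device (L2ARC, a read cache) and the log device (SLOG, which persistently absorbs synchronous writes before they are flushed to the pool). A sketch, assuming the pool tank and placeholder NVMe partitions:

    # add an SSD partition as read cache (L2ARC)
    zpool add tank cache /dev/nvme0n1p1

    # add an SSD partition as separate intent log (SLOG) for sync writes
    zpool add tank log /dev/nvme0n1p2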

ceph - not an option here

Ceph is a very interesting technology. I'm not considering it here, because the money needed to get it running with good performance is a lot higher than with plain disks and SSDs. You need at least 10 G networking, or even better, which is a lot more costly than 1 G. You need fully equipped SSD storage, which is more expensive too. A big plus with Ceph is that you get redundant network storage, so you can immediately start virtual machines on other nodes if a compute node crashes. If money is no problem and maximum performance is not required, Ceph would be an excellent choice. I have a 3-node cluster with Ceph up and running here. It works like a charm: administration is easy and performance is fine.

In the following posts, I'll introduce my environment and the benchmarking scripts in more detail.
