Disk Performance
Posted by jimthompson on Tuesday 1st of December 2015, 10:28 AM

I have a freshly installed Oracle Linux 7.1 (akin to RHEL) server.

However, after installing some Oracle software, I have noticed that the hard disk light is continually on and system performance is slow.

So I checked sar and iostat:

Code:
lab3:/root>iostat
Linux 3.8.13-55.1.6.el7uek.x86_64 (lab3)        01/12/15        _x86_64_        (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          21.33    0.00    2.66   41.71    0.00   34.30

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             100.94      1928.77       653.97  110966874   37624674
sdb              53.08       929.61      3510.79   53482763  201984646
dm-0            438.70      1351.24       653.67   77740019   37607217
dm-1              0.01         0.02         0.00       1396          0
dm-2              4.93       577.01         0.27   33196938      15409

lab3:/root>sar 5 5
Linux 3.8.13-55.1.6.el7uek.x86_64 (lab3)        01/12/15        _x86_64_        (2 CPU)

15:19:06        CPU     %user     %nice   %system   %iowait    %steal     %idle
15:19:11        all      0.50      0.00      0.40      3.52      0.00     95.58
15:19:16        all      0.50      0.00      0.50      2.21      0.00     96.78
15:19:21        all      0.70      0.00      0.40      1.81      0.00     97.08
15:19:26        all      0.40      0.00      0.40      3.73      0.00     95.46
15:19:31        all      0.50      0.00      0.50     13.29      0.00     85.70
Average:        all      0.52      0.00      0.44      4.91      0.00     94.12

Now, I only have two disks in this server, i.e. /dev/sda and /dev/sdb.

Q1. Why does Linux create dm-0, dm-1 and dm-2 as separate devices? (I guess these are virtual devices created via the device mapper?)
As far as I can tell these are the Oracle Linux /home, the swap device and
the Oracle Linux root (/) - however I don't see a command directly linking dm-0 and dm-1 with the /home and / mount points.
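
From what I have read, the dm-N nodes are created by the kernel's device mapper, which LVM sits on top of. I believe something like the following would show how they map to LVM names and mount points (untested guesses on my part; the ol-home/ol-swap names are just what I'd expect from a default install):

Code:
lab3:/root>ls -l /dev/mapper     # symlinks such as ol-home -> ../dm-0, ol-swap -> ../dm-1
lab3:/root>lsblk                 # tree view: sda/sdb -> LVM volumes -> mount points
lab3:/root>dmsetup ls            # device mapper targets with their major:minor numbers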

Q2. How do you tell whether dm-0, dm-1 and dm-2 are using the sda or the sdb device?
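
Presumably the LVM commands would answer this; something along these lines (again, my guess):

Code:
lab3:/root>pvs                       # which physical disks are LVM physical volumes
lab3:/root>lvs -o +devices           # which physical volume(s) each logical volume lives on
lab3:/root>lsblk /dev/sda /dev/sdb   # per-disk tree showing the dm devices stacked on each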

Q3. I see dm-0 (the Linux /home) is experiencing a high rate of tps (transfers per second?), whereas the sda device (which I believe dm-0 is ultimately on) is experiencing a high amount of data read - is this where my performance problem resides?
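
If it helps, the iostat man page suggests that -x gives await/%util figures that should show which device is actually struggling, and -N resolves the dm-N names, e.g.:

Code:
lab3:/root>iostat -xN 5 3    # extended stats, LVM names resolved, three 5-second samples
# As I read the man page:
#   await - average ms per I/O, including time spent queued; high values = a struggling device
#   %util - percentage of time the device was busy; near 100% means it is saturated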

Q4. Is there a way to tell which mounted file system is performing poorly?
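
Since each logical volume here backs exactly one mount point, I assume per-device stats effectively are per-filesystem stats, and pidstat should show which process is generating the I/O:

Code:
lab3:/root>iostat -N -p sda 5 3   # stats for sda and all its partitions, dm names resolved
lab3:/root>pidstat -d 5 3         # per-process kB_rd/s and kB_wr/s, to find the culprit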

Q5. I increased the swap from 3 GB to 19 GB by adding a swap file. Why is the original 3 GB shown as a swap device, but the additional 16 GB is not shown as a device?

Code:
lab3:/root>swapon
NAME       TYPE      SIZE USED PRIO
/swapfile1 file       16G 6.8G   -1
/dev/dm-1  partition   3G   0B   -2
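
For reference, this is roughly the standard swap-file recipe I followed (from memory, so treat the exact sizes as illustrative):

Code:
lab3:/root>dd if=/dev/zero of=/swapfile1 bs=1M count=16384   # create a 16 GB file
lab3:/root>chmod 600 /swapfile1
lab3:/root>mkswap /swapfile1
lab3:/root>swapon /swapfile1

If I understand correctly, a swap file lives inside an existing filesystem rather than being a block device in its own right, which would explain why swapon lists it as TYPE file while only the original 3 GB LVM partition (/dev/dm-1) appears as a partition, and why no extra dm-N device shows up in iostat.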

Any help greatly appreciated,
Jim

Last edited by Scrutinizer; 12-01-2015 at 12:18 PM. Reason: Code tags
 
