Full Discussion: disk performance
Operating Systems > Solaris > disk performance: Post 302483866 by DukeNuke2 on Tuesday 28th of December 2010, 05:02:34 PM
"iostat" and/or "zpool iostat" if you are on a zpool. also you can use dtrace if you are on solaris 10.
 

9 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

optimizing disk performance

I have some questions regarding disk performance, and what I can do to make it a little (or much :)) faster. From what I've heard, the first partitions will be faster than the later ones because tracks at the outer edges of a hard drive platter simply move faster. But I've also read in... (4 Replies)
Discussion started by: J.P
4 Replies

2. AIX

disk performance

Hello, I have an AIX 570 system with SAN disks. I ran a write performance test on an LV with four disks. During the test I ran the filemon tool to trace disk activity. The outputs of filemon are at the end of this message. I see my LV (logical volume) throughput at 100 MB per second. 2 of 4 disk... (0 Replies)
Discussion started by: Hugues
0 Replies

3. AIX

AIX 5.2 5.3 disk performance exerciser tool

I'm searching for a disk exerciser / load tool like iometer, iozone, or diskx for IBM AIX 5.2 and 5.3. Because of very bad disk performance on several AIX systems, I need a tool which is able to generate a disk load on my local and SAN disks. Does somebody know of a tool which is... (5 Replies)
Discussion started by: funsje
5 Replies

4. Red Hat

Linux disk performance

I am getting absolutely dreadful iowait stats on my disks when I am trying to install some applications. I have 2 physical disks on which I have created 2 separate logical volume groups and a logical volume in each. I have dumped some stats below. My dual-core CPU is not being over-utilised... (3 Replies)
Discussion started by: jimthompson
3 Replies

5. Red Hat

Disk performance problem on login

Running CentOS 5.5: I've come across a relatively recent problem, where in the last 2 months or so, the root disk goes to 99% utilization for about 20 seconds when a user logs in. This occurs whether a user logs in locally or via ssh. I have tried using lsof to track down the process that is... (5 Replies)
Discussion started by: dangral
5 Replies

6. Solaris

Hard disk write performance very slow

Dear All, I have a hard disk in Solaris on which the write performance is too slow. The CPU and RAM are absolutely fine. What might be the reason? Kindly explain. Rj (9 Replies)
Discussion started by: jegaraman
9 Replies

7. Solaris

Poor Disk performance on ZFS

Hello, we have a machine with Solaris Express 11, two LSI 9211-8i SAS-2 controllers (multipath to disks), a multiport backplane, and 16 Seagate Cheetah 15K RPM disks. Each disk has a sequential performance of 220/230 MB/s, and in fact if I do a dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k... (1 Reply) (a dd sketch follows this list)
Discussion started by: golemico
1 Replies

8. Solaris

Poor disk performance however no sign of failure

Hello guys, I have two servers performing the same disk operations. I believe one server has a disk with an impending failure, however I have no hard evidence to prove it. This is a pair of Netra 210's with 2 drives in a hardware RAID mirror (LSI RAID controller). While performing intensive... (4 Replies)
Discussion started by: s ladd
4 Replies

9. Linux

Disk Performance

I have a freshly installed Oracle Linux 7.1 (akin to RHEL) server. However, after installing some Oracle software, I have noticed that my hard disk light is continually on and the system performance is slow. So I checked out sar and iostat: lab3:/root>iostat Linux... (2 Replies)
Discussion started by: jimthompson
2 Replies
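
The dd raw-device test quoted in discussion 7 above is truncated; a minimal sketch of that kind of sequential test might look like this (count=1024 is an assumed value, <diskID_1> stays a placeholder, and writing to a raw disk destroys its contents):

    # sequential write test: 1 GiB of zeros in 1 MiB blocks to the raw device
    # WARNING: this overwrites the disk; <diskID_1> is a placeholder
    dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k count=1024

    # matching sequential read test back from the same raw device
    dd if=/dev/rdsk/<diskID_1> of=/dev/null bs=1024k count=1024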
IOSTAT(1)                      General Commands Manual                      IOSTAT(1)

NAME
       iostat - report I/O statistics

SYNOPSIS
       iostat [ drives ] [ interval [ count ] ]

DESCRIPTION
       Iostat iteratively reports the number of characters read and written to
       terminals per second, and, for each disk, the number of transfers per
       second, kilobytes transferred per second, and the milliseconds per average
       seek. It also gives the percentage of time the system has spent in user
       mode, in user mode running low priority (niced) processes, in system mode,
       and idling.

       To compute this information, for each disk, seeks and data transfer
       completions and number of words transferred are counted; for terminals
       collectively, the number of input and output characters are counted. Also,
       each sixtieth of a second, the state of each disk is examined and a tally
       is made if the disk is active. From these numbers and given the transfer
       rates of the devices it is possible to determine average seek times for
       each device.

       The optional interval argument causes iostat to report once each interval
       seconds. The first report is for all time since a reboot and each
       subsequent report is for the last interval only. The optional count
       argument restricts the number of reports.

       If more than 4 disk drives are configured in the system, iostat displays
       only the first 4 drives, with priority given to Massbus disk drives (i.e.
       if both Unibus and Massbus drives are present and the total number of
       drives exceeds 4, then some number of Unibus drives will not be displayed
       in favor of the Massbus drives). To force iostat to display specific
       drives, their names may be supplied on the command line.

FILES
       /dev/kmem
       /vmunix

SEE ALSO
       vmstat(1)

4th Berkeley Distribution          April 29, 1985                          IOSTAT(1)
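
A short usage sketch of the interval and count arguments described above (the 5 second interval and the count of 12 are arbitrary example values):

    # single report: totals since boot
    iostat

    # report every 5 seconds; the first report covers the time since boot,
    # each subsequent report covers only the last 5 seconds
    iostat 5

    # as above, but stop after 12 reports
    iostat 5 12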
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.
Privacy Policy