Full Discussion: Linux disk performance
Operating Systems > Linux > Red Hat
Post 302341258 by mark54g on Wednesday 5th of August 2009, 11:04 AM
Another consideration is the performance of the disk controller itself. If the controller or its driver is poor, your overall performance will be poor as well. You should also consider which kernel I/O scheduler fits your workload, the speed of the drives themselves, and whether the controller and disks can keep up with your workload type.
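
As an illustration, on Linux the per-device I/O scheduler can be inspected and changed through sysfs. A minimal sketch (sda is a placeholder for your actual device; run as root to change it):

    # Show the available schedulers; the active one appears in brackets
    cat /sys/block/sda/queue/scheduler

    # Switch this device to the deadline scheduler
    echo deadline > /sys/block/sda/queue/scheduler

Whether deadline, cfq, or noop wins depends entirely on the workload, so benchmark before settling on one.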

What file system were you using? What parameters did you mount it with?
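
To show why the mount parameters matter, here is a hedged sketch of checking the current options and remounting with noatime, which drops the metadata write that otherwise accompanies every read (/data, /dev/sdb1, and ext3 are placeholders):

    # See what options the file system is currently mounted with
    mount | grep /data

    # Remount without access-time updates
    mount -o remount,noatime /data

    # Equivalent persistent entry in /etc/fstab:
    # /dev/sdb1  /data  ext3  defaults,noatime  0 2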
 

9 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

optimizing disk performance

I have some questions regarding disk performance and what I can do to make it just a little (or much :)) faster. From what I've heard, the first partitions will be faster than the later ones, because tracks at the outer edge of a hard drive platter simply move faster. But I've also read in... (4 Replies)
Discussion started by: J.P
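
One way to observe the outer-track effect is to time buffered reads from an early and a late partition on the same drive. A rough sketch (device names are placeholders; repeat the runs on an idle system, since single samples vary):

    # Buffered read timing on the first (outer) partition
    hdparm -t /dev/sda1

    # Same test on a later (inner) partition for comparison
    hdparm -t /dev/sda6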

2. AIX

AIX system parameters for disk performance

Can I change any AIX system parameters to speed up disk performance? It is currently slow with write operations. (1 Reply)
Discussion started by: gogogo
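
A reasonable first step on AIX is to look at the extended disk statistics and the current I/O tunables before changing anything. A hedged sketch; tunable names and safe values differ between AIX levels, so treat this only as a starting point:

    # Extended per-disk statistics: 5-second interval, 3 samples
    iostat -D 5 3

    # List the current I/O tunables and their values
    ioo -a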

3. News, Links, Events and Announcements

Announcing collectl - a new Linux performance monitor

About 4 years ago I wrote this tool, inspired by Rob Urban's collect tool for DEC's Tru64 Unix. What makes this tool as different as collect was in its day is its ability to run at low overhead and collect tons of stuff. I've expanded the general concept and even include data not available in... (0 Replies)
Discussion started by: MarkSeger
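
For anyone who wants to point collectl at a disk problem, typical invocations look like the following (a sketch based on the documented -s subsystem flags: lowercase d for a disk summary, uppercase D for per-disk detail):

    # Disk summary at the default interval
    collectl -sd

    # Per-disk detail every 5 seconds
    collectl -sD -i 5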

4. AIX

disk performance

Hello, I have an AIX 570 system with SAN disks. I ran a write performance test on an LV with four disks. During the test I ran the filemon tool to trace disk activity; the filemon output is at the end of this message. I see my LV (logical volume) throughput at 100 MB per second. 2 of 4 disks... (0 Replies)
Discussion started by: Hugues
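
For reference, a filemon run like the one described usually follows the start-trace, run-workload, stop-trace pattern; the report is written when tracing stops (fmon.out is a placeholder file name):

    # Start tracing logical- and physical-volume activity
    filemon -o fmon.out -O lv,pv

    # ... run the write test here ...

    # Stop the trace and flush the report to fmon.out
    trcstop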

5. Red Hat

Disk performance problem on login

Running CentOS 5.5: I've come across a relatively recent problem, where in the last 2 months or so, the root disk goes to 99% utilization for about 20 seconds when a user logs in. This occurs whether a user logs in locally or via ssh. I have tried using lsof to track down the process that is... (5 Replies)
Discussion started by: dangral
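
A hedged way to catch the process behind the burst is to log device and per-process I/O in the background and then trigger a login. Note that pidstat -d needs per-task I/O accounting, which a stock CentOS 5.5 kernel may not provide; iotop is an alternative where supported:

    # Log extended device statistics once per second
    iostat -x 1 > /tmp/iostat.log &

    # Log per-process disk I/O once per second, if supported
    pidstat -d 1 > /tmp/pidstat.log &

    # Log in from another terminal, wait out the ~20-second burst,
    # then stop the loggers and match the timestamps to a PID.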

6. Solaris

disk performance

What tools/utilities do you use to generate metrics on disk I/O throughput on Solaris? For example, if I want to see the I/O rate of random or sequential reads/writes. (2 Replies)
Discussion started by: dangral
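
On Solaris the usual starting point is iostat with extended statistics; a minimal example:

    # Extended statistics every 5 seconds: -x extended, -n descriptive
    # device names, -z suppress lines that are all zeros
    iostat -xnz 5

iostat only reports aggregate rates, so separating random from sequential I/O generally requires driving the pattern yourself with a benchmark tool and watching how the numbers change.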

7. Solaris

Poor Disk performance on ZFS

Hello, we have a machine with Solaris Express 11, two LSI 9211-8i SAS 2 controllers (multipath to disks), a multiport backplane, and 16 Seagate Cheetah 15K RPM disks. Each disk has a sequential performance of 220/230 MB/s, and in fact if I do a dd if=/dev/zero of=/dev/rdsk/<diskID_1> bs=1024k... (1 Reply)
Discussion started by: golemico
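
To compare the raw-device dd numbers with what ZFS actually pushes through the pool, zpool iostat is the usual companion (a sketch; tank is a placeholder pool name):

    # Per-vdev read/write throughput every 5 seconds
    zpool iostat -v tank 5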

8. Solaris

Poor disk performance but no sign of failure

Hello guys, I have two servers performing the same disk operations. I believe one server has an impending disk failure, but I have no hard evidence to prove it. This is a pair of Netra 210s with 2 drives in a hardware RAID mirror (LSI RAID controller). While performing intensive... (4 Replies)
Discussion started by: s ladd
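
Hard evidence of an impending failure often shows up in the per-device error counters before anything else. On Solaris a quick first check is:

    # Soft, hard, and transport error counts for every device
    iostat -En

Rising hard or transport errors on one half of the mirror, with a clean twin, would support the failing-disk theory.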

9. Linux

Disk Performance

I have a freshly installed Oracle Linux 7.1 (akin to RHEL) server. However, after installing some Oracle software, I noticed that my hard disk light is continually on and system performance is slow. So I checked SAR and IOSTAT: lab3:/root>iostat Linux... (2 Replies)
Discussion started by: jimthompson
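
When the disk light is stuck on, the bare iostat summary hides the interesting columns; the extended view is more useful (assuming the sysstat package is installed):

    # Extended statistics: 5-second interval, 3 samples. The first
    # sample averages everything since boot, so read the later ones.
    iostat -x 5 3

A %util near 100 together with a high await indicates a saturated device; per-process tools such as pidstat -d or iotop can then tie the I/O to a specific Oracle process.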
MFI(4)                    BSD Kernel Interfaces Manual                    MFI(4)

NAME
     mfi -- LSI Logic & Dell MegaRAID SAS RAID controller

SYNOPSIS
     mfi* at pci? dev ? function ?

DESCRIPTION
     The mfi driver provides support for the MegaRAID SAS family of RAID
     controllers, including:

           -   Dell PERC 5/e, PERC 5/i, PERC 6/e, PERC 6/i
           -   Intel RAID Controller SRCSAS18E, SRCSAS144E
           -   LSI Logic MegaRAID SAS 8208ELP, MegaRAID SAS 8208XLP,
               MegaRAID SAS 8300XLP, MegaRAID SAS 8308ELP, MegaRAID SAS
               8344ELP, MegaRAID SAS 8408E, MegaRAID SAS 8480E, MegaRAID
               SAS 8708ELP, MegaRAID SAS 8888ELP, MegaRAID SAS 8880EM2,
               MegaRAID SAS 9260-8i
           -   IBM ServeRAID M1015, ServeRAID M5014

     These controllers support RAID 0, RAID 1, RAID 5, RAID 6, RAID 10,
     RAID 50 and RAID 60 using either SAS or SATA II drives.  Although the
     controllers are actual RAID controllers, the driver makes them look
     just like SCSI controllers.  All RAID configuration is done through
     the controllers' BIOSes.

     mfi supports monitoring of the logical disks in the controller
     through the bioctl(8) and envstat(8) commands.

EVENTS
     The mfi driver is able to send events to powerd(8) if a logical drive
     in the controller is not online.  The state-changed event will be
     sent to the /etc/powerd/scripts/sensor_drive script when such a
     condition happens.

SEE ALSO
     intro(4), pci(4), scsi(4), sd(4), bioctl(8), envstat(8), powerd(8)

HISTORY
     The mfi driver first appeared in NetBSD 4.0.

BSD                              March 22, 2012                              BSD