Full Discussion: optimizing disk performance
Post 22907 by Perderabo in Filesystems, Disks and Memory, June 12, 2002
I have to tell you that I don't really think this approach is a great idea. With unix filesystems, it's too hard to keep a file precisely positioned in one spot. Unix wasn't meant to be used that way. But here are your answers...

In the olden days, disks had a fixed geometry: the first track and the last track held the same amount of data. As disk manufacturers chased after greater data densities, they changed things so that the outer tracks now hold more sectors than the inner tracks (zone bit recording). Some disk optimization papers were written in those olden days, and everything they say may no longer apply. This is the problem with exploiting disk geometry... it changes, and suddenly your hack is counterproductive.

But in the olden days, since each sector could be read with equal speed, your primary concern was getting the disk heads to your sector. This is why putting the data in the middle of the disk is a good idea: the heads can never be more than half the disk away, so the mean seek time is as low as you can get it. But this assumes that the heads might be anywhere on the disk. If you can guarantee that the heads are already positioned over your data, seek time becomes less of an issue. One way to do this is to use only the outer tracks of each disk drive and ignore the inner 90% of the disk.
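
To put a rough number on that (a back-of-the-envelope model of my own, not from any spec sheet): normalize head travel to the interval [0, 1] and assume the head's current position U is uniformly random. The expected seek distance to data at position x is

    E|U - x| = x^2/2 + (1 - x)^2/2 = x^2 - x + 1/2

which is minimized at x = 1/2, giving an average of 1/4 of a full stroke, versus 1/2 of a full stroke for data parked at either edge. Mid-disk placement therefore roughly halves the average seek distance. Real seek time is not linear in distance, but the ordering holds. And if you confine both the heads and the data to the outer few percent of the disk, every seek is bounded by that narrow band.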

If that's not possible, then it will depend on how much data is to be transferred. With large multi-sector transfers, the longer seek time may be compensated for by the faster transfer rate of the outer tracks. The only way to be sure is to try it both ways and benchmark it.
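
Here is a rough sketch in C of that comparison. The device path and the offsets are placeholders I made up for illustration; run it as root against an idle disk, and remember that the OS page cache can mask real disk behavior unless you go through the raw device.

/* seekbench.c -- time large sequential reads at two disk offsets.
 * All names and offsets here are illustrative, not from the original post. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)     /* read 1 MB at a time */
#define COUNT 64                /* 64 MB per measurement */

/* Read COUNT chunks starting at the given byte offset; return seconds taken. */
static double bench(const char *dev, off_t offset)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror(dev); exit(1); }

    char *buf = malloc(CHUNK);
    struct timespec t0, t1;

    lseek(fd, offset, SEEK_SET);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < COUNT; i++)
        if (read(fd, buf, CHUNK) != CHUNK) { perror("read"); exit(1); }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(buf);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const char *dev = "/dev/sda";   /* hypothetical device: use your own */
    double outer = bench(dev, 0);   /* low block numbers: usually the outer tracks */
    double inner = bench(dev, 100LL * 1024 * 1024 * 1024); /* 100 GB in: toward the inner tracks */

    printf("outer: %.2f s (%.1f MB/s)\n", outer, COUNT / outer);
    printf("inner: %.2f s (%.1f MB/s)\n", inner, COUNT / inner);
    return 0;
}

Compile with something like cc -O2 -D_FILE_OFFSET_BITS=64 seekbench.c and run each placement several times. Pointing the second offset near the end of the device gives you the inner-track worst case described below. Note that on most modern drives low block numbers map to the outer (faster) tracks, but that is convention, not a guarantee.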

And while you're at it, put the data on the inner tracks and benchmark that. That should be the worst case: the longest mean seek time and the slowest transfer rate. This will give you a feel for how little benefit you're reaping from a lot of work.
 
