06-12-2002
I have to tell you that I don't really think this approach is a great idea. With unix filesystems, it's too hard to keep a file precisely positioned in one spot. Unix wasn't designed to be used that way. But here are your answers...
In the olden days disks had a fixed geometry: the first track and the last track held the same amount of data. As disk manufacturers chased greater data densities, they changed things so that the outer tracks now hold more sectors than the inner tracks. Some disk optimization papers were written in those olden days, and everything they say may no longer apply. This is the problem with exploiting disk geometry... it changes, and suddenly your hack is counterproductive.
But in the olden days, since each sector could be read at equal speed, your primary concern was getting the disk heads to your sector. This is why putting the data in the middle of the disk is a good idea: the heads can never be more than half the disk away, so the mean seek time is as low as you can get it. (With the heads equally likely to be anywhere, the average seek distance to the middle track is a quarter of the full stroke, versus a third for a randomly placed target.) But this assumes that the heads might be anywhere on the disk. If you can guarantee that the heads are positioned over your data, seek time becomes less of an issue. One way to do this is to use only the outer tracks of each disk drive and ignore the inner 90% of the disk.
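If you want to convince yourself of those numbers, here's a quick Monte Carlo sketch. It's purely a model, not a measurement: head and target positions are idealized as points on [0,1], with track positions normalized to the full stroke.

/* Sanity check of the seek-distance claim above: compare the mean
 * distance from a random head position to a mid-disk target versus
 * a randomly placed target.  Positions are normalized to [0,1]. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const int N = 1000000;
    double to_middle = 0.0, to_random = 0.0;
    srand(42);
    for (int i = 0; i < N; i++) {
        double head   = rand() / (double)RAND_MAX;
        double target = rand() / (double)RAND_MAX;
        to_middle += fabs(head - 0.5);     /* data parked mid-disk  */
        to_random += fabs(head - target);  /* data anywhere on disk */
    }
    /* Prints roughly 0.250 and 0.333: mid-disk placement cuts the
     * mean seek distance by about a quarter. */
    printf("mean seek to middle: %.3f\n", to_middle / N);
    printf("mean seek to random: %.3f\n", to_random / N);
    return 0;
}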
If that's not possible, then it will depend on how much data is to be transferred. With large multi-sector transfers, the longer seek time may be compensated for by the faster transfer rate of the outer tracks. The only way to be sure is to try it both ways and benchmark it.
And while you're at it, put the data on the inner tracks and benchmark that. That should be the worst case: longest mean seek time and slowest transfer rate. This will give you a feel for how little benefit you're reaping from a lot of work.
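Here's the sort of benchmark I mean: a minimal sketch (Linux-specific, and /dev/sda is just a placeholder for whichever drive you're testing) that times large sequential reads at the outer, middle, and inner regions of the raw device. On conventional drives LBA 0 maps to the outer edge, so offset zero is the fast zone. Run it as root, against a disk that is otherwise idle.

/* Time large sequential reads at three regions of a raw disk.
 * O_DIRECT bypasses the page cache so you measure the disk, not RAM. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/fs.h>            /* BLKGETSIZE64 */

#define CHUNK  (1 << 20)         /* 1 MiB per read */
#define CHUNKS 256               /* 256 MiB per region */

static double timed_read(int fd, off_t offset)
{
    void *buf;
    struct timespec t0, t1;
    if (posix_memalign(&buf, 4096, CHUNK) != 0)   /* O_DIRECT needs aligned buffers */
        exit(1);
    lseek(fd, offset & ~((off_t)4095), SEEK_SET); /* ...and aligned offsets */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < CHUNKS; i++)
        if (read(fd, buf, CHUNK) != CHUNK) { perror("read"); exit(1); }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    unsigned long long size;
    int fd = open("/dev/sda", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }
    ioctl(fd, BLKGETSIZE64, &size);               /* whole-device size in bytes */

    /* Outer edge, middle, and inner edge of the drive. */
    off_t spot[3] = { 0, (off_t)(size / 2),
                      (off_t)(size - (unsigned long long)CHUNKS * CHUNK) };
    const char *name[3] = { "outer", "middle", "inner" };
    for (int i = 0; i < 3; i++) {
        double s = timed_read(fd, spot[i]);
        printf("%-6s %6.1f MB/s\n", name[i], CHUNKS * (CHUNK / 1e6) / s);
    }
    close(fd);
    return 0;
}

On a typical drive the outer zone should come in noticeably faster than the inner one; if the numbers are close, the geometry games aren't buying you anything.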