Full Discussion: Disk I/O Issue
Operating Systems > AIX, post 302312739 by shockneck, Sunday 3 May 2009, 04:43 AM
Quote:
Originally Posted by ukatru
We have a filesystem which spans 8 hard disks, but I am facing a disk I/O issue because data is not spreading across all the disks. Is there any way I can check how the data is spread, and is there any parameter we need to change to spread data across all disks?

OS--AIX 5.3
Spreading data across several hdisk devices (up to 1024, depending on the VG type) can be done by creating Logical Volumes (LV) with an inter-disk allocation policy ("inter policy") of maximum; check the current setting with lslv <yourlv>. The LV's upper bound also needs to match the number of disks you want the LV to use.
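For a quick check of how an existing LV is spread right now you could use something like the following (lv01 and datavg are hypothetical names, substitute your own LV and VG):
# lslv lv01
shows the LV's INTER-POLICY and UPPER BOUND,
# lslv -l lv01
lists the hdisks the LV's physical partitions actually sit on,
# lsvg -p datavg
shows the total and free PPs per hdisk in the VG, and
# iostat -d 2 5
reports per-hdisk utilisation while the workload is running.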

In case your LV was initially set up with an inter policy of minimum you can change the LV settings with
# chlv -ex <yourlv>
followed by a reorganisation of the LV or Volume Group (VG).
# reorgvg <yourvg> <yourlv>
You need free Physical Partitions (PP) in your VG to be able to reorg, though. Mind that reorganising LVs/VGs that are mirrored over several storage systems might lead to copies being spread over those storage boxes in a way that makes the LVM mirror less reliable.
(In case you don't want to use the command line you can use the AIX System Management Interface Tool (SMIT) with the command as a fastpath, e.g.
# smitty chlv
to access the chlv menu.)
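Put together, a minimal reorganisation sequence might look like this (again assuming the hypothetical names datavg and lv01, and that the LV should be spread over 8 disks):
# lsvg datavg
to confirm there are FREE PPs available,
# chlv -u 8 -e x lv01
to raise the upper bound to 8 hdisks and set the inter policy to maximum, and
# reorgvg datavg lv01
to let LVM redistribute the existing PPs. Run the reorg in a quiet window if you can, as it generates quite a bit of disk I/O itself.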

Two more hints on using LVs sensibly:
- AIX uses its LVs in the order in which they were created, i.e. it fills the filesystems created in these LVs in that order. Therefore it can make sense to create different LVs on different disks. You can control where an LV is placed with the mklv command: creating a 4 PP LV on hdisk3 hdisk4 hdisk2 hdisk1 would make AIX use the LV/FS in that disk order. Creating LVs without telling LVM where to place them leads to all LVs (with an inter policy of maximum) being created in the order hdisk1 hdisk2 hdisk3 hdisk4. Such a design leaves hdisk1 more utilised than the other hdisks, i.e. you have created a hot spot that way (see the sketch after these hints).
- It makes sense not to place LVs with high I/O on the same physical disks, e.g. keep logging LVs (jfslog/jfs2log) on different physical disks than the busiest data LVs. Use migratepv to move an LV from one disk to another (within the same VG):
# migratepv -l <yourlv> <fromdisk> <todisk>
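As a sketch of both hints (hypothetical names again: a new 4 PP LV lv02 in datavg, and moving lv01 off a busy disk):
# mklv -y lv02 -e x -u 4 datavg 4 hdisk3 hdisk4 hdisk2 hdisk1
creates lv02 restricted to the four named disks, listed in the order described above, while
# migratepv -l lv01 hdisk1 hdisk5
moves all PPs of lv01 from hdisk1 to hdisk5 within the same VG. migratepv can normally run while the filesystem stays mounted, but it copies every PP, so expect additional I/O while it runs.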

Mind that spreading an LV over several disks was originally meant to spread data over physical disks. In case you use SAN LUNs, those virtual disks may already be spread over several real disks inside the storage box, which can make the inter policy superfluous.
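If you are not sure whether your hdisks are local drives or SAN LUNs, the device listing usually tells you (output details vary with the storage drivers installed):
# lsdev -Cc disk
lists all disk devices with their type and location, and
# lscfg -vl hdisk0
shows vendor and model information for a single hdisk (hdisk0 here is just an example).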

To push LVM performance to the maximum you would then use striped LVs (stripe sets) on several disks, but striping can only be set up when the LV is initially created, is more difficult to maintain and extend, and is therefore a different kettle of fish.
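Just to illustrate what that would look like, a minimal sketch with hypothetical names (an 8 LP LV striped over four disks with a 64 KB stripe size, plus a JFS2 filesystem on top):
# mklv -y lvstripe -S 64K datavg 8 hdisk1 hdisk2 hdisk3 hdisk4
# crfs -v jfs2 -d lvstripe -m /stripedfs -A yes
# mount /stripedfs
As said above, the stripe layout is fixed at mklv time, so plan the number of disks and the stripe size before creating the filesystem.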
 
