Cannot extend logical volume


 
# 8  
Old 11-23-2016
1. chlv -u 18
2. chlv -x 75000
3. extendlv
4. chfs

The problem is that you have an upper bound of 16 in your LV configuration, which means the LV can occupy at most 16 physical volumes. It seems the disks used by the LV are already full, so you must either free up space on these physical volumes by moving other logical volumes, or change the LV configuration to allow it to span all 18 physical volumes you have in your volume group.
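
A minimal sketch of those four steps, using the names that appear later in this thread (oravol, /oradata) and example sizes that must be adjusted to your environment:

Code:
# 1. allow the LV to span all 18 PVs in the volume group
chlv -u 18 oravol
# 2. raise the maximum number of logical partitions for the LV
chlv -x 75000 oravol
# 3. add logical partitions to the LV (the count 1000 is just an example)
extendlv oravol 1000
# 4. grow the filesystem into the new space (+2000M is just an example)
chfs -a size=+2000M /oradata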
# 9  
Old 11-25-2016
I suggest getting rid of this striping altogether. Striping is a good idea if you have physical disks and want to spread the load over all of them so that the overall response time of the (disk sub-)system gets better. It makes absolutely no sense at all with SAN disks (from the names of the hdisk devices I suppose you have EMC storage).

To tell you the bad news up front: you will need a downtime to do this, because it means deleting and recreating the LV. Still, it is a good idea, because further administration will be way easier once you have done it.
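
A rough outline of such a rebuild, assuming the data in /oradata has been backed up and can be restored, and using the names from this thread (oravol, datavg, /oradata); a sketch, not a tested procedure, and the LP count is hypothetical:

Code:
# back up /oradata first, then remove the filesystem together with its LV
umount /oradata
rmfs -r /oradata
# recreate the LV without striping; -e x spreads the LPs over all PVs instead
mklv -y oravol -t jfs2 -e x datavg 64000
# recreate the filesystem on the new LV and restore the data
crfs -v jfs2 -d oravol -m /oradata -A yes
mount /oradata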

I hope this helps.

bakunin
# 10  
Old 11-26-2016
Indeed, as a rule the combination of LVM striping with high-end SAN storage (double or triple striping) is to be avoided, both for reasons of simplicity and of performance.

Remarkably perhaps, I have come across situations where LVM striping actually did make a serious performance difference (an improvement) with high-end SAN storage and sequential read IO, but only with a narrow width (say 4-8) at small stripe sizes (128 KiB-256 KiB).

This was because there was a front-end bottleneck and at the same time it was difficult to increase IO queue sizes, which is extra important because of the serial nature of Fibre Channel SANs (the bottleneck could even be observed when the cache-hit ratio at the front-end storage level was 100%).

By using narrow striping it was possible to increase the effective IO queue size while at the same time not confusing the prefetch algorithms of the SAN storage. The LUNs in the narrow LVM stripe had to come from different physical disk sets at the back-end SAN storage level (in the case of a storage array architecture where this makes a difference). At the SAN storage this translated into a nice, even spread of back-end usage, without hot spots.
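
To illustrate the queue-size angle on AIX: the per-disk queue depth can be inspected and raised along these lines (a sketch; hdiskpower5 and the value 32 are examples only, and whether the attribute sits on the PowerPath device or on the underlying hdisks depends on the setup):

Code:
# show the current queue depth of one disk
lsattr -El hdiskpower5 -a queue_depth
# raise it in the ODM only (-P); takes effect after a reboot
chdev -l hdiskpower5 -a queue_depth=32 -P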

Situations where this mattered were databases with quite a bit of sequential read IO. It happened with Oracle databases that were never fully optimized because the standard query specifications kept changing, which in my experience is a situation that occurs often. Another case is when, out of necessity, reports or other batch jobs need to run during on-line usage.
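
For reference, a narrow stripe of that kind could be created on AIX along these lines (a sketch; the LV name, LP count and the four example disks are hypothetical):

Code:
# 4-way striped LV with a 256 KiB strip size across four LUNs
mklv -y seqlv -t jfs2 -S 256K datavg 2000 hdisk4 hdisk5 hdisk6 hdisk7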

Conversely, I have come across a situation where a large stripe size (4 MiB) was used with a large stripe width (16), and that really confused the storage, thwarting the prefetch algorithms: all IO was done in small sizes, bringing sequential read IO to a crawl while the storage processors were working overtime.

So, as usual in performance tuning: "it depends..."

# 11  
Old 11-28-2016
Quote:
Originally Posted by -=XrAy=-
Yes!
You need enough space on all your 16 devices. lslv -l oravol will show you the affected devices.
Thanks for your help.

The output of lslv -l oravol:
Code:
PV                COPIES        IN BAND       DISTRIBUTION
hdiskpower32      3999:000:000  20%           819:819:819:819:723
hdiskpower13      3999:000:000  20%           819:819:819:819:723
hdiskpower31      3999:000:000  20%           819:819:819:819:723
hdiskpower12      3999:000:000  20%           819:819:819:819:723
hdiskpower30      3999:000:000  20%           819:819:819:819:723
hdiskpower11      3999:000:000  20%           819:819:819:819:723
hdiskpower29      3999:000:000  20%           819:819:819:819:723
hdiskpower10      3999:000:000  20%           819:819:819:819:723
hdiskpower26      3999:000:000  20%           800:800:799:800:800
hdiskpower8       3999:000:000  20%           800:800:799:800:800
hdiskpower23      3999:000:000  20%           800:800:799:800:800
hdiskpower7       3999:000:000  20%           800:800:799:800:800
hdiskpower20      3999:000:000  20%           800:800:799:800:800
hdiskpower6       3999:000:000  20%           800:800:799:800:800
hdiskpower17      3999:000:000  20%           800:800:799:800:800
hdiskpower5       3999:000:000  20%           800:800:799:800:800

---------- Post updated at 04:09 PM ---------- Previous update was at 03:32 PM ----------

Quote:
Originally Posted by rbatte1
Are these local disks or hardware protected in some way? (SAN provided, RAID device etc.)

The reason I ask is that you have a single copy of each PP. On real hardware you might lose the LV if any disk fails. If it is hardware protected, then you might be causing yourself an IO overhead by striping. I know it sounds counter-intuitive, but I've seen issues where spreading IO according to how the OS sees it can cause contention on the real disks when a SAN also spreads the IO. Bizarrely we improved IO when we tried to create hot-spots as the OS saw it because the SAN then really did spread the heavy IO properly.

Can you explain a little more about what hardware you have in play?


Thanks,
Robin
Dear Robin,

Thanks for your support.

It is using SAN disks; each LUN is around 512 GB and they are separated into 8 individual RAID groups (RAID 5) on an EMC VNX storage box. The engineer who set it up has left, so we don't have enough information at the OS level.

Thanks.

---------- Post updated at 04:11 PM ---------- Previous update was at 04:09 PM ----------

Quote:
Originally Posted by vbe
we don't have an
Code:
lsvg -l

either, to understand how it is organised (mirror between two bays? etc...). And I may be wrong, as I now use only mirror pools.

It looks like you are mirrored, and yes, you added new disks, but are they one in each mirror copy etc...?

If you are striped with a strict policy you are stuck... you will have to add as many disks as the striping policy requires. One way to see whether that is true would be to run a reorgvg datavg: if the striping is not strict, it will move blocks to unused disks and free the ones that are completely full. Beware: if this has never been done before, running that command can take quite some time (hours...).
If it worked, you can try
Code:
 chfs -a size=+2000M /oradata

and see if that works... if so, you are a happy guy...
Dear VBE,

Thanks for your advices.

I found some information on the internet saying that reorgvg is needed, but I have never used it before, so I don't know the risk and impact.

We just want to know whether reorgvg is the only way in our situation. Thanks.
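
For reference, reorgvg can also be restricted to a single logical volume, which may keep the run shorter than reorganising the whole volume group (a sketch using the names from this thread):

Code:
# reorganise only oravol instead of the whole volume group
reorgvg datavg oravol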

---------- Post updated at 04:15 PM ---------- Previous update was at 04:11 PM ----------

Quote:
Originally Posted by agent.kgb
1. chlv -u 18
2. chlv -x 75000
3. extendlv
4. chfs

The problem is that you have an upper bound of 16 in your LV configuration, which means the LV can occupy at most 16 physical volumes. It seems the disks used by the LV are already full, so you must either free up space on these physical volumes by moving other logical volumes, or change the LV configuration to allow it to span all 18 physical volumes you have in your volume group.
Dear agent.kgb,

We did change the upper bound to 18, but it failed. The error message seems to say that it needs to be a multiple of the stripe width.

Code:
# chlv -u 18 oravol
0516-1441 chlv: Striped logical volume upperbound can only be an even multiple of the striping width.
0516-704 chlv: Unable to change logical volume oravol.
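
For what it's worth, the message means the upper bound has to stay a multiple of the stripe width: if the stripe width here is 16, the next value chlv would accept above 16 is 32 (a hypothetical example, and it only helps if the VG can actually supply free space in full stripe-width sets):

Code:
# assuming a stripe width of 16, 18 is rejected but 32 would be accepted
chlv -u 32 oravol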

---------- Post updated at 06:16 PM ---------- Previous update was at 04:15 PM ----------

Quote:
Originally Posted by bakunin
I suggest getting rid of this striping altogether. Striping is a good idea if you have physical disks and want to spread the load over all of them so that the overall response time of the (disk sub-)system gets better. It makes absolutely no sense at all with SAN disks (from the names of the hdisk devices I suppose you have EMC storage).

To tell you the bad news up front: you will need a downtime to do this, because it means deleting and recreating the LV. Still, it is a good idea, because further administration will be way easier once you have done it.

I hope this helps.

bakunin
Dear Bakunin,

Thanks for your support.

Yes, we are using EMC VNX as storage box.
As you said, when using a SAN, there is no improvement from using striping?

My understanding is that striping is used to pool the I/O (maybe when not using a SAN), letting more disks work on the I/O so it gets re-balanced. But once a SAN is used, the storage pool balances the I/O across all its disks anyway. Am I right?

Thanks.

# 12  
Old 11-28-2016
Quote:
Originally Posted by lckdanny
As you said, when using a SAN, there is no improvement from using striping?
Basically, and in most cases: yes. There are some notable exceptions to this rule (see Scrutinizer's post #10 for such exceptions), but in general: what you try to achieve with disk striping, a modern SAN box already does itself internally. There is no sense in doing it twice. If you are particularly unlucky (well, I agree, this is more a theoretical possibility) your own striping and the striping of the SAN box will overlay and create a Moiré-like effect that de-stripes your disk access.

I once wrote a lengthy article about performance tuning, which I suggest you read. Maybe it answers a few questions you might have.

The VNX is a small platform and I haven't worked with it, but I suppose its front end is not all that sophisticated. Therefore it might be worthwhile to examine other aspects of disk access as well if the need for performance tuning arises: queue sizes, the distribution of block sizes in your typical load, data hotspots (maybe suggesting multi-tiered disk architectures with SATA disks on one end and FC disks or even SSDs on the other), or some other measures.
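
As a starting point for that kind of analysis, AIX ships tools that show per-disk service times and queue behaviour (a sketch; hdiskpower5 and fcs0 are example device names):

Code:
# extended per-disk statistics, including queue-full counts
iostat -D hdiskpower5 5 3
# Fibre Channel adapter statistics
fcstat fcs0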

As Scrutinizer said so rightly: in performance tuning it always depends, and one size never fits all.

I hope this helps.

bakunin