Increase LUN size in AIX with VIOS and HACMP


 
# 1  
Old 03-09-2012
Increase LUN size in AIX with VIOS and HACMP

Hello!

I have this infrastructure:
- 1 POWER7 with single VIOS on Site A.
- 1 POWER6 with single VIOS on Site B.
- 1 LPAR called NodeA as primary node for PowerHA 6.1 on Site A.
- 1 LPAR called NodeB as secondary (cold) node for PowerHA 6.1 on Site B.
- 1 Storage DS4700 on Site A.
- 1 Storage DS4700 on Site B.
- All VIOS versions are 2.2.0.13-FP24 SP-03.
- All AIX versions are 6.1.6.5.
- PowerHA version is 6.1 SP3.
- Data VG is configured as LVM Cross Site, using one disk from each storage.
- Both disks are configured with reserve_policy=no_reserve on all VIOS and LPARs.
- The queue_depth attribute is the same (10) on all VIOS and LPARs for these disks too.

My problem is that I can't increase the size of my data VG.

I increased the LUN size. Then I ran cfgdev on both VIOS and cfgmgr on both LPARs, but chvg -g still refuses to grow the VG:
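
The exact sequence (prompts: $ = padmin on the VIOS, # = root on the LPARs):

On both VIOS:
$ cfgdev

On both LPARs:
# cfgmgr
# chvg -g datavg
0516-1382 chvg: Volume group is not changed. None of the disks in the volume group have grown in size.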

I've tried several ways to do this without luck.

Any suggestions?

Thanks!

Enzote
# 2  
Old 03-10-2012
Are you using a concurrent VG for your cluster?
The system will not let you grow one while the VG is online.
But then you should have gotten a different error....

Are you sure those disks were actually resized?
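
You can check both quickly with something like this (hdiskX is a placeholder for one of your data disks; both size commands report MB):

# lsvg datavg | grep -i concurrent     <- is the VG varied on in concurrent mode?
# getconf DISK_SIZE /dev/hdiskX        <- the size AIX currently sees
# bootinfo -s hdiskX                   <- same check via bootinfo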
# 3  
Old 03-12-2012
I would try exporting and importing the VG to reread the VGDA from the disks.
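
Something along these lines (hdiskX stands for any one PV of the VG; exportvg only removes the VG definition from the ODM, the data on disk is untouched):

# varyoffvg datavg
# exportvg datavg
# importvg -y datavg hdiskX
# varyonvg datavg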
# 4  
Old 03-12-2012
How are the disks mapped?

Quote:
Originally Posted by enzote
(full text of post #1 quoted; see above)
How are the disks mapped to the LPAR? Are they VSCSI or VFCHOST?
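
You can see the mapping type on the VIOS (padmin shell):

$ lsmap -all          <- vSCSI mappings: vhostX adapters, VTDs, backing devices
$ lsmap -all -npiv    <- NPIV mappings: vfchostX adapters, if any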
# 5  
Old 03-12-2012
Map

The disks are in concurrent mode. If I run lspv on one node I get:

# lspv
...
hdisk3 00f6317b414624c2 vg_data concurrent
hdisk4 00f6317b41466700 vg_data concurrent
...

I tried export/import.

The disks are mapped through VSCSI.

I will run a test: unmap the disks, run cfgdev on the VIOS, and then remap them.

Thanks!

Enzote

---------- Post updated at 09:41 AM ---------- Previous update was at 08:12 AM ----------

Hello!

This procedure works:

1.- Stop PowerHA services on both nodes.

2.- On VIOS A (padmin shell; oem_setup_env drops to a root shell):
$ rmdev -dev vtd_data_A                                   <- remove the virtual target devices
$ rmdev -dev vtd_data_B
$ oem_setup_env
# rmdev -dl hdisk1                                        <- delete the backing disks from the ODM
# rmdev -dl hdisk2
# cfgmgr                                                  <- rediscover them at the new size
# chdev -l hdisk1 -a reserve_policy=no_reserve
# chdev -l hdisk2 -a reserve_policy=no_reserve
# exit
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vtd_data_A    <- remap them to the LPAR
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtd_data_B


3.- Repeat step 2 on VIOS B.

4.- On node A:
# rmdev -dl hdisk3
# rmdev -dl hdisk4
# cfgmgr
# chdev -l hdisk3 -a reserve_policy=no_reserve
# chdev -l hdisk4 -a reserve_policy=no_reserve
# chdev -l hdisk3 -a queue_depth=10
# chdev -l hdisk4 -a queue_depth=10
# varyonvg vg_data
0516-1434 varyonvg: Following physical volumes appear to be grown in size.
Run chvg command to activate the new space.
hdisk3 hdisk4
# chvg -g vg_data
# varyoffvg vg_data

5.- Repeat step 4 on node B, skipping chvg -g (the VG has already been grown).
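
To verify the new space, something like this should now show the increased TOTAL PPs:

# lsvg vg_data | grep -i "TOTAL PPs"
# lspv hdisk3 | grep -i "TOTAL PPs"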


Thanks for the support!

Enzote
# 6  
Old 03-12-2012
Exactly

Quote:
Originally Posted by enzote
(full text of post #5 quoted; see above)
This is why I asked whether they were mapped as vSCSI: the resized backing disks needed to be re-mapped.
# 7  
Old 03-12-2012
we have almost the same configuration on some clusters, and resizing LUNs (DS8300 storage) works without problems; I think it even works without cfgmgr on both VIO and LPAR, at least it's not necessary on the VIO server

as gito said, dynamic resizing of hdisks that belong to a concurrent VG in concurrent active or concurrent passive mode is not supported, so you need to bring the resource group down at least once

I could only imagine the problem being a combination of your DS4xxx storage and the drivers on the VIO servers

we have clusters with 80+ concurrent PVs; imagine remapping every single one (we have a script for this, but anyway...). That must be a bug; I would open an IBM ticket in your case
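
For the record, bringing just the resource group offline (instead of stopping cluster services entirely) could look roughly like this; the resource group and node names here are made up, so check the clRGmove flags against your PowerHA 6.1 documentation first:

# /usr/es/sbin/cluster/utilities/clRGmove -g rg_data -n nodeA -d    <- bring the RG offline on nodeA
(resize the LUNs, remap, chvg -g)
# /usr/es/sbin/cluster/utilities/clRGmove -g rg_data -n nodeA -u    <- bring it back online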