Migrating LUNs to new storage array

# 1  
Old 06-13-2019


I have an AIX server with multiple volumes and I need to move them from a legacy SAN to a new SAN. We are concerned about using mirrorvg because we would have to run on a single HBA path (one HBA on the old SAN and one on the new). Our other option is to use an appliance to do a block migration offline.

I don't know a lot about AIX, but if I wanted to do an offline migration in HP-UX I would do a vgexport and create a map file, then I'd export the VG and remove it. Then I'd shut down the server, remove the old disks, add the new disks and power the server back on. Once it's back up I'd do a vgimport with the map file and I'd be back in business (steps below). Is there a similar process for AIX? I can't find anything like it documented. Can someone help me with my options?

1. Create a map file of the volume group on the source server.

 # vgexport -p -s -m /tmp/<vgname>.mapfile /dev/<vgname>

2. Check that the map file contains the VGID on the first line and the logical volume names on the following lines.

 # cat /tmp/<vgname>.mapfile

 Example:

 # vgexport -p -s -m /tmp/vg00.mapfile /dev/vg00
 vgexport: Volume group "/dev/vg00" is still active.
 vgexport: Preview of vgexport on volume group "/dev/vg00" succeeded.
 # cat /tmp/vg00.mapfile
 VGID 1e69dc32486a8cc1
 1 lvol1
 2 lvol2
 3 lvol3
 4 lvol4
 5 lvol5
 6 lvol6
 7 lvol7
 8 lvol8
 9 test

3. Copy the mapfile to the remote server or to any external storage, and create backups of the mapfile(s).

Remove the LVM volume group from the source host

4. Stop all access to the storage devices, unmount file systems if required, deactivate the volume group, and export it. Alternatively, if the system will be decommissioned, just shut down the server.

 # vgchange -a n /dev/<vgname>
 # vgexport /dev/<vgname>

5. Remove the device special files of the physical volumes that will be moved to a different host.

 # rmsf -H <HW Path>

 or alternatively:

 # rmsf -a /dev/dsk/c#t#d#

6. Remove the disk(s) or unpresent the logical units (LUNs) from the source host. This task is specific to the backing storage subsystem used.

Import the LVM volume group on the target server

7. Add or present the old disk(s) (physical volumes) to the target server. This task is specific to the storage or SAN platform in use.

8. Scan for new hardware and create the new device special files if not already present.

 # ioscan -fnC disk
 # insf -e

9. Create the LVM volume group device special file needed to import the volume group. Note that mknod uses a hexadecimal rather than decimal numbering scheme for the minor number.

 # mkdir /dev/<vgname>
 # mknod /dev/<vgname>/group c 64 0x##0000

10. Import the volume group using the map file. Since the map file contains the VGID, you don't need to specify the new physical volume paths; every disk on the storage subsystem will be scanned and matched against the mapfile VGID. This is the preferred method. Alternatively, you can specify every physical volume path to the vgimport command if they are known beforehand.

 # vgimport -s -m /tmp/<vgname>.mapfile /dev/<vgname>
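For reference, the mapfile check in step 2 can be scripted. This is only a sketch: the mapfile contents are canned inline from the example above, since on a real host you would read the actual /tmp/<vgname>.mapfile instead.

```shell
#!/bin/sh
# Sanity-check a vgexport mapfile: the first line should be the VGID,
# subsequent lines "<number> <lvname>". The contents below are canned
# from the example output above, not read from a real host.
mapfile='VGID 1e69dc32486a8cc1
1 lvol1
2 lvol2
3 lvol3'

first=$(printf '%s\n' "$mapfile" | head -1)
case "$first" in
    VGID*) echo "mapfile looks sane" ;;
    *)     echo "unexpected first line: $first" >&2; exit 1 ;;
esac

# count the logical volumes recorded after the VGID line
lv_count=$(printf '%s\n' "$mapfile" | tail -n +2 | grep -c .)
echo "logical volumes: $lv_count"
```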

Thanks in advance.
# 2  
Old 06-13-2019
Take a peek at this:
IBM Knowledge Center

Sorry, I don't have much time to go into depth on this one right now, but yes, you can export the VG and import it on a new system.

On the old storage:
  1. unmount all FS
  2. varyoffvg <vgname>
  3. exportvg <vgname>
  4. rmdev -dl hdiskX (where the vg resided)
On the new storage, assuming everything is zoned for the LUN:
  5. cfgmgr
  6. lspv to find your new hdisk
  7. importvg -y <vgname> hdiskX
  8. mount the filesystems
  9. change characteristics as needed for the VG and FS
It should be noted that importvg creates the mount points and the entries in /etc/filesystems.
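For anyone who prefers a script, the steps above can be sketched roughly like this. The VG name, disk names and mount point are placeholder assumptions, and DRY_RUN=1 (the default here) just prints each command instead of running it, since these are AIX-only commands.

```shell
#!/bin/sh
# Sketch of the offline export/import above. VG, disk and mount-point
# names are placeholders; adjust for your environment. With DRY_RUN=1
# (the default) each command is printed instead of executed.
DRY_RUN=${DRY_RUN:-1}
VG=datavg        # assumed VG name
OLD_DISK=hdisk2  # assumed disk on the old SAN
NEW_DISK=hdisk3  # assumed disk on the new SAN, found via lspv after cfgmgr

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

# --- on the old SAN ---
run umount /data            # unmount every FS in the VG first
run varyoffvg "$VG"
run exportvg "$VG"
run rmdev -dl "$OLD_DISK"   # remove the old hdisk definition

# --- after zoning/presenting the new LUNs ---
run cfgmgr                  # discover the new disks
run importvg -y "$VG" "$NEW_DISK"
run mount /data             # importvg recreated the /etc/filesystems entries
```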
# 3  
Old 09-13-2019
One thing I would add: before importing the VG, have the storage team tell you what queue_depth you should be using and set it accordingly.

# chdev -l hdisknn -a queue_depth=#

If you've already imported, you can still run the above, but append "-P" to the end. This will make the change in the ODM (CuDv, I believe) and the settings will be applied at the next reboot.
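To apply that across every disk in a VG, something like the following could work. Note the lspv output here is canned sample data (lspv is AIX-only), and the hdisk names, VG name and a queue_depth of 32 are all assumptions; the sketch only prints the chdev commands it would run.

```shell
#!/bin/sh
# Sketch: derive the disks belonging to a VG and print the chdev
# commands that would set the storage team's recommended queue_depth.
QD=32       # assumed recommendation from the storage team
VG=datavg   # assumed VG name

# On a real AIX host you would capture this with:
#   lspv | awk -v vg="$VG" '$3 == vg {print $1}'
sample_lspv='hdisk0  00f6a1b2c3d4e5f6  rootvg  active
hdisk2  00f6a1b2c3d4e5f7  datavg  active
hdisk3  00f6a1b2c3d4e5f8  datavg  active'

disks=$(printf '%s\n' "$sample_lspv" | awk -v vg="$VG" '$3 == vg {print $1}')

for d in $disks; do
    # -P records the change in the ODM for the next reboot if the disk is busy
    echo chdev -l "$d" -a queue_depth="$QD" -P
done
```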
# 4  
Old 09-16-2019
Just a thought (and trying to avoid downtime): can you not add the new LUNs into the VG and then migrate them with the server still live? I've done this very successfully and prevented needless hours of downtime (three weekends' worth).

The team I was helping to do this were very nervous and wanted the belt & braces approach so we:-
  1. Added the new LUNs to the VG
  2. Mirrored the LVs (no sync) and confirmed the layout was correct (not spread to old disks etc.)
  3. Ran a syncvg and left it. If there were any problems, we would drop the new copies, leaving the old intact.
When it had completed, we:-
  1. Removed the mirror from the old/original disk
  2. Removed the original disks from the volume group
Like I say, this was done with the server online and processing. The volume manager deals with this very well, and whilst it takes a long time to synchronise, it's probably no worse than waiting for something else to migrate the data. It is also reversible up to the point you commit by removing the mirror copies on the old disks.
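The approach above could be sketched roughly as follows. The VG name, hdisk2 (old SAN) and hdisk3 (new SAN) are placeholder assumptions, and DRY_RUN=1 (the default here) prints each AIX command rather than running it.

```shell
#!/bin/sh
# Sketch of the online mirror migration described above. datavg, hdisk2
# (old SAN) and hdisk3 (new SAN) are placeholder names. With DRY_RUN=1
# (the default) the commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
VG=datavg
OLD=hdisk2
NEW=hdisk3

run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run extendvg "$VG" "$NEW"      # 1. add the new LUN to the VG
run mirrorvg -s "$VG" "$NEW"   # 2. mirror the LVs; -s defers the sync
run syncvg -v "$VG"            # 3. sync when convenient; reversible until you commit
# verify with: lsvg -l <vgname> (all LVs should show two synced copies)
run unmirrorvg "$VG" "$OLD"    # 4. drop the mirror copies on the old disk
run reducevg "$VG" "$OLD"      # 5. remove the old disk from the VG
```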

Why do you not want to have both SANs connected at the same time? The usual arrangement is to have the SAN connections passed through a switch rather than direct to the AIX host. Is there a reason that you could not just add the new SAN connections to these switches?

I hope this gives you an alternative,
