I have an AIX server with multiple volumes and I need to move them from a legacy SAN to a new SAN. We are concerned because, to use mirrorvg, we would have to run on a single HBA (one on the old SAN and one on the new). Our other option is to use an appliance to do an offline block migration.
I don't know a lot about AIX, but if I wanted to do an offline migration in HP-UX I would do a vgexport and create a map file, export the VG, and then remove it. Then I'd shut down the server, remove the old disks, add the new disks, and power the server back on. Once it's up I'd do a vgimport with the map file and I'd be back in business (steps below). Is there a similar process for AIX? I can't find anything like it documented. Can someone help me with my options?
1. Create a map file of the volume group on the source server.
2. Copy the map file to the remote server or to external storage, and create backups of the map file(s).
3. Stop all access to the storage devices, unmount file systems if required, deactivate the volume group, and export it. Alternatively, if the system will be decommissioned, just shut down the server.
4. Remove the device special files of the physical volumes that will be moved to a different host.
5. Remove the disk(s), or unpresent the logical units (LUNs), from the source host. This task is specific to the backing storage subsystem in use.
6. Add or present the old disk(s) (physical volumes) to the target server. This task is specific to the storage or SAN platform in use.
7. Scan for new hardware and create the new special device files if they are not already present.
8. Create the LVM volume group special device file to import the volume group. Note that mknod uses a hexadecimal rather than decimal numbering scheme for the minor number.
9. Import the volume group using the map file. Since the map file contains the VGID, you don't need to specify the new physical volume paths; every disk on the storage subsystem will be scanned and matched against the map file's VGID. This is the preferred method. Alternatively, you can specify every physical volume path to the vgimport command if these are known beforehand.
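As a rough illustration, the HP-UX steps above map onto LVM commands something like this (the volume group name, device paths, and minor number are examples only, not taken from the original post):

```shell
# --- On the source host ---
# Step 1: create a map file of the volume group
vgexport -p -v -s -m /tmp/vg01.map /dev/vg01

# Step 2: copy the map file somewhere safe
scp /tmp/vg01.map target-host:/tmp/

# Step 3: stop access, deactivate, then export the VG
umount /mnt/data                           # unmount file systems in the VG
vgchange -a n /dev/vg01                    # deactivate the VG
vgexport -v -m /tmp/vg01.map /dev/vg01     # remove it from the LVM config

# --- On the target host, after the LUNs are presented ---
# Step 7: scan for the new disks and create device files
ioscan -fnC disk
insf -e

# Step 8: create the VG device file; the minor number is hexadecimal
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000

# Step 9: import using the map file (-s scans disks for the VGID)
vgimport -v -s -m /tmp/vg01.map /dev/vg01
vgchange -a y /dev/vg01
```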
Thanks in advance.
Moderator's Comments:
Our eyes and brains provide better responses when code tags are used.
Please enumerate the steps you took, in sequence.
Last edited by Peasant; 09-13-2019 at 02:12 PM..
Reason: Everything.
One thing I would add: before importing the VG, have the storage team tell you what queue_depth you should be using, and set it accordingly.
If you've already imported, you can still run the above, but append "-P" to the end. This will make the change in the ODM (CuDv, I believe) and the settings will be applied next reboot.
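For example (the hdisk names and the queue_depth value are illustrative; use whatever the storage team recommends):

```shell
# Check the current queue depth on a new LUN
lsattr -El hdisk4 -a queue_depth

# Set it before importing the VG (device must not be in use)...
chdev -l hdisk4 -a queue_depth=32

# ...or, if the disk is already in use, append -P so the change
# is written to the ODM and applied at the next reboot
chdev -l hdisk4 -a queue_depth=32 -P
```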
Just a thought (and trying to avoid downtime): can you not add the new LUNs into the VG and then migrate them with the server still live? I've done this very successfully and prevented needless hours of downtime (three weekends' worth).
The team I was helping to do this were very nervous and wanted the belt & braces approach so we:-
Added the new LUNs to the VG
Mirrored the LVs (no sync) and confirmed the layout was correct (not spread to old disks etc.)
Ran a syncvg and left it. Any problems, we would drop the new copies, leaving the old intact.
When it had completed, we:-
Removed the mirror from the old/original disk
Removed the original disks from the volume group
Like I say, this was done with the server online and processing. The volume manager deals with this very well, and whilst it takes a long time to synchronise, it's probably no worse than waiting for something else to migrate the data. It is also reversible up to the point you commit by removing the mirror copies on the old disks.
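Assuming the VG is called datavg, the old LUNs are hdisk2/hdisk3 and the new ones are hdisk4/hdisk5 (all names illustrative), the sequence above looks roughly like:

```shell
# Add the new LUNs to the volume group
extendvg datavg hdisk4 hdisk5

# Mirror onto the new disks without syncing yet (-s), so the
# layout can be checked before any data is copied
mirrorvg -s datavg hdisk4 hdisk5

# Confirm each LV's copies landed only on the new disks
lslv -m lv_example    # repeat per logical volume

# Synchronise the stale copies (this is the long-running part)
syncvg -v datavg

# Once the copies are clean, drop the mirrors on the old disks
# and remove the old disks from the VG
unmirrorvg datavg hdisk2 hdisk3
reducevg datavg hdisk2 hdisk3
```

Up to the unmirrorvg step this is reversible: dropping the copies on hdisk4/hdisk5 instead puts you back where you started.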
Why do you not want to have both SANs connected at the same time? The usual arrangement is to have the SAN connections passed through a switch rather than direct to the AIX host. Is there a reason that you could not just add the new SAN connections to these switches?