Server migration

# 1  
Old 03-12-2014


We have several existing p5 servers with LPARs (AIX 5.3), also running HACMP.
We are now planning to buy a new set of servers (to be installed with AIX 7.1) and a SAN to replace the existing ones.

My question is: how do we perform the data migration from the old servers/SAN to the new servers/SAN?

I suppose we install AIX 7.1, PowerHA 7, ITM, TSM and Oracle on the new servers first, right? But then how do we migrate the data?

# 2  
Old 03-12-2014
Are you buying a new SAN as well? You may want to look into LPM (Live Partition Mobility). What does IBM recommend? If you are buying POWER8, IBM may assist you in this effort at no extra cost. Check with them first.
# 3  
Old 03-12-2014
Originally Posted by Oceanlo2013
Existing several p5 server with lpar (aix5.3), also implemented with hacmp.
And now planning to buy new set of server (installing aix7.1)and SAN to replace the existing server.
First off: before you start using new pieces of hardware, plan thoroughly. Update all firmware/microcode to the absolute latest level, whether it seems needed or not - you never know when you will get another downtime window once the systems are productive. The following is just a rough sketch: read it, fill in the details, go over it again, test it on a play system, test it again, review again, ... and only then file a change and do the real takeover. This may sound chicken, but it will make sure you are home early.

Changing your SAN hardware and your server hardware should be two different things. Let us first talk about moving the LPARs to new hardware:

LPAR moves

You cannot do it without any interruption, but you can minimise the impact pretty much - in fact you can get down to one HACMP takeover for the whole operation. This will be the only visible effect for your users.

HACMP communication is done via the RSCT (Reliable Scalable Cluster Technology) daemons. As long as these daemons are at the same version, HACMP will work even if the underlying OS versions differ. Keep this in mind for the following procedure:

1) Split the cluster (shut down the cluster software on the standby node) and update RSCT on both nodes. Because the cluster is down, no communication takes place and the active node will not notice that the standby node is on a different level, as long as you do not restart the cluster software.

2) Once both nodes are updated RSCT-wise, restart and resync the cluster. This step is optional, but I am of the paranoid sort, therefore I did it.

3) Move the passive node over to the new hardware after you have split the cluster again. Whether you do it via LPM or not doesn't matter, as the node is not participating in the cluster anyway. You also do the OS update/upgrade now, either on the new or the old hardware. (I assume here that the boot disk for the LPAR is provided by the VIOS, and that both VIOS pairs, in the new and the old hardware, can serve it up to the LPAR.)

4) Restart the cluster, synchronize (this will work because RSCT is on the same level even if the rest of the OS is not) and do a takeover. You will end up with an active node on the new hardware, already updated to the target OS level. Notice that I also presuppose here the availability of the data disk(s) on both the new and the old hardware. Take your time with the zoning for this, it can be pretty tricky!

5) Shut down the now-passive (formerly active) node, do the OS update and the LPAR move, like you did before with the other LPAR.

6) Restart the cluster and synchronize. Your resource groups now run on the "wrong" node (the one that was passive when you started); if you want to move them back, you will have to do another takeover. This will also be noticeable to the users, but it is optional. You are now running completely on new hardware and new OS levels.
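The sequence above can be sketched as a shell script. This is a hedged, dry-run sketch only: every command is echoed rather than executed, the PowerHA utility names and flags (clstop, clstart, clRGmove) vary between HACMP/PowerHA versions, and myRG, nodeB and /mnt/updates are placeholder names I made up for illustration - verify each command against your own version's documentation before running anything for real.

```shell
#!/bin/sh
# Dry-run sketch of the rolling HACMP migration above.  Nothing is
# executed: run() only prints the command it is given.  On a real
# system you would swap the echo for "$@" -- after checking every
# command against your PowerHA version's documentation.
run() { echo "+ $*"; }

CLU=/usr/es/sbin/cluster/utilities   # typical PowerHA utility path

# 1) stop cluster services on the standby node and update RSCT there;
#    repeat on the active node while the cluster stays split
run $CLU/clstop -y -N                   # stop flags are version-dependent
run installp -acgXd /mnt/updates rsct   # /mnt/updates is a placeholder
run lslpp -l rsct.core.utils            # confirm both nodes show one level

# 2) restart cluster services and re-synchronise (optional but safe)
run $CLU/clstart

# 4) after the LPAR move and OS upgrade: sync, then take over
run $CLU/clRGmove -g myRG -n nodeB -m   # myRG/nodeB are placeholders
```

The run() wrapper is just a safety net for rehearsing the change script; keeping the dry-run output in the change record also gives you something to review before the real takeover.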

SAN takeover

The first thing I'd like to recommend is: take your time with the zoning and the layout of your new SAN. Create sound naming conventions, zoning conventions, conventions for everything you can think of. Review these over and over again - you will get no second chance to do it right the first time, and changing conventions once you have some 100 LUNs given out is nigh impossible.

Also take care to support Live Partition Mobility! What makes P7 systems so sexy is that LPARs are (or can be) completely independent from the hardware. If you do it right, you can move your LPARs around from one managed system to the next without the users even noticing: you can free one managed system, shut it down, update it, bring it back online and move the LPARs back. Still, the SAN and the zoning have to support that, so think twice and thrice about what you do and whether it leaves you all these freedoms.

Once you are done SAN takeover is quite easy:

1) Create the new LUNs and map them to the LPAR.

2) If you have enhanced concurrent capable (ECC) VGs this is easy, but some older SANs do not allow for concurrent mirrors (our old EMC CLARiiON, for instance). Bring the PVs to the active node in that case.

3) Before the change: bring the new LUNs online, then do an "extendvg" and a "mirrorvg" to the new disk(s). You can do this during online time; it will not slow down the system all too much. For maximum performance, switch off the immediate synchronisation (use "-s" with "mirrorvg") and do a "syncvg" afterwards in the background with the "-P" parameter set, i.e.:

extendvg myVG newPV
mirrorvg -s myVG newPV
syncvg -P <somenumber> -v myVG &

Depending on the PP size, I used numbers between 5 and 16, which worked well.

4) If you have to promote the VG to ECC you will need some downtime. After syncing, shut down the cluster on both nodes, do a learning import on the passive node, remove the old disk from the VG ("unmirrorvg myVG oldPV"), change the VG to ECC and do a learning import on the active side.

5) Note that you probably have to build new disks for the disk heartbeat too. I suggest moving them into a VG after discovering and configuring the heartbeat network, just to make them unavailable for erroneous use otherwise.
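The disk-side steps can likewise be sketched as shell. Again a dry-run sketch under stated assumptions: run() only echoes and executes nothing; myVG, newPV, oldPV and hdisk4 are placeholder names; and the exact behaviour of "chvg -C" and "importvg -L" should be confirmed against your AIX level before the real change.

```shell
#!/bin/sh
# Dry-run sketch of the SAN takeover steps above: run() only prints
# the command, it executes nothing.  All VG/PV names are placeholders.
run() { echo "+ $*"; }

# 3) mirror the VG onto the new LUN; defer the sync for performance
run extendvg myVG newPV
run mirrorvg -s myVG newPV      # -s: skip the immediate synchronisation
run syncvg -P 8 -v myVG         # sync 8 partitions in parallel (put in background with & in practice)

# 4) promotion to ECC -- cluster down on both nodes for this part
run unmirrorvg myVG oldPV       # drop the old-SAN copy
run reducevg myVG oldPV         # remove the old PV from the VG
run chvg -C myVG                # make the VG enhanced concurrent capable
run importvg -L myVG hdisk4     # learning import on the other node
```

Rehearsing the sequence this way on a sandbox LPAR first, as recommended above, is cheap insurance before touching the productive VGs.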

I hope this helps.


//PS: been there already: Takeover-Howto

Another point: if you use PowerHA 7.1 you have to use this idiotic "cluster repository disk", which was about as necessary as catching the flu: it serves absolutely no purpose, but may cause problems if you have the wrong SAN storage. Our EMC VMAX arrays, virtualized via EMC VPLEXes, for instance, are not supported. If we ever have to go to 7.1 (which we try to avoid - it's a long time until September, y'know) we will therefore have to serve the repository disk via VIOS as iSCSI.


Last edited by bakunin; 03-12-2014 at 02:32 PM..
# 4  
Old 03-13-2014
Hi bakunin,

Thank you for your detailed explanation. It is good migration experience to learn from.

In our case, we are planning to buy a whole new set of servers and peripherals (p7 servers, IBM SAN switches, IBM SAN, tape library). If we cannot connect the new servers to the old SAN, is it possible to install AIX 7.1, PowerHA 7, etc. first, and then use some method to clone the data to the new SAN (a midrange SAN like the DS4000)?

# 5  
Old 03-13-2014
Originally Posted by Oceanlo2013
For our case, we are planing to buy all new set of server and its peripheral. (p7 server, IBM san switch, ibm SAN, tape library).
Which hardware exactly (small/medium/big iron), with 1 or 2 service processors? Do you have distributed datacenters or just one? How redundant is your hardware? How do you plan to connect the fabrics, SAN, machines? ...

Sorry, but to help you at all we need to know a lot more details than you provided.

Originally Posted by Oceanlo2013
And if we cannot connect the new server to old SAN, is it possible to install the AIX 7.1, power HA 7 ... first, and then any method to clone the data to new SAN (mid range SAN like DS4000).
OK, the DS4000 is pretty small, so I suppose the rest of your infrastructure is on the smaller side too. What is the rest of your environment like? Do you have a NIM server, and how well is it maintained? (If you do not have one, I suggest you start building one immediately; this system is invaluable in a migration!)

You see, there are a lot more questions than answers at the moment. The reason is that your questions are quite unspecific, so we need to know a lot to give an adequate answer. Ask very specific questions ("can I attach the Flurbomatic 2000 to an Oomphotronic Mod. XL" can be answered with "yes" or "no") and you will get answers immediately.

I hope this helps.

# 6  
Old 03-13-2014
My suggestion is: DO NOT do all the migrations at once (OS, HACMP and SAN).
Do one at a time, give it ample time to show any performance degradation or any show-stoppers, and if things are good you can proceed with the next step. This way troubleshooting is much easier.

Remember to take a backup at each step before you upgrade.
# 7  
Old 03-13-2014
Hi all,

I think I will take time to check all the requirements first.
Thank you all for providing your valuable experience.
