AIX Hardware Migration w/ HACMP...Advice Needed


 
# 1  
Old 04-22-2016

Hello Everyone,

Hope you all are doing great!

As you can see from the title, we are in the process of migrating a lot of our servers from Power5 (physical) to Power8 (virtual). Now it's the turn of the servers running HACMP clusters. Let me lay out the environment:

OLD ENVIRONMENT:

The primary and secondary nodes reside on a Power5 host; both are physical servers. The rootvg is on internal disks and the data VGs are on SAN-attached storage. Both nodes run AIX 7.1 TL3 SP4. HACMP is set up in an active-passive configuration.

NEW ENVIRONMENT:

Everything is virtualized. We have a dual-VIO setup on the Power8s. The boot disks come through a storage cluster on the VIOs (vSCSI) and the data disks come from the SAN (NPIV).
As part of the new environment we are also moving to a new network, i.e. a different IP subnet (old = 172.XXX, new = 10.XXX).

The way we have been migrating our previous (non-HA/test/dev) servers is that we restore the mksysb onto the vSCSI disk of the new LPAR before the day of the cutover and configure the new network on the server. On the day of the cutover we bring down the apps/DB on the old server (P5), then unmount the filesystems, vary off and export the VGs, and remove the SAN disks. The storage team unmaps the LUNs and maps them back to the new WWPNs on the Power8 (NPIV).

On the new server we configure and import the disks and VGs and mount all filesystems, including NFS. We make the DNS changes with the new IP and bring up all apps and DBs.
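
To make that concrete, the disk part of our cutover boils down to roughly the following (VG, filesystem and hdisk names are just placeholders; hdisk numbering may differ on the new LPAR):

Code:
# on the old P5 node, after stopping apps/DB
umount /appdata
varyoffvg appvg
exportvg appvg
rmdev -dl hdisk2          # remove the old SAN disk definitions
# (storage team re-maps the LUNs to the new NPIV WWPNs)
# on the new P8 LPAR
cfgmgr                    # discover the re-mapped LUNs
importvg -y appvg hdisk2
mount /appdata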

Now, once we bring HACMP into the picture, things become a little more complicated. Can some of the experts here give me sound advice on how we can make the process as smooth as possible, given that we are moving to new server hardware as well as a new network?


Thanks in Advance!
# 2  
Old 04-22-2016
1. Make an HACMP snapshot before the migration/mksysb.
2. I personally would do the server and IP migrations separately: either the IP migration and cluster reconfiguration first and then the server migration, or vice versa, but not everything at the same time.
Too many changes at once means too many points of failure and too much troubleshooting if anything goes wrong.
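
Taking the snapshot is quick; a rough sketch (snapshot name and mksysb path are examples, and the exact flags should be checked against your HACMP level):

Code:
# save the current cluster configuration as a snapshot
/usr/es/sbin/cluster/utilities/clsnapshot -c -n pre_p8_move -d "config before P8 migration"
# take the mksysb afterwards so the snapshot files end up inside the image
mksysb -i /mksysb/$(hostname).mksysb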
# 3  
Old 04-22-2016
I thought about that, but here is the issue:

Both networks have restrictions. I cannot put an old IP (172.X) on the new hardware, as it connects only to the new network, and vice versa. That is what makes it so complicated.
# 4  
Old 04-25-2016
Quote:
Originally Posted by uzair_rock
I thought about that, but here is the issue;

Both networks have restrictions. I cannot put old IP (172.X) on the new hardware as it connect to the new network and vice versa, that is what makes it so complicated.
In this case you will have to have some sort of downtime. Alas, there seems to be no way around that, because you need to configure HACMP anew on your new system, and that means some downtime when you make the transition. Fortunately this downtime can be minimised considerably with good planning.

Furthermore, you haven't said anything about the OS and HACMP versions involved. I suppose you will have to update at least one of them (most probably both) too.

I'd investigate the following procedure (you might have to add some things, this is just a first idea):

- Take a mksysb from the running systems, create the appropriate LPARs on the new system and try to install them from the mksysb images without any data (just the rootvg).

- Now do all the updates (first the OS, then the HACMP software) on the new systems.

- Get a single LUN from your SAN and configure your new cluster with the same topology as the old one, just with one test VG on that single LUN. Use this to test your zoning, the (basic) cluster setup and other details.

- Finally, after preparing your new cluster with new addresses on the new hardware, you need the downtime: get the SAN people to zone the old data LUNs to your new hardware so that the disks are seen by your cluster nodes, create the new cluster configuration for the imported VGs (sketched below) and start the cluster. You might want to add the DNS names of the old cluster service IPs as aliases to the new addresses so that the transition becomes smoother for the clients.
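
For the VG import it is usually wise to use the same VG major number on all cluster nodes; a sketch (major number, VG and hdisk names are made up):

Code:
# on each new cluster node, after the LUNs are zoned and mapped
cfgmgr
lvlstmajor                          # pick a major number that is free on BOTH nodes
importvg -V 55 -y datavg01 hdisk4
varyoffvg datavg01                  # leave the varyon to the cluster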

I hope this helps.

bakunin
# 5  
Old 04-25-2016
Thanks for the reply!!

The AIX version running is 7.1 TL3 SP4 and the HA version is 6.1. Both are supported on the Power8s.
As far as the outage goes, we do have a 2.5-hour window to re-configure the cluster.
I have already included the mksysb and restore part in my plan. While doing my research I found this on the IBM website:

To change a service IP label/address definition:

  1. Stop cluster services on all nodes.
  2. On any cluster node, enter smit hacmp
  3. Select HACMP Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Service IP Labels/Addresses > Change/Show a Service IP Label/Address. Note: In the Extended Cluster Configuration flow, the SMIT path is HACMP > Extended Configuration > HACMP Extended Resources Configuration > Configure Service IP Labels/Addresses > Change/Show a Service IP Label/Address.
  4. In the IP Label/Address to Change panel, select the IP Label/Address you want to change. The Change/Show a Service IP Label/Address panel appears.
  5. Make changes in the field values as needed.
  6. Press Enter after filling in all required fields. HACMP now checks the validity of the new configuration. You may receive warnings if a node cannot be reached, or if network interfaces are found to not actually be on the same physical network.
  7. On the local node, verify and synchronize the cluster. Return to the HACMP Standard or Extended Configuration SMIT panel and select the Verification and Synchronization option.
  8. Restart Cluster Services.
Do you think that, after I move everything onto the new Power8 (OS and SAN), I can change the service and boot IP addresses using the above method and then try to start the cluster? Do you think it'll work?
# 6  
Old 04-25-2016
Quote:
Originally Posted by uzair_rock
The AIX version running is 7.1 TL3 SP4 and the HA version is 6.1. Both are supported on the Power8s.
The current version for AIX 7.1 is TL3 SP6, and 7.2 is already out there. I'd suggest at least the former, because it fixes some problems with HACMP: on 7.1.3.4 one of the RSCT daemons runs wild and clutters up /var. You either have to install efixes (which generally create as many problems as they solve) or update to the latest level.

The current version for HACMP is 7.1.3, and 6.1 will be EOL by September. Do yourself a favour and use this maintenance window to update to as late a version as possible. DO NOT use any version below 7.1.3 if updating to 7.1! Many things (like the repository disks via NPIV with non-IBM storage) worked only in theory, not in practice. 7.1.3 is more or less stable. I have some 40-50 clusters running here and could go on for pages about the workarounds and quick fixes we had to use to get working clusters with the earlier 7.1 versions.
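
To see where you stand on both counts (fileset name as in the standard HACMP packaging):

Code:
oslevel -s                        # AIX level, e.g. 7100-03-04-...
lslpp -l cluster.es.server.rte    # installed HACMP/PowerHA level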

Quote:
As far as the outage goes, we do have a 2.5-hour window to re-configure the cluster.
You do not need to: as I said, create your LPARs from the mksysbs (plus the necessary updates, see above) while your old cluster is still working, create a NEW 7.1 cluster and test it until you are ready to make the move. You can pre-create the complete cluster configuration as a series of commands now, because FINALLY the clmgr command really works and it is possible to do the cluster config via the command line! This (not having to navigate all these SMITTY menus all the time) is by far the biggest relief since I started working with HACMP.
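
Just to illustrate how short such a command series is, a minimal sketch for PowerHA 7.1.3 (node, disk, network and VG names are made-up examples; your attributes will differ):

Code:
# cluster definition with repository disk
clmgr add cluster p8cluster NODES=nodea,nodeb REPOSITORY=hdisk1
# service address (must already resolve, e.g. via /etc/hosts)
clmgr add service_ip appsvc NETWORK=net_ether_01
# resource group tying nodes, service IP and data VG together
clmgr add resource_group rg_app NODES=nodea,nodeb SERVICE_LABEL=appsvc VOLUME_GROUP=datavg01
# verify and synchronize the configuration
clmgr sync cluster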


Quote:
To change a service IP label/address definition:
You said you need to use new IP addresses anyway, so don't bother. Create your new cluster with the new addresses and test it thoroughly, then make the transition basically by moving the data disks (they are NPIV, no?) to the new LPARs.
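
When you hand the new LPARs' virtual FC adapters over to the SAN people for zoning, the client-side WWPNs can be read like this (adapter name is an example):

Code:
lscfg -vl fcs0 | grep "Network Address"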

Quote:
Do you think that, after I move everything onto the new Power8 (OS and SAN), I can change the service and boot IP addresses using the above method and then try to start the cluster? Do you think it'll work?
It might work, but again: you don't need that. I can give you a complete procedure for setting up a 7.1 cluster; in fact it is 10 minutes' work now, only a few commands. Far better and far easier than navigating these endless SMIT menus.

I hope this helps.

bakunin

PS: don't get me wrong: smitty is fine if you don't know exactly what you want to do or what the format of a certain command is. But for the things I do daily, where I know exactly what to do and how, smitty is more of a hindrance than a tool.
# 7  
Old 04-25-2016
You're definitely a life saver! Updating to HA 7.1.3 makes sense. We have been using smitty, which definitely takes more than 10 minutes, lol.

If possible, can you give me that complete procedure for setting up a 7.1 cluster? It would help me a lot.