Full Discussion: SAN Migration
Operating Systems > AIX > SAN Migration
Post 302900524 by bakunin, Wednesday 7th of May 2014, 08:56 AM
Quote:
Originally Posted by ElizabethPJ
Thank you very much.
Yes I can get downtime for cluster
You do not even need downtime for the cluster, provided that your SAN disks allow for concurrent access (check with "lsvg <vgname>" whether the VG is "Enhanced-Capable" for concurrency and whether it is currently opened in concurrent mode).
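A quick way to check this (a sketch only; "datavg" is just an assumed example name, substitute your own VG):
Code:
lsvg datavg | grep -i concurrent

In the output look for lines like "Concurrent: Enhanced-Capable" and "VG Mode: Concurrent".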

If this is indeed the case:

- attach the new disks to both nodes, run "cfgmgr" on both nodes to create the new devices. Check with "lspv".
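For instance (the hdisk names below are only assumed examples; your new disks may get other numbers):
Code:
cfgmgr    # run on both nodes to discover the new LUNs
lspv      # the new disks should show up as additional hdiskN devices with "None" as their VG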

- add the new disks to your VG(s) on the active node:
Code:
extendvg <volumegroup> <hdisk-device>

- mirror the old disks onto the new disks. Switch off the automatic sync and run a "syncvg" afterwards; that is usually much faster. Use the "-P" switch to let it run in parallel:

Code:
mirrorvg -s <volumegroup> <hdisk-device>    # -s: skip the automatic synchronization
syncvg -P <##> -v <volumegroup>             # -P <##>: number of logical partitions synced in parallel
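As a purely illustrative example, assuming a VG named "datavg", new disks hdisk4 and hdisk5, and 8 logical partitions synced in parallel:
Code:
mirrorvg -s datavg hdisk4 hdisk5
syncvg -P 8 -v datavg

To check whether the sync has really finished before you go on, look for stale partitions:
Code:
lsvg datavg | grep -i stale    # "STALE PPs: 0" means the mirror copies are in sync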

- after the syncing has finished, do a "learning import" on the passive node. So far the passive node only knows the old disks as members of the VG, so use one of the old disks to re-read the VG definition from:
Code:
importvg -L <volumegroup> <old hdisk-device>

- on the active node, unmirror the VG so that the copy on the old disks is removed, then move the old disks out of the VG:
Code:
unmirrorvg <volumegroup> <old hdisk-device>
reducevg <volumegroup> <old hdisk-device>

- again do a "learning import" on the passive node to sync the VG definition across cluster nodes, this time from the new disk:
Code:
importvg -L <volumegroup> <new hdisk-device>
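As a simple cross-check (optional, just a sketch), compare the disk list of the VG on both nodes; it should now show only the new hdisk devices on either side:
Code:
lsvg -p <volumegroup>    # run on both nodes and compare the output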

Finally, remove the old disks' zones from the fabric and delete the old hdisk devices on both nodes. When "cfgmgr" is run again, they should not be discovered any more.
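A sketch of that cleanup, assuming the old disks are hdisk2 and hdisk3 (substitute your real devices):
Code:
rmdev -dl hdisk2    # -d removes the device definition, -l names the device
rmdev -dl hdisk3
cfgmgr
lspv                # the old disks should no longer show up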

I hope this helps
bakunin
 
