PowerPath upgrade with SVM disksets


 
# 1    09-29-2013

Hi all
I am running the following setup:
Two Sun Solaris 10 machines.
They are connected to EMC CLARiiON LUNs.
We are running Sun Cluster 3.2 to manage both nodes.
We are using Solaris Volume Manager to handle the shared storage for the cluster, with a number of named disk sets shared between the two hosts.
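
For reference, this is roughly how I am looking at the current layout (the set name diskset1 below is only a placeholder for our real set names):

# list the SVM disk sets known to this node, then one set in detail
host1# metaset
host1# metaset -s diskset1

# map the cluster DID instances (dN) to their underlying device paths
host1# cldevice list -v
host1# scdidadm -L

# show how PowerPath maps the emcpower pseudo devices to the array LUNs
host1# powermt display dev=all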

Now the situation is as follows:
We are migrating the storage from CLARiiON to VMAX. The storage admin will handle data replication so that the new LUNs are exact copies of the old ones. The only problem is that the PowerPath software will also be upgraded, which means that the LUNs will get new device IDs.
I need to change the hardware IDs of the disk set disks to reflect the new hardware addresses while the disks keep their existing DIDs.
How can I do that? Can I edit some file and change the hardware address, or should I attach the new disk and remove the old one? And if so, how can I retain the same DID?
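
To make the question more concrete, this is the kind of sequence I would expect to be involved, pieced together from the cldevice(1CL), scdidadm(1M) and metaset(1M) man pages. I have not tested it, and the DID instance d5, the emcpower device and the set name diskset1 are only placeholders:

# 1. Before the cutover, record the current DID-to-pseudo-device mapping
#    for every drive in the disk sets, on both nodes.
host1# scdidadm -L
host1# powermt display dev=all

# 2. After the VMAX LUNs are presented and the CLARiiON LUNs are removed,
#    configure the new PowerPath pseudo devices on both nodes.
host1# powermt config
host1# powermt display dev=all

# 3. Re-point each existing DID instance at its replacement device so that
#    the DID number itself is kept. cldevice repair is the Sun Cluster 3.2
#    object-oriented form, scdidadm -R the older equivalent.
host1# cldevice repair d5
host1# scdidadm -R d5

# 4. Since the drives were added to the disk sets by their DID names
#    (/dev/did/rdsk/dN), the sets should still reference the right disks
#    once the DIDs point at the new LUNs. Take a set and verify.
host1# metaset -s diskset1 -t
host1# metastat -s diskset1

Is that the right direction, or is there a supported procedure for this?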
Thanks a lot in advance.
scconf_dg_svm(1M)					  System Administration Commands					 scconf_dg_svm(1M)

NAME
scconf_dg_svm - change Solaris Volume Manager device group configuration

SYNOPSIS
scconf -c -D [generic_options]

DESCRIPTION
Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page. The following information is specific to the scconf command. To use the equivalent object-oriented commands, see the cldevicegroup(1CL) man page.

A Solaris Volume Manager device group is defined by a name, the nodes upon which this group can be accessed, a global list of devices in the disk set, and a set of properties used to control actions such as potential primary preference and failback behavior.

For Solaris Volume Manager device groups, only one disk set can be assigned to a device group, and the group name must always match the name of the disk set itself.

In Solaris Volume Manager, a multihosted or shared device is a grouping of two or more hosts and disk drives that are accessible by all hosts, and that have the same device names on all hosts. This identical device naming requirement is achieved by using the raw disk devices to form the disk set. The device ID pseudo driver (DID) allows multihosted devices to have consistent names across the cluster. Only hosts already configured as part of a disk set can be configured into the nodelist of a Solaris Volume Manager device group. At the time drives are added to a shared disk set, they must not belong to any other shared disk set.

The Solaris Volume Manager metaset command creates the disk set, which also initially creates and registers it as a Solaris Volume Manager device group. Next, you must use the scconf command to set the node preference list and the preferenced, failback and numsecondaries suboptions.

If you want to change the order of the node preference list or the failback mode, you must specify all the nodes that currently exist in the device group in the nodelist. In addition, if you are changing the order of node preference, you must also set the preferenced suboption to true. If you do not specify the preferenced suboption with the "change" form of the command, the already established true or false setting is used.

You cannot use the scconf command to remove the Solaris Volume Manager device group from the cluster configuration. Use the Solaris Volume Manager metaset command instead. You remove a device group by removing the Solaris Volume Manager disk set.

OPTIONS
See scconf(1M) for the list of supported generic options. See metaset(1M) for the list of metaset related commands to create and remove disk sets and device groups.

Only one action option is allowed in the command. The following action options are supported.

-c      Change the ordering of the node preference list, change preference and failback policy, and change the desired number of secondaries.

EXAMPLES
Example 1  Creating and Registering a Disk Set

The following metaset command creates the disk set diskset1 and registers it as a Solaris Volume Manager device group. Next, the scconf command is used to specify the order of the potential primary nodes for the device group, change the preferenced and failback options, and change the desired number of secondaries.

host1# metaset -s diskset1 -a -h host1 host2

host1# scconf -c -D name=diskset1,nodelist=host2:host1,preferenced=true,failback=disabled,numsecondaries=1

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

+-----------------------------+-----------------------------+
|       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
+-----------------------------+-----------------------------+
|Availability                 |SUNWsczu                     |
+-----------------------------+-----------------------------+
|Interface Stability          |Evolving                     |
+-----------------------------+-----------------------------+

SEE ALSO
Intro(1CL), cldevicegroup(1CL), scconf(1M), metaset(1M)

Sun Cluster 3.2                           10 Jul 2006                          scconf_dg_svm(1M)
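
The scconf syntax in Example 1 above is from the original command set. A rough object-oriented equivalent is sketched below, assuming Sun Cluster 3.2 and using only the device-group properties described in cldevicegroup(1CL); nodelist ordering is left to that man page, since its handling differs from the scconf suboption:

# create the disk set and register it as a device group, as in Example 1
host1# metaset -s diskset1 -a -h host1 host2

# change the preference, failback and secondaries properties with the
# object-oriented command set; property value spellings follow
# cldevicegroup(1CL) rather than the scconf suboptions
host1# cldevicegroup set -p preferenced=true -p failback=false \
       -p numsecondaries=1 diskset1

# verify the resulting device group configuration
host1# cldevicegroup show diskset1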