PowerPath upgrade with SVM disk sets


 
# 1  09-29-2013

Hi all,
I am running the following setup:
Two Sun Solaris 10 machines
They are connected to EMC CLARiiON LUNs
Sun Cluster 3.2 manages both nodes
We are using Solaris Volume Manager (SVM) to handle the shared storage for the cluster. It uses a number of named disk sets that are shared between the two hosts.
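For reference, these are the commands I use on each node to check the current layout (commands only, no output pasted):

metaset                      # list the named disk sets and which host currently owns them
scdidadm -L                  # DID instance to physical device mapping, as seen from both nodes
powermt display dev=all      # PowerPath pseudo names and logical device IDs for the CLARiiON LUNs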

Now the situation is as follows:
We are migrating the storage from CLARiiON to VMAX. The storage admin will handle data replication so that the new LUNs contain exactly the same data. The only problem at hand is that the PowerPath software will also be upgraded, which means the LUNs will get new device IDs (new pseudo-device names).
I need to change the hardware IDs of the diskset disks to reflect the new hardware addresses, provided that they keep their same DIDs.
How can I do that? Can I edit some file and change the hardware address, or should I attach the new disk and remove the old one? And if so, how can I retain the same DID?
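Roughly, the sequence I had in mind is sketched below. The diskset name (dataset1) and the DID instance (d15) are placeholders, and I have not verified that the scdidadm -R / cldevice repair step keeps the instance number across the swap, so please correct me if this is wrong:

# Before the swap: record the current DID-to-device mapping and the diskset contents
scdidadm -L > /var/tmp/did.before        # or: /usr/cluster/bin/cldevice list -v
metaset -s dataset1                      # dataset1 is a placeholder set name

# After the VMAX LUNs are presented and PowerPath is upgraded, on each node:
devfsadm                                 # rebuild the local /devices and /dev trees
/usr/cluster/bin/scgdevs                 # or: cldevice populate, to attach the new devices to the global namespace

# If a DID instance has to be repointed at the new physical path while keeping its number:
scdidadm -R d15                          # or: cldevice repair d15 (d15 is a placeholder DID instance)

# Verify that nothing changed from the diskset's point of view
scdidadm -L > /var/tmp/did.after
diff /var/tmp/did.before /var/tmp/did.after
metastat -s dataset1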
Thanks a lot in advance.


9 More Discussions You Might Find Interesting

1. Solaris

One emc powerpath failed

It seems like I lost one path on my Solaris-11 box, but I want to make sure before going to the Storage team whether the issue is on the OS side or the Storage side. The Storage team can see that only one WWN is logged in on their switch. I am not at the server's physical location. What does the output below say? I... (0 Replies)
Discussion started by: solaris_1977

2. Solaris

Problem in SVM after live upgrade

Hi, I am new to Live Upgrade. I would like to tell you about my new setup, where my boot disk (c0d0) is mirrored with a secondary disk (c0d1). I have removed the whole secondary disk (c0d1) from the mirror so that I can do a Live Upgrade on this secondary disk. I have done the Live Upgrade on the s0 partition... (3 Replies)
Discussion started by: amity

3. Linux

EMC, PowerPath and issue on using LUN

Hello guys, I'm going crazy over here with a problem with a LUN created on an EMC CX3. I successfully managed to create the LUN on the storage (the LUN is named DBLNX25EC_TST) after doing the following process: echo "1" > /sys/class/fc_host/host<n>/issue_lip and echo "- - -" >... (10 Replies)
Discussion started by: Zarnick

4. Emergency UNIX and Linux Support

Mapping between "Pseudo name" and "Logical device ID" in powerpath with SVM changed....

Dear All, I had PowerPath 5.2 on a Sun server with SVM connected to a CLARiiON box. Please find the following output: root # powermt display dev=all Pseudo name=emcpower3a CLARiiON ID=CK200073400372 Logical device ID=60060160685D1E004DD97FB647BFDC11 state=alive; policy=CLAROpt;... (1 Reply)
Discussion started by: Reboot

5. Solaris

Migrate from MPXIO to Powerpath

Here is the issue: I am building a database server using Solaris 10 x86 U8. The system is jumpstarted with MPxIO enabled and boots from the SAN. We need to have PowerPath 5.3 installed and would like to have PowerPath take control of the boot SAN as well, or have MPxIO control the SAN... (2 Replies)
Discussion started by: nabru72

6. Solaris

Solaris 10 with Veritas and PowerPath

Hello, I have an issue with Veritas Volume Manager version 5.0 in combination with EMC PowerPath version 5.2.0 running on Solaris 10 (SPARC). This server is fibre-connected to an EMC CLARiiON CX3-40 SAN. This is the situation: the server is a Sun Enterprise M5000, OS = Solaris 10 SPARC, Veritas... (1 Reply)
Discussion started by: chipke2005

7. Red Hat

Configure EMC Powerpath?

Hi, I have a Red Hat 5.3 server which has 2 VGs: one is rootvg on the local hard disk and the other is applicationvg on the SAN. When I reboot the server, the EMC PowerPath driver does not start up automatically, hence applicationvg does not mount properly. Therefore I need to unmount it manually and... (4 Replies)
Discussion started by: Makri

8. AIX

Cannot uninstall powerpath - bosboot issues

Hi Guys, I have a problem while trying to upgrade to a more current EMC PowerPath version on AIX 5.3, or rather while uninstalling the existing one. I tried to uninstall the PowerPath software with the disks removed, which worked perfectly fine on 12 servers, but number 13 is failing with errors: ... (4 Replies)
Discussion started by: zxmaus

9. Solaris

change SP value in powerpath

How do I change the following values in PowerPath from SP B to SP A? Owner: default=SP B, current=SP B (this is an excerpt from powermt display dev=all) (3 Replies)
Discussion started by: fugitive
scgdevs(1M)                    System Administration Commands                    scgdevs(1M)

NAME
    scgdevs - global devices namespace administration script

SYNOPSIS
    /usr/cluster/bin/scgdevs

DESCRIPTION
    Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

    The scgdevs command manages the global devices namespace. The global devices namespace is mounted under the /global directory and consists of a set of logical links to physical devices. As the /dev/global directory is visible to each node of the cluster, each physical device is visible across the cluster. This fact means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed from any node in the cluster.

    The scgdevs command enables you to attach new global devices (for example, tape drives, CD-ROM drives, and disk drives) to the global-devices namespace without requiring a system reboot. You must run the devfsadm command before you run the scgdevs command. Alternatively, you can perform a reconfiguration reboot to rebuild the global namespace and attach new global devices. See the boot(1M) man page for more information about reconfiguration reboots.

    You must run this command from a node that is a current cluster member. If you run this command from a node that is not a cluster member, the command exits with an error code and leaves the system state unchanged.

    You can use this command only in the global zone.

    You need solaris.cluster.system.modify RBAC authorization to use this command. See the rbac(5) man page. You must also be able to assume a role to which the Sun Cluster Commands rights profile has been assigned to use this command. Authorized users can issue privileged Sun Cluster commands on the command line from the pfsh, pfcsh, or pfksh profile shell. A profile shell is a special kind of shell that enables you to access privileged Sun Cluster commands that are assigned to the Sun Cluster Commands rights profile. A profile shell is launched when you run the su command to assume a role. You can also use the pfexec command to issue privileged Sun Cluster commands.

EXIT STATUS
    The following exit values are returned:

    0         The command completed successfully.

    nonzero   An error occurred. Error messages are displayed on the standard output.

FILES
    /devices            Device nodes directory

    /global/.devices    Global devices nodes directory

    /dev/md/shared      Solaris Volume Manager metaset directory

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +-----------------------------+-----------------------------+
    |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
    +-----------------------------+-----------------------------+
    |Availability                 |SUNWsczu                     |
    +-----------------------------+-----------------------------+
    |Interface Stability          |Evolving                     |
    +-----------------------------+-----------------------------+

SEE ALSO
    pfcsh(1), pfexec(1), pfksh(1), pfsh(1), Intro(1CL), cldevice(1CL), boot(1M), devfsadm(1M), su(1M), did(7)

    Sun Cluster System Administration Guide for Solaris OS

NOTES
    The scgdevs command, called from the local node, will perform its work on remote nodes asynchronously. Therefore, command completion on the local node does not necessarily mean that the command has completed its work clusterwide.

    This document does not constitute an API. The /global/.devices directory and the /devices directory might not exist or might have different contents or interpretations in a future release. The existence of this notice does not imply that any other documentation that lacks this notice constitutes an API. This interface should be considered an unstable interface.

Sun Cluster 3.2                          10 Apr 2006                             scgdevs(1M)
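As a quick illustration of the ordering the DESCRIPTION above requires (devfsadm before scgdevs, run from a node that is a current cluster member and in the global zone), here is a minimal sketch; the pfexec form assumes the Sun Cluster Commands rights profile has been assigned to your role:

devfsadm                             # rebuild /devices and /dev so the new LUNs are visible locally
pfexec /usr/cluster/bin/scgdevs      # attach the new devices to the global-devices namespace, no reboot needed
echo $?                              # 0 = success; nonzero = an error was reported on standard output

# The new devices should now be visible cluster-wide
ls /dev/global/rdsk
ls /dev/md/shared                    # Solaris Volume Manager metaset directory (see FILES above)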