Powerpath upgrade with SVM disksets


 
Posted 09-29-2013

Hi all,

I am running the following setup:
Two Sun Solaris 10 machines
They are connected to EMC CLARiiON LUNs
A Sun Cluster 3.2 installation manages both nodes
We are using Solaris Volume Manager (SVM) to handle the shared storage for the cluster. It uses a number of named disksets that are shared between both hosts.

The situation is as follows:
We are migrating the storage from CLARiiON to VMAX. The storage admin will handle data replication so that the new LUNs are exact copies of the old ones. The only problem at hand is that the PowerPath software will be upgraded, which means the LUNs will get new device IDs.
I need to change the hardware IDs of the diskset disks to reflect the new hardware addresses, provided that they keep their same DIDs.
How can I do that? Can I edit some file and change the hardware address, or should I attach the new disk and remove the old one? And if so, how can I retain the same DID?
Thanks a lot in advance.
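Since the disksets are keyed by Sun Cluster DIDs, one sanity check worth doing before and after the cutover is recording which PowerPath pseudo-device backs each DID, so the two mappings can be diffed. A minimal sketch of pulling that mapping out of `scdidadm -L` output follows; the sample output and device names are hypothetical, for illustration only:

```shell
# Sketch: map each DID to its backing pseudo-device, before and after the
# migration. On a real cluster node you would capture this with:
#   scdidadm -L > /var/tmp/did-map.before
# The sample below stands in for that output (hypothetical device names).
scdidadm_output='1        node1:/dev/rdsk/emcpower0 /dev/did/rdsk/d1
2        node1:/dev/rdsk/emcpower1 /dev/did/rdsk/d2'

# Columns are: instance, node:device-path, DID path.
# Print "DID path -> device path" so before/after captures diff cleanly.
echo "$scdidadm_output" | awk '{ split($2, a, ":"); print $3, "->", a[2] }'
```

On a live node you would feed the real `scdidadm -L` output through the same awk filter and compare the before and after files; any DID whose backing device changed is one whose mapping the cluster DID layer would need to re-learn.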

sccheck(1M)						  System Administration Commands					       sccheck(1M)

NAME
     sccheck - check for and report on vulnerable Sun Cluster configurations

SYNOPSIS
     sccheck [-b] [-h nodename[,nodename]...] [-o output-dir] [-s severity] [-v verbosity]

     sccheck [-b] [-W] [-h nodename[,nodename]...] [-o output-dir] [-v verbosity]

DESCRIPTION
     Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

     The sccheck utility examines Sun Cluster nodes for known vulnerabilities and configuration problems, and it delivers reports that describe all failed checks, if any. The utility runs one of these two sets of checks, depending on the state of the node that issues the command:

     o  Preinstallation checks - When issued from a node that is not running as an active cluster member, the sccheck utility runs preinstallation checks on that node. These checks ensure that the node meets the minimum requirements to be successfully configured with Sun Cluster software.

     o  Cluster configuration checks - When issued from an active member of a running cluster, the sccheck utility runs configuration checks on the specified or default set of nodes. These checks ensure that the cluster meets the basic configuration required for a cluster to be functional. The sccheck utility produces the same results for this set of checks regardless of which cluster node issues the command.

     The sccheck utility runs configuration checks and uses the explorer(1M) utility to gather system data for check processing. The sccheck utility first runs single-node checks on each nodename specified, then runs multiple-node checks on the specified or default set of nodes.

     Each configuration check produces a set of reports that are saved in the specified or default output directory. For each specified nodename, the sccheck utility produces a report of any single-node checks that failed on that node. Then the node from which sccheck was run produces an additional report for the multiple-node checks.
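The default report-directory naming used when no output directory is given (described under the -o option below) can be sketched as follows; the exact `date` format string is an assumption derived from the yyyy-mm-dd:hh:mm:ss pattern this page documents:

```shell
# Sketch: the default output-dir name sccheck would use when -o is not
# given, per the reports.yyyy-mm-dd:hh:mm:ss pattern in this man page.
ts=$(date '+%Y-%m-%d:%H:%M:%S')
output_dir="/var/cluster/sccheck/reports.${ts}"
echo "$output_dir"
```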
     Each report contains a summary that shows the total number of checks executed and the number of failures, grouped by check severity level. Each report is produced in both ordinary text and in XML. The DTD for the XML format is available in the /usr/cluster/lib/sccheck/checkresults.dtd file. The reports are produced in English only.

     The sccheck utility is a client-server program in which the server is started when needed by the inetd daemon. Environment variables in the user's shell are not available to this server. Also, some environment variables, in particular those that specify the non-default locations of Java and Sun Explorer software, can be overridden by entries in the /etc/default/sccheck file. The ports used by the sccheck utility can also be overridden by entries in this file, as can the setting for required minimum available disk space. The server logs error messages to syslog and the console.

     You can use this command only in the global zone.

OPTIONS
     The following options are supported:

     -b
           Specifies a brief report. This report contains only the summary of the problem and the severity level. Analysis and recommendations are omitted.

           You can use this option only in the global zone.

           You need solaris.cluster.system.read RBAC authorization to use this command option. See rbac(5).

     -h nodename[,nodename]...
           Specifies the nodes on which to run checks. If the -h option is not specified, the sccheck utility reports on all active cluster members.

           You can use this option only in the global zone. This option is only legal when issued from an active cluster member.

     -o output-dir
           Specifies the directory in which to save reports.

           You can use this option only in the global zone.

           The output-dir must already exist or be able to be created by the sccheck utility. Any previous reports in output-dir are overwritten by the new reports. If the -o option is not specified, /var/cluster/sccheck/reports.yyyy-mm-dd:hh:mm:ss is used as output-dir by default, where yyyy-mm-dd:hh:mm:ss is the year-month-day:hour:minute:second when the directory was created.

     -s severity
           Specifies the minimum severity level to report on.

           You can use this option only in the global zone.

           The value of severity is a number in the range of 1 to 4 that indicates one of the following severity levels:

           1. Low
           2. Medium
           3. High
           4. Critical

           Each check has an assigned severity level. Specifying a severity level excludes any failed checks of lesser severity levels from the report. When the -s option is not specified, the default severity level is 0, which means that failed checks of all severity levels are reported.

           The -s option is mutually exclusive with the -W option.

     -v verbosity
           Specifies the sccheck utility's level of verbosity.

           You can use this option only in the global zone.

           The value of verbosity is a number in the range of 0 to 2 that indicates one of the following verbosity levels:

           o  0: No progress messages. This level is the default.
           o  1: Issues sccheck progress messages.
           o  2: Issues Sun Explorer and more detailed sccheck progress messages.

           You need solaris.cluster.system.read RBAC authorization to use this command option. See rbac(5).

           The -v option has no effect on report contents.

     -W
           Disables any warnings. The report generated is equivalent to -s 3.

           You can use this option only in the global zone.

           The -W option is mutually exclusive with the -s option. The -W option is retained for compatibility with prior versions of the sccheck utility.

           You need solaris.cluster.system.read RBAC authorization to use this command option. See rbac(5).

EXIT STATUS
     The following exit values are returned:

     0      The command completed successfully. No violations were reported.

     1-4    The exit code indicates the highest severity level among all reported violations.

     100+   An error has occurred. Some reports might have been generated.

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWsczu, SUNWscsck          |
     +-----------------------------+-----------------------------+
     |Interface Stability          |Evolving                     |
     +-----------------------------+-----------------------------+

FILES
     /etc/default/sccheck

     /usr/cluster/lib/sccheck/checkresults.dtd

     /var/cluster/sccheck/reports.yyyy-mm-dd:hh:mm:ss

SEE ALSO
     Intro(1CL), explorer(1M), sccheckd(1M), scinstall(1M), attributes(5)

     Sun Cluster Software Installation Guide for Solaris OS, Sun Cluster System Administration Guide for Solaris OS

Sun Cluster 3.2                   19 Sep 2006                      sccheck(1M)