Operating Systems > Solaris > Moving Solaris 9 Host to new SAN. Post 302591154 by eptc on Wednesday 18th of January 2012, 04:53:15 PM
Thanks for your reply.
All the zoning will remain the same.
I'm just wondering if the device files for the disks will change due to the switches being replaced.
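If it helps, one way to see for yourself is to record the disk device paths and FC target mappings before the switch swap and diff them after the cutover. A minimal sketch, assuming the standard Sun FC stack (luxadm/cfgadm) and arbitrary file names under /var/tmp:

# Before the cutover: capture the current device links and FC target state
ls -l /dev/dsk/c*t*d*s2 > /var/tmp/dsk-links.before      # c#t#d# names and /devices paths
luxadm probe > /var/tmp/luxadm.before                    # logical paths and array port WWNs
cfgadm -al -o show_FCP_dev > /var/tmp/cfgadm.before      # FC attachment points and LUNs

# After the cutover: rescan, capture again, and compare
devfsadm -C                                              # prune any stale /dev links
ls -l /dev/dsk/c*t*d*s2 > /var/tmp/dsk-links.after
diff /var/tmp/dsk-links.before /var/tmp/dsk-links.after  # no output means the device files did not change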

D
 

9 More Discussions You Might Find Interesting

1. Solaris

SAN Configuration in Solaris

I have an IBM ESS 800 and am trying to configure SAN storage using a fibre card. I installed the SDD driver and can see the disks; however, when I ran # datapath query device the disks are shown in slices: # ./datapath query device Total Devices : 8 Dev#: 0 Device Name: vpath1a Type:... (0 Replies)
Discussion started by: Remi

2. UNIX for Advanced & Expert Users

Help with SDD, SAN ESS and AIX 5.3 Host

Hi all, sorry if this is in the wrong place, but I needed to make sure lots of people saw this so that hopefully someone will be able to help. Basically I've upgraded a test server from 4.3 to 5.3 TL04. The server has hdisk0 and 1 as rootvg locally, but then has another vg set up on our ESS... (1 Reply)
Discussion started by: djdavies

3. Solaris

Solaris and SAN

Hi, how can we differentiate a SAN disk from a Solaris local disk? Please respond. Thanks (4 Replies)
Discussion started by: balu_solaris

4. UNIX for Advanced & Expert Users

SAN Configuration in Solaris

Hi, I have 4 servers that I connected Sun HBAs to. (The SAN is an IBM ESS 800 with a Brocade switch.) I installed the SAN foundation kit, IBMsdd drivers, Solaris patch 108974-50 and SANsurfer CLI. (I'm running Solaris 8; the HBA is SG-XPC11FC-QL2.) On 1 of the 4 servers the HBAs show as CONNECTED, but... (3 Replies)
Discussion started by: Remi

5. Solaris

PING - Unknown host 127.0.0.1, Unknown host localhost - Solaris 10

Hello, I have a problem - I created a chrooted jail for one user. When I'm logged in as root, everything works fine, but when I'm logged in as the chrooted user I have many problems: 1. When I execute the ping command, I get weird results: bash-3.00$ usr/sbin/ping localhost ... (4 Replies)
Discussion started by: Przemek

6. Solaris

Solaris in a SAN BOOT

My current situation is like this: I have a V440 connected to a NetApp 3140 central storage via fibre channel, and my OS and Oracle are installed on the internal drive of the V440. What I would like to do is advertise another LUN on the NetApp to the V440 and let the V440 boot from it so I can start... (3 Replies)
Discussion started by: q8devilish

7. AIX

Find information for Host and SAN disconnect

Can someone point me in the right direction as to where I can find information on how to cleanly disconnect my AIX 5.3 host from our DS/4200 SAN. I have to do a firmware upgrade on the SAN. -Thanks (2 Replies)
Discussion started by: tfort73

8. Solaris

Solaris SAN configuration.

Hi all, what are IPFC (Internet Protocol over Fibre Channel) SAN devices? All I got from the guide is "Configuring IPFC over host system describes host recognition of IPFC devices and implementation of IP over FC in SAN". What does the above statement actually mean? I would appreciate detailed... (2 Replies)
Discussion started by: ravijanjanam12

9. Solaris

Migration of Solaris 10 on physical host to Solaris Zones

Hi all, kindly let me know how I can move a Solaris 10 update 10 OS running on a physical machine into a Solaris zone on another machine running Solaris 10 update 11. (2 Replies)
Discussion started by: amity
vxdarestore(1M)

NAME
vxdarestore - restore simple or nopriv disk access records

SYNOPSIS
/etc/vx/bin/vxdarestore

DESCRIPTION
The vxdarestore utility is used to restore persistent simple or nopriv disk access (da) records that have failed due to changing the naming scheme used by vxconfigd from c#t#d#-based to enclosure-based. Use of vxdarestore is required if you use the vxdiskadm command to change from the c#t#d#-based to the enclosure-based naming scheme: as a result of that change, some existing persistent simple or nopriv disks go into the "error" state and the VxVM objects on those disks fail. vxdarestore may be used to restore the disk access records that have failed; the utility also recovers the VxVM objects on the failed disk access records.

Note: vxdarestore may only be run when vxconfigd is using the enclosure-based naming scheme.

Note: You can use the command vxdisk list da_name to discover whether a disk access record is persistent. The record is non-persistent if the flags field includes the flag autoconfig; otherwise it is persistent.

The following sections describe how to use the vxdarestore utility under various conditions.

Persistent Simple/Nopriv Disks in the rootdg Disk Group
If all persistent simple or nopriv disks in the rootdg disk group go into the "error" state, use the following procedure:
1. Use the vxdiskadm command to change back to the c#t#d#-based naming scheme.
2. Either shut down and reboot the host, or run the following command: vxconfigd -kr reset
3. If you want to use the enclosure-based naming scheme, add a non-persistent simple disk to the rootdg disk group, use vxdiskadm to change to the enclosure-based naming scheme, and then run vxdarestore.
Note: If not all the disks in rootdg go into the error state, simply running vxdarestore restores the disks in the error state and the objects that they contain.

Persistent Simple/Nopriv Disks in Disk Groups other than rootdg
If all disk access records in an imported disk group consist only of persistent simple and/or nopriv disks, the disk group is put in the "online dgdisabled" state after changing to the enclosure-based naming scheme. For such disk groups, perform the following steps:
1. Deport the disk group using the following command: vxdg deport diskgroup
2. Run the vxdarestore command.
3. Re-import the disk group using the following command: vxdg import diskgroup
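As a worked example of the non-rootdg procedure above, the command sequence might look like this (a sketch only; mydg and the disk access name c1t1d0 are placeholders):

# Check whether a disk access record is persistent:
# the record is persistent if the flags field does NOT include "autoconfig"
vxdisk list c1t1d0 | grep flags

# 1. Deport the affected disk group
vxdg deport mydg

# 2. Restore the failed simple/nopriv disk access records and their VxVM objects
/etc/vx/bin/vxdarestore

# 3. Re-import the disk group
vxdg import mydg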
NOTES
Use of the vxdarestore command is not required in the following cases:
o If there are no persistent simple or nopriv disk access records on an HP-UX host.
o If all devices on which simple or nopriv disks are present are not automatically configurable by VxVM. For example, third-party drivers export devices that are not automatically configured by VxVM. VxVM objects on simple/nopriv disks created from such devices are not affected by switching to the enclosure-based naming scheme.

The vxdarestore command does not handle the following cases:
o If the enclosure-based naming scheme is in use and the vxdmpadm command is used to change the name of an enclosure, the disk access names of all devices in that enclosure are also changed. As a result, any persistent simple/nopriv disks in the enclosure are put into the "error" state, and VxVM objects configured on those disks fail.
o If the enclosure-based naming scheme is in use and the system is rebooted after making hardware configuration changes to the host. This may change the disk access names and cause some persistent simple/nopriv disks to be put into the "error" state.
o If the enclosure-based naming scheme is in use, the device discovery layer claims some disks under the JBOD category, and the vxddladm rmjbod command is used to remove support for the JBOD category for disks from a particular vendor. As a result of the consequent name change, disks with persistent disk access records are put into the "error" state, and VxVM objects configured on those disks fail.
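Before renaming an enclosure or removing JBOD support as described above, it may be worth listing what is currently configured so you know which disks a name change could affect, and then checking for disks left in the error state afterwards. A sketch (output formats vary between VxVM releases):

# List the enclosure names currently seen by DMP
vxdmpadm listenclosure all

# List the vendor entries currently claimed under the JBOD category
vxddladm listjbod

# After any change, look for disks that have dropped into the "error" state
vxdisk -o alldgs list | grep error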
EXIT CODES
A zero exit status is returned if the operation is successful or if no actions were necessary. An exit status of 1 is returned if vxdarestore is run while vxconfigd is using the c#t#d# naming scheme. An exit status of 2 is returned if vxconfigd is not running.
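For scripting, the documented exit codes can be tested directly; a minimal Bourne shell sketch:

/etc/vx/bin/vxdarestore
case $? in
  0) echo "vxdarestore: success, or nothing to do" ;;
  1) echo "vxconfigd is still using the c#t#d# naming scheme" ;;
  2) echo "vxconfigd is not running" ;;
esac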
SEE ALSO
vxconfigd(1M), vxdg(1M), vxdisk(1M), vxdiskadm(1M), vxdmpadm(1M), vxintro(1M), vxreattach(1M), vxrecover(1M)

VxVM 5.0.31.1                                                       24 Mar 2008