01-23-2013
Solaris_1977
Let me explain new LUNs and how discovery works from the VxVM side ....
When the OS discovers a new LUN, there is a new daemon, called ESD (Event Source Daemon), that tries to do the Windows-style "plug & play" bit.
Once a new LUN is discovered, ESD broadcasts to all registered software (like EMC PowerPath (EMCP) and VxVM) that there is a new LUN.
Now, here comes the problem.
VxVM and EMCP see the new LUN at the same time.
EMCP starts creating a new powerdevice for it, and VxVM creates a new disk record for it.
When EMC PowerPath has created the device, it again sends a message to ESD, and ESD then broadcasts this to the other registered software (VxVM).
VxVM now tries to "link" the power device with the disk ...
The reason is that PowerPath already does multipathing, and if VxVM also does DMP, you get double the work and double the time for I/O. To eliminate this double work, VxVM does NOT do DMP on powerdevices; instead, VxVM has to make the "link" between the disk and the powerdevice .....
Now comes the problem.
This all happens almost simultaneously, and in the process VxVM also sends out an ESD broadcast saying that it knows about a new disk and can do DMP for it (which EMCP picks up and checks, and .....), so the two products race each other.
OK, so how can you solve this ?
The best way is to stop VxVM from "linking" into ESD.
There is a process called "vxesd" (do a "ps" to see it running).
If you call support, they will tell you to stop the VxVM ESD; the procedure is documented in a Symantec tech note that explains what I described above in more detail and gives the commands to stop it running.
(oops, can not yet post links ... so go to Google and search for Symantec and TECH72540)
When the machine was rebooted, device discovery was done by the OS, then PowerPath, and then VxVM (the correct order), which eliminated the problem.
The steps that you followed (scandisks ...) are 100% correct, and should be repeated once you have stopped vxesd from running.
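As a quick sketch of the checks and the rediscovery order described above (the stop/rediscovery commands are shown commented out because they need VxVM and PowerPath installed and root privileges; the `vxddladm stop eventsource` syntax is from the VxVM 5.x docs — verify it against TECH72540 for your version):

```shell
#!/bin/sh
# Check whether the VxVM event source daemon is running.
# The bracketed pattern stops grep from matching its own process.
if ps -ef | grep -q '[v]xesd'; then
    echo "vxesd is running"
else
    echo "vxesd is not running"
fi

# To stop vxesd so PowerPath wins the discovery race (run as root):
#   vxddladm stop eventsource    # stops vxesd for this boot
#
# Then rediscover in the correct order:
#   powermt config               # let PowerPath build its powerdevices first
#   vxdisk scandisks             # then let VxVM pick them up and link them
```

On a box where vxesd is still running, stop it first; otherwise the scandisks step can race against ESD exactly as described above.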
If you have further questions, please feel free to ask, or if you want me to look at data from your specific machine, let me know.
vxdarestore(1M) vxdarestore(1M)
NAME
vxdarestore - restore simple or nopriv disk access records
SYNOPSIS
/etc/vx/bin/vxdarestore
DESCRIPTION
The vxdarestore utility is used to restore persistent simple or nopriv disk access (da) records that have failed due to changing the naming
scheme used by vxconfigd from c#t#d#-based to enclosure-based.
The use of vxdarestore is required if you use the vxdiskadm command to change from the c#t#d#-based to the enclosure-based naming scheme.
As a result, some existing persistent simple or nopriv disks go into the "error" state and the VxVM objects on those disks fail.
vxdarestore may be used to restore the disk access records that have failed. The utility also recovers the VxVM objects on the failed disk
access records.
Note: vxdarestore may only be run when vxconfigd is using the enclosure-based naming scheme.
Note: You can use the command vxdisk list da_name to discover whether a disk access record is persistent. The record is non-persistent if
the flags field includes the flag autoconfig; otherwise it is persistent.
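The persistence check above can be scripted. This sketch parses the flags field of `vxdisk list` output; the sample output here is invented for illustration — on a real host you would pipe `vxdisk list <da_name>` instead of the heredoc:

```shell
#!/bin/sh
# Classify a disk access record as persistent or non-persistent by
# inspecting the "flags" field of `vxdisk list <da_name>` output.
# Sample output stands in for a real run (which needs VxVM installed).
flags=$(grep '^flags:' <<'EOF'
Device:    emc0_01
devicetag: emc0_01
flags:     online ready private autoconfig autoimport imported
EOF
)

# Per the rule above: autoconfig in the flags => non-persistent record.
case "$flags" in
  *autoconfig*) echo "non-persistent (autoconfig) record" ;;
  *)            echo "persistent record" ;;
esac
```

A persistent record (no `autoconfig` flag) is the kind that vxdarestore exists to repair after a naming-scheme change.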
The following sections describe how to use the vxdarestore utility under various conditions.
Persistent Simple/Nopriv Disks in the rootdg Disk Group
If all persistent simple or nopriv disks in the rootdg disk group go into the "error" state, use the following procedure:
1. Use the vxdiskadm command to change back to the c#t#d# based naming scheme.
2. Either shut down and reboot the host, or run the following command:
vxconfigd -kr reset
3. If you want to use the enclosure-based naming scheme, add a non-persistent simple disk to the rootdg disk group, use vxdiskadm to
change to the enclosure-based naming scheme, and then run vxdarestore.
Note: If not all the disks in rootdg go into the error state, simply running vxdarestore restores those disks that are in the error state and the
objects that they contain.
Persistent Simple/Nopriv Disks in Disk Groups other than rootdg
If all disk access records in an imported disk group consist only of persistent simple and/or nopriv disks, the disk group is put in the
"online dgdisabled" state after changing to the enclosure-based naming scheme. For such disk groups, perform the following steps:
1. Deport the disk group using the following command:
vxdg deport diskgroup
2. Run the vxdarestore command.
3. Re-import the disk group using the following command:
vxdg import diskgroup
NOTES
Use of the vxdarestore command is not required in the following cases:
o If there are no persistent simple or nopriv disk access records on an HP-UX host.
o If all devices on which simple or nopriv disks are present are not automatically configurable by VxVM. For example, third-party
drivers export devices that are not automatically configured by VxVM. VxVM objects on simple/nopriv disks created from such disks
are not affected by switching to the enclosure based naming scheme.
The vxdarestore command does not handle the following cases:
o If the enclosure-based naming scheme is in use and the vxdmpadm command is used to change the name of an enclosure, the disk access
names of all devices in that enclosure are also changed. As a result, any persistent simple/nopriv disks in the enclosure are put
into the "error" state, and VxVM objects configured on those disks fail.
o If the enclosure-based naming scheme is in use and the system is rebooted after making hardware configuration changes to the host.
This may change the disk access names and cause some persistent simple/nopriv disks to be put into the "error" state.
o If the enclosure-based naming scheme is in use, the device discovery layer claims some disks under the JBOD category, and the vxddladm
rmjbod command is used to remove support for the JBOD category for disks from a particular vendor. As a result of the consequent name
change, disks with persistent disk access records are put into the "error" state, and VxVM objects configured on those disks fail.
EXIT CODES
A zero exit status is returned if the operation is successful or if no actions were necessary. An exit status of 1 is returned if vxdarestore
is run while vxconfigd is using the c#t#d# naming scheme. An exit status of 2 is returned if vxconfigd is not running.
SEE ALSO
vxconfigd(1M), vxdg(1M), vxdisk(1M), vxdiskadm(1M), vxdmpadm(1M), vxintro(1M), vxreattach(1M), vxrecover(1M)
VxVM 5.0.31.1 24 Mar 2008 vxdarestore(1M)