I have an old V490 running an old version of Solaris 10 and an old version of VxVM.
The system disks are configured using metathis and metathat, the remaining disks...
Because of the brain-damaged way engineering runs things here, I cannot change this. I cannot upgrade or patch the OS. I cannot upgrade or patch VxVM.
I recently replaced the root disk and rebuilt the root mirror, but now vxdisk list complains about the replaced device.
How can I get c0t0d0s2 to transition to "online invalid"?
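One approach that is commonly suggested for this situation (a sketch, not a verified fix for this exact Solaris/VxVM release): remove the stale disk access record and let vxconfigd rescan. The device name c0t0d0s2 is taken from the post above; the dry-run wrapper below only prints the commands by default, so the sketch can be tried safely.

```shell
#!/bin/sh
# Dry-run by default: print the VxVM commands rather than execute them.
: "${DRYRUN:=1}"
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run vxdisk rm c0t0d0s2   # drop the stale disk access record for the old disk
run vxdctl enable        # ask vxconfigd to rescan; the replaced disk should
                         # then show up as "online invalid"
```

Set DRYRUN=0 to actually run the commands on a live host.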
Hi,
I'd like to add two new disks (mirrored to each other) to a machine that is still online.
The machine is mirrored with SDS. The idea is to add the two new disks and mirror them, in order to add a new mount point like "/produit" using only half of the new disks.
Could someone send me examples or tell me how to... (1 Reply)
Hello,
We are using Solstice DiskSuite on Solaris 2.7.
We want to add two striped volumes across six disks.
On each disk, we take a slice and create the stripe.
What I want to know is:
Is it necessary to add two replicas on the same slice of the new disks, as was done before on the others... (1 Reply)
...I have a situation where disks that I mirrored using Solstice DiskSuite now need mirroring under Veritas Volume Manager... would anyone know the best way to go about 'un-mirroring' and then removing SDS? I can see references in /etc/vfstab and /etc/system that I guess would need... (2 Replies)
All solaris rescue gurus out there ....
I've a Solaris 2.6 E450 on which my sysadmin guy has deleted every file (not sub-directories) from the /etc directory.
The machine is (was) running VxVM with the root volume encapsulated.
I've tried booting from CDROM, mounting the root volume... (3 Replies)
Is there a way to create "softpartitions" with Veritas Volume Manager?
I have a bunch of disks and I want to create a RAID 5 with them. On that RAID 5, I want to split the RAID into two separate file systems, like I could do with SDS/LVM or ZFS, but I don't want to create two RAID 5s in the... (3 Replies)
Hi,
Quick question if anyone knows this. Is there a command I can use in Veritas Volume Manager on Solaris that will tell me the name of the SAN I am connected to? We have a number of SANs, so I am unsure which one my servers are connected to. Thanks. (13 Replies)
Here's the scenario..
Server built with solaris 10 + SDS to mirror OS disk to 2nd disk.
If you pull the root disk while the system is running, would you expect:
1. The box to just stay running, i.e. off its mirror
2. The box to crash, reboot, and try to boot off its primary; if not, the... (4 Replies)
I'd like to add some x/linux-based servers to my current AIX-based TDS/SDS server community. Reading the Fine Install Guide (rtfig ?) I believe this may be covered by the section "Upgrade an instance of a previous version to a different computer" i.e. I'm going to install latest/greatest SDS on a... (4 Replies)
Discussion started by: maraixadm
LEARN ABOUT HP-UX
vxdarestore
vxdarestore(1M)                                                vxdarestore(1M)

NAME
vxdarestore - restore simple or nopriv disk access records
SYNOPSIS
/etc/vx/bin/vxdarestore
DESCRIPTION
The vxdarestore utility is used to restore persistent simple or nopriv disk access (da) records that have failed due to changing the naming
scheme used by vxconfigd from c#t#d#-based to enclosure-based.
The use of vxdarestore is required if you use the vxdiskadm command to change from the c#t#d#-based to the enclosure-based naming scheme:
as a result of the change, some existing persistent simple or nopriv disks go into the "error" state and the VxVM objects on those disks
fail.
vxdarestore may be used to restore the disk access records that have failed. The utility also recovers the VxVM objects on the failed disk
access records.
Note: vxdarestore may only be run when vxconfigd is using the enclosure-based naming scheme.
Note: You can use the command vxdisk list da_name to discover whether a disk access record is persistent. The record is non-persistent if
the flags field includes the flag autoconfig; otherwise it is persistent.
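The note above can be turned into a small check on the flags field. The "flags:" line format below matches typical vxdisk list output, but the sample is illustrative rather than captured from a real host:

```shell
#!/bin/sh
# Classify a disk access record as persistent or not, based on whether
# the flags field of "vxdisk list da_name" output contains "autoconfig".
is_persistent() {
    if printf '%s\n' "$1" | grep -q '^flags:.*autoconfig'; then
        echo "non-persistent (autoconfig flag present)"
    else
        echo "persistent"
    fi
}

# Illustrative output for one disk; on a live host you would use:
#   is_persistent "$(vxdisk list c0t2d0s2)"
sample='Device:    c0t2d0s2
flags:     online ready private autoconfig autoimport imported'
is_persistent "$sample"
```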
The following sections describe how to use the vxdarestore utility under various conditions.
Persistent Simple/Nopriv Disks in the rootdg Disk Group
If all persistent simple or nopriv disks in the rootdg disk group go into the "error" state, use the following procedure:
1. Use the vxdiskadm command to change back to the c#t#d# based naming scheme.
2. Either shut down and reboot the host, or run the following command:
vxconfigd -kr reset
3. If you want to use the enclosure-based naming scheme, add a non-persistent simple disk to the rootdg disk group, use vxdiskadm to
change to the enclosure-based naming scheme, and then run vxdarestore.
Note: If not all the disks in rootdg go into the error state, simply running vxdarestore restores the disks that are in the error state and
the objects that they contain.
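Steps 1-3 above can be sketched as a command sequence. Note that vxdiskadm is menu-driven, so the naming-scheme changes cannot be fully scripted here; the dry-run wrapper prints the commands by default rather than executing them:

```shell
#!/bin/sh
# Dry-run by default; set DRYRUN=0 on a real host.
: "${DRYRUN:=1}"
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Change back to the c#t#d#-based naming scheme (interactive menu).
run vxdiskadm

# 2. Reset vxconfigd instead of rebooting the host.
run vxconfigd -kr reset

# 3. After adding a non-persistent simple disk to rootdg and switching
#    to enclosure-based naming via vxdiskadm again, restore the records.
run /etc/vx/bin/vxdarestore
```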
Persistent Simple/Nopriv Disks in Disk Groups other than rootdg
If all disk access records in an imported disk group consist only of persistent simple and/or nopriv disks, the disk group is put in the
"online dgdisabled" state after changing to the enclosure-based naming scheme. For such disk groups, perform the following steps:
1. Deport the disk group using the following command:
vxdg deport diskgroup
2. Run the vxdarestore command.
3. Re-import the disk group using the following command:
vxdg import diskgroup
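The three steps can be combined into one sketch; "mydg" is a hypothetical disk group name, and the dry-run wrapper prints the commands by default instead of executing them:

```shell
#!/bin/sh
# Dry-run by default; set DRYRUN=0 on a real host.
: "${DRYRUN:=1}"
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

DG=mydg                          # hypothetical disk group name
run vxdg deport "$DG"            # 1. deport the dgdisabled disk group
run /etc/vx/bin/vxdarestore      # 2. restore the failed da records
run vxdg import "$DG"            # 3. re-import the disk group
```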
NOTES
Use of the vxdarestore command is not required in the following cases:
o If there are no persistent simple or nopriv disk access records on an HP-UX host.
o If all devices on which simple or nopriv disks are present are not automatically configurable by VxVM. For example, third-party
drivers export devices that are not automatically configured by VxVM. VxVM objects on simple/nopriv disks created from such disks
are not affected by switching to the enclosure based naming scheme.
The vxdarestore command does not handle the following cases:
o If the enclosure-based naming scheme is in use and the vxdmpadm command is used to change the name of an enclosure, the disk access
names of all devices in that enclosure are also changed. As a result, any persistent simple/nopriv disks in the enclosure are put
into the "error" state, and VxVM objects configured on those disks fail.
o If the enclosure-based naming scheme is in use and the system is rebooted after making hardware configuration changes to the host.
This may change the disk access names and cause some persistent simple/nopriv disks to be put into the "error" state.
o If the enclosure-based naming scheme is in use, the device discovery layer claims some disks under the JBOD category, and the vxddladm
rmjbod command is used to remove support for the JBOD category for disks from a particular vendor. As a result of the consequent name
change, disks with persistent disk access records are put into the "error" state, and VxVM objects configured on those disks fail.
EXIT CODES
A zero exit status is returned if the operation is successful or if no actions were necessary. An exit status of 1 is returned if
vxdarestore is run while vxconfigd is using the c#t#d# naming scheme. An exit status of 2 is returned if vxconfigd is not running.
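A caller can branch on these documented exit statuses; in this sketch the command is injected as arguments so the logic can be exercised without VxVM installed:

```shell
#!/bin/sh
# Map vxdarestore's documented exit statuses to messages.
check_darestore() {
    "$@"
    case $? in
        0) echo "restore succeeded (or no action was necessary)" ;;
        1) echo "vxconfigd is using the c#t#d# naming scheme; switch to enclosure-based naming first" ;;
        2) echo "vxconfigd is not running; start it and retry" ;;
        *) echo "unexpected exit status" ;;
    esac
}

# On a real host: check_darestore /etc/vx/bin/vxdarestore
check_darestore sh -c 'exit 1'
```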
SEE ALSO
vxconfigd(1M), vxdg(1M), vxdisk(1M), vxdiskadm(1M), vxdmpadm(1M), vxintro(1M), vxreattach(1M), vxrecover(1M)

VxVM 5.0.31.1                     24 Mar 2008                  vxdarestore(1M)