Full Discussion: Veritas root disk mirroring
Posted by reborg (Operating Systems: Solaris), Thursday 7th of December 2006, 06:41:27 PM
Ok, you have something of a messy situation to deal with here.

And just a couple of comments:
1. If you have any other disk groups, consider using them instead.
2. If you only have two disks, why use Veritas at all? Encapsulated root disks are a pain; SVM is a whole lot simpler for boot disks.

I generally don't like to use vxdiskadm, so I'm not sure what options you've got there.

The basics:

For each plex on the mirror disk:
Code:
# dissociate the plex from its volume and remove its record
vxplex -g bootdg -o rm dis <plex>
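
If you're not sure of the plex names, vxprint will show them; on an encapsulated root they are usually the "-02" plexes created by vxrootmir, but check rather than assume:
Code:
# list all records in bootdg and pick out the plexes living on the mirror disk
vxprint -g bootdg -ht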

Then remove the disk from the disk group and clear its VxVM configuration:
Code:
vxdg -g bootdg rmdisk <mirror_disk_media_name>
/etc/vx/bin/vxdiskunsetup <mirror_disk_device_name>
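
For example, with a (hypothetical) disk media name of rootmirror on device c0t1d0:
Code:
# remove the disk media record from the boot disk group
vxdg -g bootdg rmdisk rootmirror
# wipe the VxVM private region so the disk is uninitialized again
/etc/vx/bin/vxdiskunsetup c0t1d0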

Physically remove the mirror disk, replace it, and rescan for devices:
Code:
devfsadm -Cv    # clean up stale device links and create entries for the new disk
vxdctl init     # reinitialize the volboot file
vxdctl enable   # have vxconfigd rescan the disks
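
At this point the replacement disk should be visible to VxVM; a quick sanity check:
Code:
# the new disk should appear, typically as "online invalid" (or "error" on
# older VxVM releases) since it has not yet been initialized for VxVM
vxdisk list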

Lay out the VxVM partitions on the new disk, add it to the boot disk group, and re-create the root mirror:
Code:
/etc/vx/bin/vxdisksetup -i c0t1d0 format=sliced   # sliced format is required for boot disks
vxdg -g bootdg adddisk rootmirror=c0t1d0          # add under the disk media name "rootmirror"
/etc/vx/bin/vxrootmir rootmirror                  # mirror rootvol onto the new disk
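
vxrootmir kicks off a resynchronisation of rootvol; on reasonably recent VxVM you can keep an eye on it with vxtask:
Code:
# running VxVM tasks -- the root mirror attach shows up here until the sync completes
vxtask list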

For each of the other volumes:
Code:
# attach a second plex, on the new disk, to the volume
vxassist -g bootdg mirror <volname>
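
A minimal sketch of that loop, assuming the usual encapsulated-root volume names (swapvol, var, opt; substitute whatever vxprint actually shows on your system):
Code:
for vol in swapvol var opt; do
    vxassist -g bootdg mirror $vol
done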

Repeat the procedure for rootdisk; there is no need to unencapsulate.
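
Once the resyncs finish it's worth verifying the result (the volume names are typical examples, not guaranteed):
Code:
# every volume should show ENABLED/ACTIVE with two plexes
vxprint -g bootdg -ht
# every volume should report a condition of "Started"
vxinfo -g bootdg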
 
