Solaris: Need to remove a disk from Veritas
Post 302113277 by reborg, Wednesday 4 April 2007, 02:00 PM
If you're sure the disk is really bogus:

Make sure there are no dead plexes hanging around, and remove them if there are.

You can check with
Code:
vxprint -pg hpdg | grep NODEVICE

Column 2 is the plex name. If there are dead plexes, disassociate and remove each one:
Code:
vxplex -g hpdg -o rm dis <plex name>
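
If several plexes turn up, a small loop saves retyping each name. This is only a sketch, assuming the hpdg group from above and that the plex name really is in column 2 of your vxprint output:
Code:
# Disassociate and remove every plex reported as NODEVICE (assumes group hpdg).
for plex in $(vxprint -pg hpdg | grep NODEVICE | awk '{print $2}'); do
    vxplex -g hpdg -o rm dis "$plex"
done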

Then vxedit the disk out of hpdg:
Code:
vxedit -g hpdg -rf rm <disk name>
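
Afterwards it's worth confirming that the disk really left the group. A quick sanity check, assuming stock VxVM commands:
Code:
# The removed disk should no longer show hpdg in the GROUP column.
vxdisk list | grep hpdg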

 

volinfo(8)						      System Manager's Manual							volinfo(8)

NAME
       volinfo - Print accessibility and usability of volumes

SYNOPSIS
       /usr/sbin/volinfo [-Vp] [-g diskgroup] [-U usetype] [-o useopt] [volume...]

OPTIONS
       The following options are recognized:

       -V            Writes a list of utilities that would be called from volinfo, along with
                     the arguments that would be passed. The -V option performs a ``mock run''
                     so the utilities are not actually called.

       -p            Reports the name and condition of each plex in each reported volume.

       -U usetype    Specifies the usage type for the operation. If no volume operands are
                     specified, the output is restricted to volumes with this usage type. If
                     volume operands are specified, this will result in a failure message for
                     all named volumes that do not have the indicated usage type.

       -g diskgroup  Specifies the disk group for the operation, either by disk group ID or by
                     disk group name. By default, the disk group is chosen based on the volume
                     operands. If no volume operands are specified, the disk group defaults to
                     rootdg.

       -o useopt     Passes in usage-type-specific options to the operation. This option is
                     currently unsupported.
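
       For example, combining -p with -g gives a plex-level report for every volume in one disk
       group (dg01 here is only a placeholder group name):

              # volinfo -g dg01 -p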
DESCRIPTION
       The volinfo utility reports a usage-type-dependent condition on one or more volumes in a
       disk group. A report for each volume specified by a volume operand is written to the
       standard output. If no volume operands are given, a volume condition report is provided
       for each volume in the selected disk group.

       Each invocation can be applied to only one disk group at a time, due to internal
       implementation constraints. Any volume operands are used to determine a default disk
       group, according to the standard disk group selection rules described in volintro(8). A
       specific disk group can be forced with -g diskgroup.

   Output Format
       Summary reports for each volume are printed in one-line output records. Each volume
       output line consists of blank-separated fields for the volume name, volume usage type,
       and volume condition. Each plex output line consists of blank-separated fields for the
       plex name and the plex condition.

       The following example shows the volume summary:

              # volinfo
              bigvol      fsgen    Startable
              vol2        fsgen    Started
              brokenvol   gen      Unstartable

       The following example shows the plex summary, with the plex records accompanied by their
       volume records:

              # volinfo -p
              vol  bigvol      fsgen    Startable
              plex bigvol-01   ACTIVE
              vol  vol2        fsgen    Started
              plex vol2-01     ACTIVE
              vol  brokenvol   gen      Unstartable

   Volume Conditions
       The volume condition is a usage-type-dependent summary of the state of a volume. This
       condition is derived from the volume's kernel-enabled state and the usage-type-dependent
       states of the volume's plexes.

       Volume conditions for the fsgen and gen usage types are reported as follows:

       Startable          The volume is not enabled and at least one of the plexes has a
                          reported condition of ACTIVE or CLEAN. A volume startall operation
                          would likely succeed in starting a volume in this condition.

       Unstartable        The volume is not enabled and fails to meet the criteria for being
                          Startable. A volume in this condition is not started and may be
                          configured incorrectly or prevented from automatic startup (with
                          volume startall) because of errors or other conditions.

       Started            The volume is enabled and at least one of the associated plexes is
                          enabled in read-write mode (which is normal for enabled plexes in the
                          ACTIVE and EMPTY conditions). A volume in this condition has been
                          started and can be used.

       Started Unusable   The volume is enabled, but does not meet the criteria for being
                          Started. A volume in this condition has been started, but is
                          inaccessible because of errors that have occurred since the volume
                          was started, or because of administrative actions, such as
                          voldg -k rmdisk.

       Volume conditions for volumes of the raid5 usage type include the following conditions
       used for the fsgen and gen usage types: Startable, Unstartable, Started, Started
       Unusable.

       Additional volume conditions for raid5 volumes report that the RAID-5 plex of the volume
       is in degraded mode due to the unavailability of a subdisk in that plex, or that some of
       the parity in the RAID-5 plex is stale and requires recovery.

   Plex Conditions
       The following plex conditions (reported with -p) are reported for the fsgen and gen
       usage types:

       NODAREC    No physical disk was found for one of the subdisks in the plex. This implies
                  either that the physical disk failed, making it unrecognizable, or that the
                  physical disk is no longer attached through a known access path.

       REMOVED    A physical disk used by one of the subdisks in the plex was removed through
                  administrative action with voldg -k rmdisk.

       IOFAIL     The plex was detached from use as a result of an uncorrectable I/O failure on
                  one of the subdisks in the plex.

       STALE      The plex does not contain valid data, either as a result of a disk
                  replacement affecting one of the subdisks in the plex, or as a result of an
                  administrative action on the plex such as volplex det.

       CLEAN      The plex contains valid data and the volume was stopped cleanly.

       ACTIVE     Either the volume is started and the plex is enabled, or the volume was not
                  stopped cleanly and the plex was valid when the volume was stopped.

       OFFLINE    The plex was disabled using the volmend off operation.

       EMPTY      The plex is part of a volume that has not yet been initialized.

       TEMP       The plex is associated temporarily as part of a current operation, such as
                  volplex cp or volplex att. A system reboot or manual starting of a volume
                  will dissociate the plex.

       TEMPRM     The plex was created for temporary use by a current operation. A system
                  reboot or manual starting of a volume will remove the plex.

       TEMPRMSD   The plex and its subdisks were created for temporary use by a current
                  operation. A system reboot or manual starting of the volume will remove the
                  plex and all of its subdisks.

       SNAPATT    The plex is being attached as part of a backup operation by the volassist
                  snapstart operation. When the attach is complete, the condition will change
                  to SNAPDONE. A system reboot or manual starting of the volume will remove the
                  plex and all of its subdisks.

       SNAPDONE   A volassist snapstart operation completed the process of attaching the plex.
                  It is a candidate for selection by the volassist snapshot operation. A system
                  reboot or manual starting of the volume will remove the plex and all of its
                  subdisks.

       SNAPTMP    The plex is being attached as part of a backup operation by the volplex
                  snapstart operation. When the attach is complete, the condition will change
                  to SNAPDIS. A system reboot or manual starting of a volume will dissociate
                  the plex.

       SNAPDIS    A volplex snapstart operation completed the process of attaching the plex.
                  It is a candidate for selection by the volplex snapshot operation. A system
                  reboot or manual starting of the volume will dissociate the plex.

       Plexes of raid5 volumes can be either data plexes (that is, RAID-5 plexes) or log
       plexes. Plex conditions for RAID-5 plexes and log plexes include the following
       conditions used for the fsgen and gen usage types: NODAREC, REMOVED, IOFAIL, CLEAN,
       ACTIVE, OFFLINE.

       RAID-5 plexes can have these additional conditions: the plex is in degraded mode due to
       subdisk failures, indicating a loss of data redundancy in the RAID-5 volume where any
       further failures could cause data loss; the parity is not in sync with the data in the
       plex, likewise indicating a loss of data redundancy; or a double failure occurred within
       the plex, leaving it unusable due to subdisk failures and/or stale parity.

       Log plexes of RAID-5 volumes can have one additional condition, reporting that the
       contents of the plex are not usable as logging data.
EXIT CODES
       The volinfo utility exits with a nonzero status if the attempted operation fails. A
       nonzero exit code is not a complete indicator of the problems encountered; it denotes
       the first condition that prevented further execution of the utility. See volintro(8) for
       a list of standard exit codes.

SEE ALSO
       volintro(8), volassist(8), volmend(8), volplex(8), volsd(8), volume(8)