Solaris: Need to remove a disk from Veritas
Post 302113277 by reborg on Wednesday 4th of April 2007 02:00:39 PM
If you're sure the disk is really bogus:

Make sure there are no dead plexes hanging around, and remove them if there are.

You can check with
Code:
vxprint -pg hpdg | grep NODEVICE

Column 2 is the plex name. If there are any dead plexes, disassociate and remove them:
Code:
vxplex -g hpdg -o rm dis <plex name>

Then vxedit the disk (by its disk media name) out of hpdg:
Code:
vxedit -g hpdg -rf rm <disk name>
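
Putting the steps together, a rough sketch might look like this (hpdg is the disk group from this thread; the disk media name below is only a placeholder, so substitute the name shown by vxdisk list):
Code:
#!/bin/sh
# Sketch only: disassociate and remove any dead (NODEVICE) plexes in hpdg,
# then remove the disk record itself from the disk group.
DG=hpdg
DISK=mydisk01        # placeholder disk media name

# Column 2 of the vxprint plex output is the plex name.
for plex in `vxprint -pg $DG | grep NODEVICE | awk '{print $2}'`
do
    vxplex -g $DG -o rm dis $plex
done

vxedit -g $DG -rf rm $DISK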

 

10 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

FSCK on veritas managed disk

I've had a VXFS filesystem get corrupted and now it won't mount. Can I run a fsck -y on the raw disk device or should something be done within veritas? Veritas does not see the disk at the moment. (2 Replies)
Discussion started by: ozzmosiz

2. Solaris

Veritas root disk mirroring

Hi there, my task is to replace the two 73 GB disks with two 143 GB disks on a system running VxVM 4.1. I would like to know whether the steps I am following are correct. 1. Break the sub-disks and plexes of the root mirror. 2. Remove the sub-disks and plexes of the root mirror. 3. Remove one of... (10 Replies)
Discussion started by: Jartan

3. Shell Programming and Scripting

mapping device from an inq output to veritas disk groups

Hi, does anyone have a clever way of mapping the following from two different files using Perl? Sample line from vxdisk list output (vxdisk-list.file): emcpower18s2 auto:sliced IDFAG1_1 (IDFAG1_appdg) online. Sample line from 'inq' output (inq.file): ... (0 Replies)
Discussion started by: csoesan

4. UNIX and Linux Applications

Veritas silent disk group creation

I am trying to write a Korn shell script that will automatically create Veritas disk groups. However, the only utility that I can find that will create the disk group is vxdiskadd, which prompts with interactive questions. I've tried to pass the answers through to vxdiskadd, but I receive the... (0 Replies)
Discussion started by: jm6601

5. Solaris

Help needed to find out the disk controller for veritas disks

Hi all, I am using VxVM 5.1 on my Sun Blade 150, which is running Solaris 5.10. When I give the command "vxdisk list" it gives the following output: # vxdisk list DEVICE TYPE DISK GROUP STATUS c0t0d0s2 auto:none - - online... (2 Replies)
Discussion started by: kingston

6. AIX

Remove internal disk from Veritas Control

I installed new internal disks in my p570. They will be part of a new AIX vg. Unfortunately, we have Veritas Volume Manager running on this AIX 5.2 ml 10 box. Veritas has grabbed control of the disks. I want AIX LVM to control the disks. I cannot get these disks free of Veritas: <lspv... (2 Replies)
Discussion started by: BobSmith

7. UNIX for Dummies Questions & Answers

Configure disk group with veritas

I have a storage array on which I created two virtual disks, 1 and 2. On virtual disk 1 I configured 8 volumes and on VD2 I configured 5 volumes. Now I want to create disk groups called Prod2 and Dev2, but when I go to vxdiskadm and choose add disk or encapsulate, and press list to list the disks... (0 Replies)
Discussion started by: enkei17

8. Solaris

Veritas not attaching replaced disk

Hi, I'm on a SunFire 480R with Solaris 10. A disk in the rootdg group failed, so it was replaced. However, I cannot make Veritas initialise the replaced disk: # vxdctl enable # vxdisk list c1t0d0s2 Device: c1t0d0s2 devicetag: c1t0d0 type: auto flags: online error private autoconfig... (1 Reply)
Discussion started by: masloff
1 Replies

9. Solaris

Veritas disk group not the same on each cluster node.

Need help getting all disk devices back on node 2 the same as node 1. Recently Veritas and/or Sun cluster got wrecked on my 2 node Sun cluster after installing the latest patch cluster. I backed out the patches and then node 2 could see only half of the devices and Veritas drives (though format... (0 Replies)
Discussion started by: buggin

10. UNIX for Beginners Questions & Answers

How to extend a disk in veritas volume manager in veritas cluster?

Hi experts, I wanted to extend a Veritas file system which is running on a Veritas cluster and mounted on the node2 system. # hastatus -sum -- System State Frozen A node1 running 0 A node2 running 0 -- Group State -- Group System Probed ... (1 Reply)
Discussion started by: Skmanojkum
1 Replies

vxnotify(1M)

NAME
    vxnotify - display Veritas Volume Manager configuration events

SYNOPSIS
    vxnotify [-ACcdefimprsv] [-g diskgroup] [-n number] [-t timeout] [-w wait-time]

DESCRIPTION
    The vxnotify utility displays events related to disk and configuration changes, as managed by the Veritas Volume Manager (VxVM) configuration daemon, vxconfigd. If vxnotify is running on a system where the VxVM cluster feature is active, it displays events related to changes in the cluster state of the system on which it is running. vxnotify displays requested event types until killed by a signal, until a given number of events have been received, or until a given number of seconds have passed.

CONFIGURATION EVENTS
    Each event is displayed as a single-line output record on the standard output.

    added disk array da_serial_no
        The disk array with serial number da_serial_no is connected to the host.

    change dg groupname dgid groupid
        A change was made to the configuration for the named disk group. The transaction ID for the update was groupid.

    change disk accessname dm medianame dg groupname dgid groupid
        The disk header changed for the disk with a device access name of accessname. The disk group name and ID of the disk are groupname and groupid, respectively. The displayed groupname and groupid strings are ``-'' or blank if the disk is not currently in an imported disk group.

    changed dg groupname from disk array disk_array_vendor
        The configuration of the disk group named groupname changed. This disk group contains disks which belong to the disk array of vendor disk_array_vendor.

    connected
        A connection was established with vxconfigd. This event type is displayed immediately after successful startup and initialization of vxnotify. A connected event is also displayed if the connection to vxconfigd is lost, and then regained. A connected event displayed after a reconnection indicates that some events may have been lost.

    degraded volume volume dg groupname dgid groupid
        The RAID-5 volume has become degraded due to the loss of one subdisk in the raid5 plex of the volume. Accesses to some parts of the volume may be slower than to other parts depending on the location of the failed subdisk and the subsequent I/O patterns.

    deport dg groupname dgid groupid
        The named disk group was deported.

    detach disk accessname dm medianame dg groupname dgid groupid
        The named disk, with device access name accessname and disk media name medianame, was disconnected from the named disk group as a result of an apparent total disk failure. Total disk failures are checked for automatically when plexes or subdisks are detached by kernel failures, or explicitly by the vxdisk check operation (see vxdisk(1M)).

    detach plex plex volume volume dg groupname dgid groupid
        The named plex, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or disabled as a result of a detected total disk failure.

    detach subdisk subdisk plex plex volume volume dg groupname dgid groupid
        The named subdisk, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or disabled as a result of a detected disk failure. Failures of a subdisk in a RAID-5 volume or a log subdisk within a mirrored volume result in a subdisk detach; other subdisk failures generally result in the subdisk's plex being detached.

    detach volume volume dg groupname dgid groupid
        The named volume, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or as a result of a detected total disk failure. Usually, only plexes or subdisks are detached as a result of volume I/O failure. However, if a volume would become entirely unusable by detaching a plex or subdisk, then the volume may be detached.

    disabled controller controllername belonging to disk array da_serial_no
        The host controller controllername connected to the disk array with disk array serial number da_serial_no is disabled for I/O. As a result, DMP does not allow I/Os to any of the paths that are connected to this host controller.

    disabled dg groupname dgid groupid
        The named disk group was disabled. A disabled disk group cannot be changed, and its records cannot be printed with vxprint. However, some volumes in a disabled disk group may still be usable, although it is unlikely that the volumes are usable after a system reboot. A disk group is disabled as a result of excessive failures. A disk group is disabled if the last disk in the disk group fails, or if errors occur when writing to all configuration and log copies in the disk group.

    disabled dmpnode dmpnodename
        The DMP metanode dmpnodename is disabled. The disk/LUN represented by the DMP metanode is not available for I/O.

    disabled path pathname belonging to dmpnode dmpnodename
        The path pathname is no longer available for I/O. It is a path to the disk/LUN represented by the DMP metanode dmpnodename.

    disconnected
        The connection to vxconfigd was lost. This normally results from vxconfigd being stopped (such as by vxdctl stop) or killed by a signal. In response to a disconnection, vxnotify displays a disconnected event and then waits until a reconnection succeeds. A connected event is then displayed. A disconnected event is also printed if vxconfigd is not accessible at the time vxnotify is started. In this case, the disconnected event precedes the first connected event.

    enabled controller controllername belonging to disk array da_serial_no
        The host controller controllername connected to the disk array with the disk array serial number da_serial_no is enabled. As a result DMP allows I/Os to all paths connected to this host controller.

    enabled dmpnode dmpnodename
        The DMP metanode dmpnodename is enabled. At least one of the paths to the disk/LUN represented by this DMP metanode is now available for I/O.

    enabled path pathname belonging to dmpnode dmpnodename
        The path pathname is now available for I/O. It is a path to the disk/LUN represented by the DMP metanode dmpnodename.

    import dg groupname dgid groupid
        The disk group named groupname was imported. The disk group ID of the imported disk group is groupid.

    joined cluster clustername as master node nodeid
        This system has joined the cluster named clustername as a master node. Its node ID is nodeid. If the system was already in the cluster as a slave, it has now become the master node. Available only if the VxVM cluster feature is enabled.

    joined cluster clustername as slave node nodeid
        This system has joined the cluster named clustername as a slave node. Its node ID is nodeid. Available only if the VxVM cluster feature is enabled.

    left cluster
        This system has left the cluster of which it was previously a member. Available only if the VxVM cluster feature is enabled.

    log-detach volume volume dg groupname dgid groupid
        All log copies for the volume (either log plexes for a RAID-5 volume or log subdisks for a regular mirrored volume) have become unusable, either as a result of I/O failures or as a result of a detected total disk failure.

    more events
        Due to internal buffer overruns or other problems, some events may have been lost.

    removed disk array da_serial_no
        The disk array with serial number da_serial_no is disconnected from the host.

    waiting ...
        If the -w option is specified, a waiting event is displayed after a defined period with no other events. Shell scripts can use waiting messages to collect groups of nearly simultaneous, or at least related, events. This can make shell scripts more efficient. This can also provide some scripts with better input because sets of detach events, in particular, often occur in groups that scripts can relate together. This is particularly important given that a typical shell script blocks until vxnotify produces output, thus requiring output to indicate the end of a possible sequence of related events.
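
As a rough sketch of how a script might consume these one-line records, the loop below logs detach events for a single disk group (hpdg, the group used in the thread above; the log file path is just a placeholder):
Code:
# Sketch only: log detach events (disk, plex, subdisk, volume) seen in hpdg.
# /var/tmp/vxvm-detach.log is an arbitrary placeholder path.
vxnotify -f -g hpdg | while read event rest
do
    case $event in
    detach) echo "`date`: detach $rest" >> /var/tmp/vxvm-detach.log ;;
    esac
done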

OPTIONS
    -A  Displays disk array state change events.

    -C  Displays growth events for cache objects (used by space-optimized instant snapshots).

    -c  Displays disk group change events.

    -d  Displays disk change events.

    -e  Displays extended events that relate to the creation, deletion, association, dissociation and other changes to objects.

    -g diskgroup
        Restricts displayed events to those in the indicated disk group. The disk group can be specified either as a disk group name or a disk group ID.

    -f  Displays plex, volume, and disk detach events.

    -i  Displays disk group import, deport, and disable events.

    -m  Displays multipath events.

    -n number
        Displays the indicated number of vxconfigd events, then exit. Events that are not generated by vxconfigd (that is, connect, disconnect, and waiting events) do not count towards the number of counted events, and do not cause an exit to occur.

    -p  Displays cluster communications protocol change events.

    -r  Displays RLINK state change events.

    -s  Displays cluster change events. If the -i option is also specified, the imports and deports of shared disk groups are displayed when a cluster change takes place. Available only if the VxVM cluster feature is enabled.

    -t timeout
        Displays events for up to timeout seconds, then exit. The -n and -t options can be combined to specify a maximum number of events and a maximum timeout to wait before exiting.

    -v  Displays resynchronization state change events.

    -w wait_time
        Displays waiting events after wait_time seconds with no other events.

    If none of the -A, -c, -d, -f, -i, -p, -r, -s, or -v options are specified, vxnotify defaults to printing all event types that correspond to these options. If a disk group is specified with -g, vxnotify displays only events that are related to that disk group.
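
For instance, to watch only detach-related events in hpdg and give up after either 10 events or 10 minutes, an invocation along these lines should work (hpdg again being the disk group from the thread above):
Code:
vxnotify -f -g hpdg -n 10 -t 600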

EXAMPLES
    The following example shell script sends mail to root for all detected plex, volume, and disk detaches:

    checkdetach() {
        d=`vxprint -AQdF '%name %nodarec' | awk '$2=="on" {print " " $1}'`
        p=`vxprint -AQpe 'pl_kdetach || pl_nodarec' -F ' %name'`
        v=`vxprint -AQvF ' %name' -e "((any aslist.pl_kdetach==true) || (any aslist.pl_nodarec)) && !(any aslist.pl_stale==false)"`
        if [ ! -z "$d" ] || [ ! -z "$p" ] || [ ! -z "$v" ]
        then
            (
                cat <<EOF
Failures have been detected by VxVM:
EOF
                [ -z "$d" ] || echo "\nfailed disks:\n$d"
                [ -z "$p" ] || echo "\nfailed plexes:\n$p"
                [ -z "$v" ] || echo "\nfailed volumes:\n$v"
            ) | mailx -s "VxVM failures" root
        fi
    }

    vxnotify -f -w 30 | while read code more
    do
        case $code in
        waiting) checkdetach;;
        esac
    done

EXIT CODES
    The vxnotify utility exits with a non-zero status if an error is encountered while communicating with vxconfigd. See vxintro(1M) for a list of standard exit codes.

SEE ALSO
    vxconfigd(1M), vxdisk(1M), vxdmpadm(1M), vxintro(1M), vxtrace(1M)

VxVM 5.0.31.1          24 Mar 2008          vxnotify(1M)