Mapping device from an inq output to Veritas disk groups


 
# 1  01-06-2009

Hi, does anyone have a clever way of mapping the following from two different files using Perl?

Sample line from 'vxdisk list' output (vxdisk-list.file):

emcpower18s2 auto:sliced IDFAG1_1 (IDFAG1_appdg) online


Sample line from 'inq' output (inq.file):


dev/rdsk/emcpower18c :EMC :SYMMETRIX :5670 :280076a000 : 8923200 :000287750328


I would like the resulting output to appear in the following format:

Diskgroup       Device ID
=========       =========
IDFAG1_appdg,   280076a000

Many Thanks,
Collin
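
One possible approach in Perl (a rough sketch only; the field positions are inferred from the two sample lines above, so adjust the parsing if your real 'vxdisk list' and 'inq' captures differ): read inq.file first and build a hash keyed on the base emcpower name (slice suffix stripped) whose value is the fifth colon-separated field (the device ID), then walk vxdisk-list.file, strip its sNN slice suffix the same way, and print each disk group with the matching device ID.

    #!/usr/bin/perl
    # Sketch only: field positions are guessed from the sample lines in the
    # post, so adjust the parsing to match your real captures.
    use strict;
    use warnings;

    # Pass 1: inq.file -> map base emcpower name to the Symmetrix device ID.
    # Sample: dev/rdsk/emcpower18c :EMC :SYMMETRIX :5670 :280076a000 : 8923200 :000287750328
    my %devid;
    open my $inq, '<', 'inq.file' or die "inq.file: $!";
    while (<$inq>) {
        chomp;
        my @f = split /\s*:\s*/;
        next unless @f >= 5 and $f[0] =~ /(emcpower\d+)/;
        $devid{$1} = $f[4];                # 5th colon-separated field = device ID
    }
    close $inq;

    # Pass 2: vxdisk-list.file -> print "diskgroup, device ID" for each disk.
    # Sample: emcpower18s2 auto:sliced IDFAG1_1 (IDFAG1_appdg) online
    print "Diskgroup Device ID\n";
    print "======= =======\n";
    open my $vx, '<', 'vxdisk-list.file' or die "vxdisk-list.file: $!";
    while (<$vx>) {
        my @f = split;                     # whitespace-separated fields
        next unless @f >= 4 and $f[0] =~ /(emcpower\d+)/;   # drops the sNN slice suffix
        my ($dev, $dg) = ($1, $f[3]);
        $dg =~ tr/()//d;                   # "(IDFAG1_appdg)" -> "IDFAG1_appdg"
        print "$dg, ", (exists $devid{$dev} ? $devid{$dev} : '-'), "\n";
    }
    close $vx;

Keying the join on the base emcpower name avoids having to reconcile the s2 slice that VxVM reports with the trailing c character device that inq reports.
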
vxnotify(1M)															      vxnotify(1M)

NAME
    vxnotify - display Veritas Volume Manager configuration events

SYNOPSIS
    vxnotify [-ACcdefimprsv] [-g diskgroup] [-n number] [-t timeout] [-w wait-time]

DESCRIPTION
    The vxnotify utility displays events related to disk and configuration changes, as managed by the Veritas Volume Manager (VxVM) configuration daemon, vxconfigd. If vxnotify is running on a system where the VxVM cluster feature is active, it displays events related to changes in the cluster state of the system on which it is running. vxnotify displays requested event types until killed by a signal, until a given number of events have been received, or until a given number of seconds have passed.

CONFIGURATION EVENTS
    Each event is displayed as a single-line output record on the standard output.

    added disk array da_serial_no
        The disk array with serial number da_serial_no is connected to the host.

    change dg groupname dgid groupid
        A change was made to the configuration for the named disk group. The transaction ID for the update was groupid.

    change disk accessname dm medianame dg groupname dgid groupid
        The disk header changed for the disk with a device access name of accessname. The disk group name and ID of the disk are groupname and groupid, respectively. The displayed groupname and groupid strings are "-" or blank if the disk is not currently in an imported disk group.

    changed dg groupname from disk array disk_array_vendor
        The configuration of the disk group named groupname changed. This disk group contains disks which belong to the disk array of vendor disk_array_vendor.

    connected
        A connection was established with vxconfigd. This event type is displayed immediately after successful startup and initialization of vxnotify. A connected event is also displayed if the connection to vxconfigd is lost, and then regained. A connected event displayed after a reconnection indicates that some events may have been lost.

    degraded volume volume dg groupname dgid groupid
        The RAID-5 volume has become degraded due to the loss of one subdisk in the raid5 plex of the volume. Accesses to some parts of the volume may be slower than to other parts depending on the location of the failed subdisk and the subsequent I/O patterns.

    deport dg groupname dgid groupid
        The named disk group was deported.

    detach disk accessname dm medianame dg groupname dgid groupid
        The named disk, with device access name accessname and disk media name medianame was disconnected from the named disk group as a result of an apparent total disk failure. Total disk failures are checked for automatically when plexes or subdisks are detached by kernel failures, or explicitly by the vxdisk check operation (see vxdisk(1M)).

    detach plex plex volume volume dg groupname dgid groupid
        The named plex, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or disabled as a result of a detected total disk failure.

    detach subdisk subdisk plex plex volume volume dg groupname dgid groupid
        The named subdisk, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or disabled as a result of a detected disk failure. Failures of a subdisk in a RAID-5 volume or a log subdisk within a mirrored volume result in a subdisk detach; other subdisk failures generally result in the subdisk's plex being detached.

    detach volume volume dg groupname dgid groupid
        The named volume, in the named disk group, was detached as a result of an I/O failure detected during normal volume I/O, or as a result of a detected total disk failure. Usually, only plexes or subdisks are detached as a result of volume I/O failure. However, if a volume would become entirely unusable by detaching a plex or subdisk, then the volume may be detached.

    disabled controller controllername belonging to disk array da_serial_no
        The host controller controllername connected to the disk array with disk array serial number da_serial_no is disabled for I/O. As a result, DMP does not allow I/Os to any of the paths that are connected to this host controller.

    disabled dg groupname dgid groupid
        The named disk group was disabled. A disabled disk group cannot be changed, and its records cannot be printed with vxprint. However, some volumes in a disabled disk group may still be usable, although it is unlikely that the volumes are usable after a system reboot. A disk group is disabled as a result of excessive failures. A disk group is disabled if the last disk in the disk group fails, or if errors occur when writing to all configuration and log copies in the disk group.

    disabled dmpnode dmpnodename
        The DMP metanode dmpnodename is disabled. The disk/LUN represented by the DMP metanode is not available for I/O.

    disabled path pathname belonging to dmpnode dmpnodename
        The path pathname is no longer available for I/O. It is a path to the disk/LUN represented by the DMP metanode dmpnodename.

    disconnected
        The connection to vxconfigd was lost. This normally results from vxconfigd being stopped (such as by vxdctl stop) or killed by a signal. In response to a disconnection, vxnotify displays a disconnected event and then waits until a reconnection succeeds. A connected event is then displayed. A disconnected event is also printed if vxconfigd is not accessible at the time vxnotify is started. In this case, the disconnected event precedes the first connected event.

    enabled controller controllername belonging to disk array da_serial_no
        The host controller controllername connected to the disk array with the disk array serial number da_serial_no is enabled. As a result DMP allows I/Os to all paths connected to this host controller.

    enabled dmpnode dmpnodename
        The DMP metanode dmpnodename is enabled. At least one of the paths to the disk/LUN represented by this DMP metanode is now available for I/O.

    enabled path pathname belonging to dmpnode dmpnodename
        The path pathname is now available for I/O. It is a path to the disk/LUN represented by the DMP metanode dmpnodename.

    import dg groupname dgid groupid
        The disk group named groupname was imported. The disk group ID of the imported disk group is groupid.

    joined cluster clustername as master node nodeid
        This system has joined the cluster named clustername as a master node. Its node ID is nodeid. If the system was already in the cluster as a slave, it has now become the master node. Available only if the VxVM cluster feature is enabled.

    joined cluster clustername as slave node nodeid
        This system has joined the cluster named clustername as a slave node. Its node ID is nodeid. Available only if the VxVM cluster feature is enabled.

    left cluster
        This system has left the cluster of which it was previously a member. Available only if the VxVM cluster feature is enabled.

    log-detach volume volume dg groupname dgid groupid
        All log copies for the volume (either log plexes for a RAID-5 volume or log subdisks for a regular mirrored volume) have become unusable, either as a result of I/O failures or as a result of a detected total disk failure.

    more events
        Due to internal buffer overruns or other problems, some events may have been lost.

    removed disk array da_serial_no
        The disk array with serial number da_serial_no is disconnected from the host.

    waiting ...
        If the -w option is specified, a waiting event is displayed after a defined period with no other events. Shell scripts can use waiting messages to collect groups of nearly simultaneous, or at least related, events. This can make shell scripts more efficient. This can also provide some scripts with better input because sets of detach events, in particular, often occur in groups that scripts can relate together. This is particularly important given that a typical shell script blocks until vxnotify produces output, thus requiring output to indicate the end of a possible sequence of related events.

OPTIONS
    -A
        Displays disk array state change events.

    -C
        Displays growth events for cache objects (used by space-optimized instant snapshots).

    -c
        Displays disk group change events.

    -d
        Displays disk change events.

    -e
        Displays extended events that relate to the creation, deletion, association, dissociation and other changes to objects.

    -g diskgroup
        Restricts displayed events to those in the indicated disk group. The disk group can be specified either as a disk group name or a disk group ID.

    -f
        Displays plex, volume, and disk detach events.

    -i
        Displays disk group import, deport, and disable events.

    -m
        Displays multipath events.

    -n number
        Displays the indicated number of vxconfigd events, then exits. Events that are not generated by vxconfigd (that is, connect, disconnect, and waiting events) do not count towards the number of counted events, and do not cause an exit to occur.

    -p
        Displays cluster communications protocol change events.

    -r
        Displays RLINK state change events.

    -s
        Displays cluster change events. If the -i option is also specified, the imports and deports of shared disk groups are displayed when a cluster change takes place. Available only if the VxVM cluster feature is enabled.

    -t timeout
        Displays events for up to timeout seconds, then exits. The -n and -t options can be combined to specify a maximum number of events and a maximum timeout to wait before exiting.

    -v
        Displays resynchronization state change events.

    -w wait_time
        Displays waiting events after wait_time seconds with no other events.

    If none of the -A, -c, -d, -f, -i, -p, -r, -s, or -v options are specified, vxnotify defaults to printing all event types that correspond to these options. If a disk group is specified with -g, vxnotify displays only events that are related to that disk group.

EXAMPLES
    The following example shell script sends mail to root for all detected plex, volume, and disk detaches:

        checkdetach() {
            d=`vxprint -AQdF '%name %nodarec' | awk '$2=="on" {print " " $1}'`
            p=`vxprint -AQpe 'pl_kdetach || pl_nodarec' -F ' %name'`
            v=`vxprint -AQvF ' %name' -e "((any aslist.pl_kdetach==true) || (any aslist.pl_nodarec)) && !(any aslist.pl_stale==false)"`
            if [ ! -z "$d" ] || [ ! -z "$p" ] || [ ! -z "$v" ]
            then
                (
                cat <<EOF
Failures have been detected by VxVM:
EOF
                [ -z "$d" ] || echo "\nfailed disks:\n$d"
                [ -z "$p" ] || echo "\nfailed plexes:\n$p"
                [ -z "$v" ] || echo "\nfailed volumes:\n$v"
                ) | mailx -s "VxVM failures" root
            fi
        }

        vxnotify -f -w 30 | while read code more
        do
            case $code in
            waiting) checkdetach;;
            esac
        done
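
    A rough Perl equivalent of the example above (a sketch only, not from the man page), for anyone following the Perl-based thread at the top of this page: it relies on the -f, -w, and -g options and the single-line event records documented under CONFIGURATION EVENTS, and the disk group name is only an illustration.

        #!/usr/bin/perl
        # Sketch only, not from the man page: a Perl take on the example above.
        # It watches one disk group for detach events and mails root whenever a
        # quiet period (the "waiting" record produced by -w) follows failures.
        use strict;
        use warnings;

        my $dg = 'IDFAG1_appdg';    # illustrative disk group name; substitute your own
        open my $ev, '-|', "vxnotify -f -w 30 -g $dg" or die "vxnotify: $!";

        my @failures;
        while (<$ev>) {
            if (/^(?:detach|log-detach)\b/) {      # e.g. "detach plex ... dg ... dgid ..."
                push @failures, $_;
            }
            elsif (/^waiting/ and @failures) {     # -w marker: report what accumulated
                open my $mail, '|-', 'mailx -s "VxVM failures" root' or die "mailx: $!";
                print {$mail} "Failures have been detected by VxVM:\n", @failures;
                close $mail;
                @failures = ();
            }
        }
        close $ev;
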
EXIT CODES
    The vxnotify utility exits with a non-zero status if an error is encountered while communicating with vxconfigd. See vxintro(1M) for a list of standard exit codes.

SEE ALSO
    vxconfigd(1M), vxdisk(1M), vxdmpadm(1M), vxintro(1M), vxtrace(1M)

VxVM 5.0.31.1                          24 Mar 2008                          vxnotify(1M)