Disks are not visible in Veritas Volume Manager


# 1  
Disks are not visible in Veritas Volume Manager

Hi,

I have created a new VCS test setup on VirtualBox: three Solaris 10 (kernel patch 147148-26) machines with VCS installed. One of these machines is used as the iSCSI storage server.

The VCS version is:

Code:
bash-3.2# /opt/VRTS/bin/haclus -value EngineVersion
6.0.10.0
bash-3.2#
bash-3.2# pkginfo -l VRTSvcs
   PKGINST:  VRTSvcs
      NAME:  Veritas Cluster Server by Symantec
  CATEGORY:  system
      ARCH:  i386
   VERSION:  6.0.100.000
   BASEDIR:  /
    VENDOR:  Symantec Corporation
      DESC:  Veritas Cluster Server by Symantec
    PSTAMP:  6.0.100.000-GA-2012-07-20-16.30.01
  INSTDATE:  Nov 06 2019 19:40
    STATUS:  completely installed
     FILES:      278 installed pathnames
                  26 shared pathnames
                  56 directories
                 116 executables
              466645 blocks used (approx)

bash-3.2#

First VCS Node info is

Code:
bash-3.2# echo |format
Searching for disks...
Inquiry failed for this logical diskdone


AVAILABLE DISK SELECTIONS:
       0. c0d0 <▒x▒▒▒▒▒▒▒▒▒@▒▒▒ cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# uname -a
SunOS node1 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2823A00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On the second node:

Code:
bash-3.2# echo|format
Searching for disks...
Inquiry failed for this logical diskdone


AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN    -SOLARIS        -1   cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

Only the OS disk and a single LUN are shown by the vxdisk command, and the LUN IDs shown in the vxdisk output differ between the two nodes. I have run "vxdisk enable", "vxdisk scandisks", and devfsadm, and have taken reconfiguration reboots multiple times, but vxdisk list still does not show all the disks. How can I make all the disks visible in the vxdisk list output?
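Since the LUN reported by vxdisk differs between the nodes, it can help to compare the array-assigned GUID embedded in each Solaris MPxIO device name (the OS_NATIVE_NAME column). The helper below is only a sketch for that comparison; lun_guid is a hypothetical name, not a system command:

```shell
#!/bin/sh
# lun_guid (hypothetical helper): strip a Solaris MPxIO device name down to
# the array-assigned GUID, so vxdisk OS_NATIVE_NAME values can be compared
# across nodes, e.g.
#   c2t600144F05DC2825400080027E84B7300d0s2 -> 600144F05DC2825400080027E84B7300
lun_guid() {
    echo "$1" | sed -n 's/^c[0-9][0-9]*t\([0-9A-Fa-f][0-9A-Fa-f]*\)d[0-9].*/\1/p'
}

lun_guid c2t600144F05DC2825400080027E84B7300d0s2
```

Running this on the OS_NATIVE_NAME from each node makes it obvious at a glance whether the two nodes are currently showing the same LUN or different ones.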
# 2  
My initial thoughts are:

1. Check cabling and disk jumpers for addressing conflicts.
2. Check disk labels are compatible with VCS
3. Check disk mode pages are not left in inconsistent state from previous use.

(1) speaks for itself. For (2), you could rewrite the disk labels to ensure they are compatible with VCS; they probably need to be Sun (SMI) labels, but check that out. For (3), disks are highly programmable devices, and their mode pages can lock a disk out from inquiry by any device other than the one it thinks it is locked to (as in a cluster failover). As you will know, only one node can read and write to any volume at one time, otherwise corruption results. This can leave disks in a locked state, so select the option "set all mode pages to default" to clear all settings.

So I would look at 1, 2 and 3 first. Remember, to do 2 and/or 3 you need to run format in expert mode. By default, Solaris format doesn't offer such menu options, so add the -e switch:

Code:
# format -e

to run format in expert mode. You're telling Solaris that you're an expert so you'd better be one.
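As a rough illustration, an SMI relabel can be driven non-interactively by feeding format's expert-mode prompts from a here-document. This is a sketch only: the prompt responses assume the Solaris 10 menu flow (label type selection, then a confirm prompt), so run it interactively first to confirm, and do the mode-page reset interactively from the same expert menus:

```shell
# Sketch only: re-label one of the iSCSI LUNs with an SMI (VTOC) label
# via format's expert mode. Responses assume the Solaris 10 prompts:
#   label -> "Specify Label type[0]:" (0 = SMI) -> "continue? y"
# Verify the menu flow interactively before scripting it.
format -e -d c2t600144F05DC281F100080027E84B7300d0 <<'EOF'
label
0
y
quit
EOF
```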
# 3  
Hi hicksd8,

I think we can ignore the first point (cabling and jumper conflicts) since this is all running in VirtualBox.

Further, I have labelled all the disks with SMI labels, run "set all mode pages to default", and taken a reconfiguration reboot, but the issue is still the same.

On Node1
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
bash-3.2#

On Node2

Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

--- Post updated at 01:37 PM ---

Note: this time the LUN ID is the same on both nodes when we run the
Code:
vxdisk -e list

command, and this LUN ID is also different from the two LUN IDs in my previous output.

--- Post updated at 01:50 PM ---

I have just noticed that after each reconfiguration reboot, the LUN ID shown in the vxdisk command output changes.

On Node one
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825E00080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#


On Node Two
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

--- Post updated at 02:11 PM ---

After another reboot, one node shows the same disk as before, while the other node shows a new disk.

On node one
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On another node
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281F100080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

# 4  
Perhaps some /dev/ links are missing (that point to the /devices/ paths)?
Code:
man devfsadm

Usually one can do
Code:
devfsadm -v -C -c disk

to rebuild the /dev/ links.
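As a quick cross-check after rebuilding the links (a sketch only; the grep patterns assume the MPxIO/scsi_vhci naming and the aluadisk device names shown in the outputs above), compare how many LUNs the OS sees with how many VxVM reports:

```shell
# Count iSCSI LUNs visible to the OS versus those VxVM reports.
# Assumes MPxIO (scsi_vhci) naming and "aluadisk" enclosure names,
# as in the format and vxdisk outputs above.
os_luns=$(echo | format 2>/dev/null | grep -c scsi_vhci)
vx_luns=$(vxdisk list 2>/dev/null | grep -c '^aluadisk')
echo "OS sees ${os_luns} LUN path(s); VxVM reports ${vx_luns} device(s)"
```

If the first number is much larger than the second, the OS layer is fine and the gap is inside VxVM's device discovery rather than in the /dev links.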
# 5  
Hi,

I have run the mentioned commands but the issue still persists.

Code:
devfsadm -v -C -c disk
vxdisk enable
vxdisk scandisks

On my first node, this is the output after running the above commands:
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2821500080027E84B7300d0s2 -          
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On my second node, this is the output after running the above commands:
Code:
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
