I have created a new VCS setup for testing on VirtualBox. I built three Solaris 10 (147148-26) machines with VCS, and one of the machines is used as the iSCSI storage.
The VCS version is:
Code:
bash-3.2# /opt/VRTS/bin/haclus -value EngineVersion
6.0.10.0
bash-3.2#
bash-3.2# pkginfo -l VRTSvcs
PKGINST: VRTSvcs
NAME: Veritas Cluster Server by Symantec
CATEGORY: system
ARCH: i386
VERSION: 6.0.100.000
BASEDIR: /
VENDOR: Symantec Corporation
DESC: Veritas Cluster Server by Symantec
PSTAMP: 6.0.100.000-GA-2012-07-20-16.30.01
INSTDATE: Nov 06 2019 19:40
STATUS: completely installed
FILES: 278 installed pathnames
26 shared pathnames
56 directories
116 executables
466645 blocks used (approx)
bash-3.2#
Info for the first VCS node:
Code:
bash-3.2# echo |format
Searching for disks...
Inquiry failed for this logical diskdone
AVAILABLE DISK SELECTIONS:
0. c0d0 <[unreadable label] cyl 5242 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c2t600144F05DC281F100080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc281f100080027e84b7300
2. c2t600144F05DC281FF00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
3. c2t600144F05DC2822B00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
4. c2t600144F05DC2823A00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
5. c2t600144F05DC2825E00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1303 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
6. c2t600144F05DC2821500080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2821500080027e84b7300
7. c2t600144F05DC2827000080027E84B7300d0 <SUN -SOLARIS -1 cyl 2608 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2827000080027e84b7300
8. c2t600144F05DC2820900080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2820900080027e84b7300
9. c2t600144F05DC2825400080027E84B7300d0 <SUN -SOLARIS -1 cyl 1303 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2825400080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# uname -a
SunOS node1 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE TYPE DISK GROUP STATUS OS_NATIVE_NAME ATTR
aluadisk0_0 auto:none - - online invalid c2t600144F05DC2823A00080027E84B7300d0s2 -
c0d0s2 auto:ZFS - - ZFS c0d0s2 -
bash-3.2#
Info for the second node:
Code:
bash-3.2# echo|format
Searching for disks...
Inquiry failed for this logical diskdone
AVAILABLE DISK SELECTIONS:
0. c0d0 <SUN -SOLARIS -1 cyl 5242 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c2t600144F05DC281F100080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc281f100080027e84b7300
2. c2t600144F05DC281FF00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
3. c2t600144F05DC2822B00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
4. c2t600144F05DC2823A00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
5. c2t600144F05DC2825E00080027E84B7300d0 <SUN -SOLARIS -1 cyl 1303 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
6. c2t600144F05DC2825400080027E84B7300d0 <SUN -SOLARIS -1 cyl 1303 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2825400080027e84b7300
7. c2t600144F05DC2827000080027E84B7300d0 <SUN -SOLARIS -1 cyl 2608 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f05dc2827000080027e84b7300
8. c2t600144F05DC2820900080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2820900080027e84b7300
9. c2t600144F05DC2821500080027E84B7300d0 <SUN -SOLARIS -1 cyl 1021 alt 2 hd 64 sec 32>
/scsi_vhci/disk@g600144f05dc2821500080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE TYPE DISK GROUP STATUS OS_NATIVE_NAME ATTR
aluadisk0_0 auto:none - - online invalid c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2 auto:ZFS - - ZFS c0d0s2 -
bash-3.2#
The vxdisk output shows only the OS disk and a single LUN on each node, and the LUN IDs shown in the vxdisk output differ between the two nodes. I have run "vxdisk enable", "vxdisk scandisks", and devfsadm, and I have even done reconfiguration reboots multiple times, but vxdisk list still does not show all the disks. How can I make all the disks visible in the vxdisk list output?
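For reference, a few DMP/DDL diagnostic commands that may help narrow this down (a sketch only; the exact output depends on the Storage Foundation 6.0 installation, and aluadisk0_0 is the device name from the listings above):
Code:
bash-3.2# vxdctl enable            # rebuild VxVM's view of devices (stronger than vxdisk enable)
bash-3.2# vxddladm listexclude     # check whether DDL is excluding the iSCSI paths from discovery
bash-3.2# vxdmpadm getsubpaths     # see which OS device paths DMP has actually claimed
bash-3.2# vxdisk scandisks new     # rescan only newly added devices
bash-3.2# vxdisk list aluadisk0_0  # inspect the one LUN that did appear, including its path state
If vxddladm shows exclusions, or vxdmpadm shows only one claimed path, that would point at DDL/DMP discovery rather than the OS device layer (format clearly sees all ten disks).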