Metastat shows state needs maintenance


 
# 1  
Old 08-13-2019

Hi,

We have a Solaris 10 update 11 machine that was configured with IBM storage. It was assigned 2 LUNs (70GB each), which were striped together to make 140GB. We took a full backup of the entire machine, and our storage team then replaced the IBM storage with Nimble storage (they did storage-level mirroring from IBM to Nimble). We modified /kernel/drv/ssd.conf, ran stmsboot -e and restarted the server.

After the reboot, format did not show any new LUNs, so we ran luxadm probe, cfgadm -a and devfsadm. After that, format shows the 2 LUNs, but metastat shows "Needs maintenance" for all metadevices. Please see below and advise. Many thanks!!
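
For reference, a typical LUN rediscovery sequence on Solaris 10 looks roughly like the sketch below (a sketch only; the exact flags and controller number are illustrative, not a record of what was run here):

Code:
# luxadm probe        # probe the FC fabric/loops for new devices
# cfgadm -al          # list attachment points; fabric LUNs may need 'cfgadm -c configure cN'
# devfsadm -Cv        # clean stale links and build /dev entries for the new LUNs
# echo | format       # confirm the LUNs are now visible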

BEFORE Storage migration:

Quote:
-bash-3.2# df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 20G 15G 4.4G 78% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 58G 1.6M 58G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
20G 15G 4.4G 78% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
20G 15G 4.4G 78% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/md/dsk/d4 9.9G 5.3G 4.5G 55% /var
swap 58G 96K 58G 1% /tmp
swap 58G 88K 58G 1% /var/run
/dev/dsk/c6t2000000087585A28d0s1
32G 18G 13G 57% /patch
/dev/md/dsk/d3 4.9G 1.5G 3.4G 31% /home
/dev/md/dsk/d5 137G 69G 67G 51% /s
-bash-3.2#
-bash-3.2# mount
/ on /dev/md/dsk/d0 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540000 on Thu Jun 28 11:13:27 2018
/devices on /devices read/write/setuid/devices/rstchown/dev=4f80000 on Thu Jun 28 11:12:38 2018
/system/contract on ctfs read/write/setuid/devices/rstchown/dev=4fc0001 on Thu Jun 28 11:12:38 2018
/proc on proc read/write/setuid/devices/rstchown/dev=5000000 on Thu Jun 28 11:12:38 2018
/etc/mnttab on mnttab read/write/setuid/devices/rstchown/dev=5040001 on Thu Jun 28 11:12:38 2018
/etc/svc/volatile on swap read/write/setuid/devices/rstchown/xattr/dev=5080001 on Thu Jun 28 11:12:38 2018
/system/object on objfs read/write/setuid/devices/rstchown/dev=50c0001 on Thu Jun 28 11:12:38 2018
/etc/dfs/sharetab on sharefs read/write/setuid/devices/rstchown/dev=5100001 on Thu Jun 28 11:12:38 2018
/platform/sun4u-us3/lib/libc_psr.so.1 on /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Thu Jun 28 11:13:26 2018
/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1 on /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Thu Jun 28 11:13:26 2018
/dev/fd on fd read/write/setuid/devices/rstchown/dev=5280001 on Thu Jun 28 11:13:27 2018
/var on /dev/md/dsk/d4 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540004 on Thu Jun 28 11:13:29 2018
/tmp on swap read/write/setuid/devices/rstchown/xattr/dev=5080002 on Thu Jun 28 11:13:29 2018
/var/run on swap read/write/setuid/devices/rstchown/xattr/dev=5080003 on Thu Jun 28 11:13:29 2018
/patch on /dev/dsk/c6t2000000087585A28d0s1 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1d800a1 on Thu Jun 28 11:13:36 2018
/home on /dev/md/dsk/d3 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540003 on Thu Jun 28 11:13:36 2018
/s on /dev/md/dsk/d5 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540005 on Thu Jun 28 11:13:36 2018
-bash-3.2#

-bash-3.2# metastat -p -a
d0 -m d10 d12 1
d10 1 1 /dev/dsk/c6t500000E0134C5C30d0s0
d12 1 1 /dev/dsk/c6t2000000087585A28d0s0
d4 -m d41 d42 1
d41 1 1 /dev/dsk/c6t500000E0134C5C30d0s4
d42 1 1 /dev/dsk/c6t2000000087585A28d0s4
d3 -m d31 d32 1
d31 1 1 /dev/dsk/c6t500000E0134C5C30d0s3
d32 1 1 /dev/dsk/c6t2000000087585A28d0s3
d5 -m d51 d52 1
d51 2 1 /dev/dsk/c6t500000E0134C5C30d0s5 \
1 /dev/dsk/c6t600507680180856B8000000000000621d0s0
d52 2 1 /dev/dsk/c6t2000000087585A28d0s5 \
1 /dev/dsk/c6t600507680180856B8000000000000620d0s0

-bash-3.2# echo|format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c6t500000E0134C5C30d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> LDISK0
/scsi_vhci/ssd@g500000e0134c5c30
1. c6t2000000087585A28d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/scsi_vhci/ssd@g2000000087585a28
2. c6t600507680180856B8000000000000621d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g600507680180856b8000000000000621
3. c6t600507680180856B8000000000000620d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g600507680180856b8000000000000620
4. vpath1a <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/pseudo/vpathdd@1:1
5. vpath2a <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/pseudo/vpathdd@2:2
Specify disk (enter its number): Specify disk (enter its number):
-bash-3.2#
-bash-3.2#
-bash-3.2# metastat -a
d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d12
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 41945472 blocks (20 GB)

d10: Submirror of d0
State: Okay
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s0 0 No Okay Yes


d12: Submirror of d0
State: Okay
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s0 0 No Okay Yes


d4: Mirror
Submirror 0: d41
State: Okay
Submirror 1: d42
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 20982912 blocks (10 GB)

d41: Submirror of d4
State: Okay
Size: 20982912 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s4 0 No Okay Yes


d42: Submirror of d4
State: Okay
Size: 20982912 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s4 0 No Okay Yes


d3: Mirror
Submirror 0: d31
State: Okay
Submirror 1: d32
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 10501632 blocks (5.0 GB)

d31: Submirror of d3
State: Okay
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s3 0 No Okay Yes


d32: Submirror of d3
State: Okay
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s3 0 No Okay Yes


d5: Mirror
Submirror 0: d51
State: Okay
Submirror 1: d52
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 292010496 blocks (139 GB)

d51: Submirror of d5
State: Okay
Size: 292010496 blocks (139 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s5 0 No Okay Yes
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t600507680180856B8000000000000621d0s0 16384 No Okay Yes


d52: Submirror of d5
State: Okay
Size: 292010496 blocks (139 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s5 0 No Okay Yes
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t600507680180856B8000000000000620d0s0 16384 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
/dev/dsk/c6t500000E0134C5C30d0 Yes id1,ssd@n500000e0134c5c30
/dev/dsk/c6t600507680180856B8000000000000621d0 Yes id1,ssd@n600507680180856b8000000000000621
/dev/dsk/c6t2000000087585A28d0 Yes id1,ssd@n2000000087585a28
/dev/dsk/c6t600507680180856B8000000000000620d0 Yes id1,ssd@n600507680180856b8000000000000620

-bash-3.2# ls -l /dev/dsk/vpath1a
lrwxrwxrwx 1 root other 34 Jun 25 2018 /dev/dsk/vpath1a -> ../../devices/pseudo/vpathdd@1:1:a
-bash-3.2# ls -l /dev/rdsk/vpath1a
lrwxrwxrwx 1 root other 38 Jun 25 2018 /dev/rdsk/vpath1a -> ../../devices/pseudo/vpathdd@1:1:a,raw
-bash-3.2#
-bash-3.2# ls -l /dev/dsk/vpath2a
lrwxrwxrwx 1 root other 34 Jun 25 2018 /dev/dsk/vpath2a -> ../../devices/pseudo/vpathdd@2:2:a
-bash-3.2#
-bash-3.2# ls -l /dev/rdsk/vpath2a
lrwxrwxrwx 1 root other 38 Jun 25 2018 /dev/rdsk/vpath2a -> ../../devices/pseudo/vpathdd@2:2:a,raw
-bash-3.2#
===========AFTER Storage Migration to Nimble =========

Quote:
-bash-3.2# uname -a
SunOS sun02 5.10 Generic_150400-61 sun4u sparc SUNW,Sun-Fire-V490
-bash-3.2#
-bash-3.2# metastat
d0: Mirror
Submirror 0: d10
State: Needs maintenance
Submirror 1: d12
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 41945472 blocks (20 GB)

d10: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s0 0 No Okay Yes


d12: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s0 0 No Okay Yes


d4: Mirror
Submirror 0: d41
State: Needs maintenance
Submirror 1: d42
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 20982912 blocks (10 GB)

d41: Submirror of d4
State: Needs maintenance
Invoke: metasync d4
Size: 20982912 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s4 0 No Okay Yes


d42: Submirror of d4
State: Needs maintenance
Invoke: metasync d4
Size: 20982912 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s4 0 No Okay Yes


d3: Mirror
Submirror 0: d31
State: Needs maintenance
Submirror 1: d32
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 10501632 blocks (5.0 GB)

d31: Submirror of d3
State: Needs maintenance
Invoke: metasync d3
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s3 0 No Okay Yes


d32: Submirror of d3
State: Needs maintenance
Invoke: metasync d3
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s3 0 No Okay Yes


d5: Mirror
Submirror 0: d51
State: Needs maintenance
Submirror 1: d52
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 292010496 blocks (139 GB)

d51: Submirror of d5
State: Needs maintenance
Invoke: metasync d5
Size: 292010496 blocks (139 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t500000E0134C5C30d0s5 0 No Okay Yes
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t600507680180856B8000000000000621d0s0 16384 No Okay Yes


d52: Submirror of d5
State: Needs maintenance
Invoke: metasync d5
Size: 292010496 blocks (139 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t2000000087585A28d0s5 0 No Okay Yes
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
/dev/dsk/c6t600507680180856B8000000000000620d0s0 16384 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
/dev/dsk/c6t2000000087585A28d0 Yes id1,ssd@n2000000087585a28
/dev/dsk/c6t500000E0134C5C30d0 Yes id1,ssd@n500000e0134c5c30
-bash-3.2#
-bash-3.2# metastat =pc
metastat: sun02: =pc: No such file or directory

-bash-3.2#
-bash-3.2# metastat -pc
d0 m 20GB d10 (maint) d12 (maint)
d10 s 20GB /dev/dsk/c6t500000E0134C5C30d0s0
d12 s 20GB /dev/dsk/c6t2000000087585A28d0s0
d4 m 10GB d41 (maint) d42 (maint)
d41 s 10GB /dev/dsk/c6t500000E0134C5C30d0s4
d42 s 10GB /dev/dsk/c6t2000000087585A28d0s4
d3 m 5.0GB d31 (maint) d32 (maint)
d31 s 5.0GB /dev/dsk/c6t500000E0134C5C30d0s3
d32 s 5.0GB /dev/dsk/c6t2000000087585A28d0s3
d5 m 139GB d51 (maint) d52 (maint)
d51 s 139GB /dev/dsk/c6t500000E0134C5C30d0s5 /dev/dsk/c6t600507680180856B8000000000000621d0s0
d52 s 139GB /dev/dsk/c6t2000000087585A28d0s5 /dev/dsk/c6t600507680180856B8000000000000620d0s0
-bash-3.2#
-bash-3.2# df -
df: (- ) not a block device, directory or mounted resource
-bash-3.2# df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 20G 15G 4.4G 78% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 59G 1.4M 59G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
20G 15G 4.4G 78% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
20G 15G 4.4G 78% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/md/dsk/d4 9.9G 5.8G 3.9G 60% /var
swap 59G 0K 59G 0% /tmp
swap 59G 24K 59G 1% /var/run
/dev/dsk/c6t2000000087585A28d0s1
32G 24G 7.4G 77% /patch
/dev/md/dsk/d3 4.9G 1.5G 3.4G 31% /home
-bash-3.2# mount
/ on /dev/md/dsk/d0 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540000 on Tue Aug 13 12:31:19 2019
/devices on /devices read/write/setuid/devices/rstchown/dev=4f80000 on Tue Aug 13 12:22:20 2019
/system/contract on ctfs read/write/setuid/devices/rstchown/dev=4fc0001 on Tue Aug 13 12:22:20 2019
/proc on proc read/write/setuid/devices/rstchown/dev=5000000 on Tue Aug 13 12:22:20 2019
/etc/mnttab on mnttab read/write/setuid/devices/rstchown/dev=5040001 on Tue Aug 13 12:22:20 2019
/etc/svc/volatile on swap read/write/setuid/devices/rstchown/xattr/dev=5080001 on Tue Aug 13 12:22:20 2019
/system/object on objfs read/write/setuid/devices/rstchown/dev=50c0001 on Tue Aug 13 12:22:20 2019
/etc/dfs/sharetab on sharefs read/write/setuid/devices/rstchown/dev=5100001 on Tue Aug 13 12:22:20 2019
/platform/sun4u-us3/lib/libc_psr.so.1 on /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Tue Aug 13 12:31:16 2019
/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1 on /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Tue Aug 13 12:31:16 2019
/dev/fd on fd read/write/setuid/devices/rstchown/dev=5280001 on Tue Aug 13 12:31:19 2019
/var on /dev/md/dsk/d4 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540004 on Tue Aug 13 12:31:19 2019
/tmp on swap read/write/setuid/devices/rstchown/xattr/dev=5080002 on Tue Aug 13 12:31:19 2019
/var/run on swap read/write/setuid/devices/rstchown/xattr/dev=5080003 on Tue Aug 13 12:31:19 2019
/patch on /dev/dsk/c6t2000000087585A28d0s1 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1d800a1 on Tue Aug 13 12:31:25 2019
/home on /dev/md/dsk/d3 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540003 on Tue Aug 13 12:31:25 2019
-bash-3.2#
-bash-3.2#
-bash-3.2# echo|format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c6t1DCF2DBED55F13686C9CE90085FD94E9d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g1dcf2dbed55f13686c9ce90085fd94e9
1. c6t4CC2EE857E4392016C9CE90085FD94E9d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g4cc2ee857e4392016c9ce90085fd94e9
2. c6t500000E0134C5C30d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> LDISK0
/scsi_vhci/ssd@g500000e0134c5c30
3. c6t2000000087585A28d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/scsi_vhci/ssd@g2000000087585a28
Specify disk (enter its number): Specify disk (enter its number):
-bash-3.2#

# 2  
Old 08-13-2019
Hi,

You've stated that the storage team did the migration from IBM storage to Nimble storage; I'm guessing that is where the problem lies. There could also be issues between the vpath software and the Nimble storage. Does this Solaris version support Nimble?

I would have tackled this a different way (a rough sketch follows below):
  1. Added the new disk to the running system as normal.
  2. Extended the metadevice into a four-way mirror.
  3. Removed the original disk.
  4. Recreated the metadb.
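
A bare-bones sketch of steps 1 and 2 for one mirror, assuming the new LUN shows up as cXtYdZ (a placeholder name) and that its slice layout is copied from an existing disk first; this is only the shape of it, not a recipe:

Code:
# prtvtoc /dev/rdsk/c6t500000E0134C5C30d0s2 | fmthard -s - /dev/rdsk/cXtYdZs2   # copy the slice table to the new LUN
# metainit d13 1 1 cXtYdZs0        # new submirror on the new LUN (d13 is illustrative)
# metattach d0 d13                 # attach it as an extra leg; the resync starts automatically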

I'm not sure why you would attempt to do this using SAN replication; the only exception I would make is where you can replicate the device at block level, ensuring that the boot block and everything else comes over.

You may get away with installing the boot block and re-labeling the disks, but I'd rather adopt the add-the-disks-and-extend-the-mirror approach.
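
For completeness, installing a SPARC boot block on a new root slice is a one-liner (slice name is a placeholder; only relevant if you go down the re-label route):

Code:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/cXtYdZs0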

Given the current situation, you may want to try the metarecover options to recover the individual devices.

Regards

Gull04

--- Post updated at 03:15 PM ---

Hi,

Just out of curiosity, have you made any required changes to /etc/vfstab - you don't mention it.

Regards

Gull04

# 3  
Old 08-13-2019
Hmmmmm........the fact that you can post the output that you have seems to indicate that the system is "on its feet", but the mirrors need maintenance because, although your storage team migrated the filesystems (by whatever means), they weren't able to make the metadb coherent with the new legs of each mirror. I would think that you might be able to detach/reattach each mirror leg in turn.
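
The metastat output above actually suggests metasync as the first thing to invoke; failing that, a detach/reattach cycle for one of the mirrors would look roughly like this sketch (whether -f is needed depends on the exact submirror state):

Code:
# metasync d0              # what the 'Invoke:' line in metastat asks for
# metadetach -f d0 d12     # otherwise, force-detach the errored submirror
# metattach d0 d12         # reattach it; SVM resyncs that whole leg
# metastat d0              # confirm the submirrors return to Okay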

@gull04.........what do you think?
# 4  
Old 08-13-2019
Hi hicksd8,

I think that you might be able to get away with that, but you'd have to check some other things first in case you make it worse.

You could check that the disks are all available with # prtvtoc /dev/rdsk/cXtNdNsN. It would also be worth looking at the output of # metastat -p; if I run it on one of my machines I see;
Code:
e434069 on PROD SERVER # metastat -p
d10 -m d11 d12 1
d11 1 1 /dev/dsk/c0t5000CCA07023D760d0s0
d12 1 1 /dev/dsk/c0t5000CCA07040B1F8d0s0
d20 -m d21 d22 1
d21 1 1 /dev/dsk/c0t5000CCA07023D760d0s1
d22 1 1 /dev/dsk/c0t5000CCA07040B1F8d0s1
d40 -m d41 d42 1
d41 1 1 /dev/dsk/c0t5000CCA07023D760d0s6
d42 1 1 /dev/dsk/c0t5000CCA07040B1F8d0s6
d203 1 1 /dev/dsk/c0t600507680C8082780000000000000561d0s0
d400 4 1 /dev/dsk/c0t600507680C80827800000000000004A0d0s0 \
         1 /dev/dsk/c0t600507680C8082780000000000000514d0s0 \
         1 /dev/dsk/c0t600507680C8082780000000000000515d0s0 \
         1 /dev/dsk/c0t600507680C8082780000000000000516d0s0
d722 1 1 /dev/dsk/c0t600507680C80827800000000000004EEd0s0
d723 1 1 /dev/dsk/c0t600507680C80827800000000000004ECd0s0
d340 1 1 /dev/dsk/c0t600507680C80827800000000000004A2d0s0
d840 1 1 /dev/dsk/c0t600507680C80827800000000000004A1d0s0
d270 1 1 /dev/dsk/c0t600507680C808278000000000000024Ed0s0
d300 1 1 /dev/dsk/c0t600507680C80827800000000000001DAd0s0
d202 1 1 /dev/dsk/c0t600507680C8082780000000000000132d0s0
d151 1 1 /dev/dsk/c0t600507680C8082780000000000000134d0s0
d150 1 1 /dev/dsk/c0t600507680C8082780000000000000133d0s0
d732 -p d700 -o 3498098560 -b 629145600
d700 2 1 /dev/dsk/c0t600507680C808278000000000000029Bd0s0 \
         1 /dev/dsk/c0t600507680C808278000000000000029Cd0s0
d731 -p d700 -o 3288383328 -b 209715200
d909 -p d900 -o 318783776 -b 4194304
d900 2 1 /dev/dsk/c0t600507680C808278000000000000049Fd0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000004AFd0s0
d908 -p d900 -o 308297984 -b 10485760
d907 -p d900 -o 287326432 -b 20971520
d906 -p d900 -o 285229248 -b 2097152
d905 -p d900 -o 283132064 -b 2097152
d904 -p d900 -o 230703232 -b 52428800
d903 -p d900 -o 178274400 -b 52428800
d902 -p d900 -o 73416768 -b 104857600
d901 -p d900 -o 16416 -b 73400320
d807 -p d800 -o 102777088 -b 1024000  -o 20987968 -b 1048576  -o 26210752 -b 1024000
d800 1 1 /dev/dsk/c0t600507680C8082780000000000000421d0s0
d806 -p d800 -o 81805504 -b 4194304
d805 -p d800 -o 60833952 -b 20971520  -o 85999840 -b 16777216
d804 -p d800 -o 56639616 -b 4194304
d803 -p d800 -o 54542432 -b 2097152  -o 103801120 -b 1023680  -o 22036576 -b 4174144
d801 -p d800 -o 16416 -b 20971520
d873 -p d870 -o 184565856 -b 39845888
d870 1 1 /dev/dsk/c0t600507680C8082780000000000000423d0s0
d872 -p d870 -o 102776896 -b 81788928
d871 -p d870 -o 16416 -b 102760448
d857 -p d850 -o 1289781472 -b 3121152
d850 1 1 /dev/dsk/c0t600507680C8082780000000000000422d0s0
d856 -p d850 -o 1287684288 -b 2097152
d855 -p d850 -o 1283489952 -b 4194304
d854 -p d850 -o 1279295616 -b 4194304
d853 -p d850 -o 1258324064 -b 20971520
d852 -p d850 -o 629178432 -b 629145600
d851 -p d850 -o 32800 -b 629145600
d836 -p d820 -o 524280352 -b 1048576  -o 531620480 -b 1024000
d820 2 1 /dev/dsk/c0t600507680C8082780000000000000420d0s0 \
         1 /dev/dsk/c0t600507680C8082780000000000000424d0s2
d835 -p d820 -o 519037408 -b 4194304
d834 -p d820 -o 481288640 -b 37748736  -o 529523296 -b 2097152  -o 562004704 -b 3145728
d833 -p d820 -o 477094304 -b 4194304
d832 -p d820 -o 468705664 -b 8388608
d831 -p d820 -o 342876512 -b 125829120  -o 525328960 -b 4194304  -o 553616064 -b 8388608
d830 -p d820 -o 334487872 -b 8388608  -o 607093536 -b 1024000
d829 -p d820 -o 145744160 -b 188743680  -o 532644512 -b 20971520  -o 565150464 -b 41943040
d828 -p d820 -o 141549824 -b 4194304
d827 -p d820 -o 137355488 -b 4194304
d826 -p d820 -o 135258304 -b 2097152  -o 523231744 -b 1048576
d825 -p d820 -o 126869664 -b 8388608
d824 -p d820 -o 125845632 -b 1024000
d823 -p d820 -o 104874080 -b 20971520
d822 -p d820 -o 20987968 -b 83886080
d821 -p d820 -o 16416 -b 20971520
d730 -p d700 -o 4232101664 -b 4194304
d721 -p d700 -o 3047210528 -b 16777216
d720 -p d700 -o 3034627584 -b 12582912
d719 -p d700 -o 3214982944 -b 20971520
d718 -p d700 -o 3194011392 -b 20971520
d717 -p d700 -o 3173039840 -b 20971520
d716 -p d700 -o 3152068288 -b 20971520
d715 -p d700 -o 2919284192 -b 115343360
d714 -p d700 -o 1870708160 -b 1048576000
d713 -p d700 -o 1786822048 -b 83886080  -o 3063987776 -b 41943040
d712 -p d700 -o 1782627712 -b 4194304
d711 -p d700 -o 1761656160 -b 20971520
d710 -p d700 -o 1635827008 -b 125829120
d709 -p d700 -o 3131096736 -b 20971520
d708 -p d700 -o 3110125184 -b 20971520
d707 -p d700 -o 1541455136 -b 94371840
d706 -p d700 -o 492879104 -b 1048576000
d705 -p d700 -o 482393312 -b 10485760
d704 -p d700 -o 188792000 -b 293601280  -o 3235954496 -b 52428800
d703 -p d700 -o 146848928 -b 41943040
d702 -p d700 -o 125877376 -b 20971520  -o 3105930848 -b 4194304
d701 -p d700 -o 48224 -b 125829120
d295 -p d290 -o 885014688 -b 125829120
d290 1 1 /dev/dsk/c0t600507680C8082780000000000000250d0s0
d294 -p d290 -o 171982976 -b 713031680
d293 -p d290 -o 151011424 -b 20971520  -o 1010843840 -b 10485760
d292 -p d290 -o 125845568 -b 25165824
d291 -p d290 -o 16416 -b 125829120
d281 -p d280 -o 16416 -b 73400320
d280 1 1 /dev/dsk/c0t600507680C808278000000000000024Fd0s0
d440 -p d430 -o 133185856 -b 8388608
d430 1 1 /dev/dsk/c0t600507680C8082780000000000000253d0s0
d439 -p d430 -o 124797216 -b 8388608
d438 -p d430 -o 116408576 -b 8388608
d437 -p d430 -o 111165664 -b 5242880
d436 -p d430 -o 102777024 -b 8388608
d435 -p d430 -o 100679840 -b 2097152
d434 -p d430 -o 90194048 -b 10485760
d433 -p d430 -o 81805408 -b 8388608
d432 -p d430 -o 67125312 -b 14680064
d431 -p d430 -o 16416 -b 67108864
d421 -p d420 -o 16416 -b 81788928
d420 1 1 /dev/dsk/c0t600507680C8082780000000000000252d0s0
d411 -p d410 -o 32800 -b 1132462080
d410 1 1 /dev/dsk/c0t600507680C8082780000000000000251d0s0
d381 -p d380 -o 16416 -b 71303168
d380 1 1 /dev/dsk/c0t600507680C8082780000000000000247d0s0
d371 -p d370 -o 32800 -b 1310720000
d370 1 1 /dev/dsk/c0t600507680C8082780000000000000246d0s0
d366 -p d360 -o 163594432 -b 1048576
d360 1 1 /dev/dsk/c0t600507680C8082780000000000000245d0s0
d365 -p d360 -o 132137120 -b 31457280
d364 -p d360 -o 73416832 -b 58720256
d363 -p d360 -o 41959520 -b 31457280
d362 -p d360 -o 25182272 -b 16777216
d361 -p d360 -o 16416 -b 25165824
d533 -p d500 -o 2776678112 -b 1090519040
d500 2 1 /dev/dsk/c0t600507680C80827800000000000001E4d0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000001E5d0s0
d532 -p d500 -o 2722152128 -b 54525952
d531 -p d500 -o 2699083424 -b 23068672
d530 -p d500 -o 2625683072 -b 73400320
d67 -p d50 -o 12583008 -b 41943040
d50 -m d51 d52 1
d51 1 1 /dev/dsk/c0t5000CCA07040B1F8d0s7
d52 1 1 /dev/dsk/c0t5000CCA07023D760d0s7
d66 -p d50 -o 10485824 -b 2097152
d394 -p d390 -o 631259264 -b 115343360
d390 1 1 /dev/dsk/c0t600507680C80827800000000000001DFd0s0
d393 -p d390 -o 44056672 -b 587202560
d392 -p d390 -o 41959488 -b 2097152
d391 -p d390 -o 16416 -b 41943040
d261 -p d260 -o 2080 -b 31457280
d260 1 1 /dev/dsk/c0t600507680C80827800000000000001DEd0s0
d257 -p d250 -o 392184032 -b 10485760
d250 1 1 /dev/dsk/c0t600507680C80827800000000000001DDd0s0
d256 -p d250 -o 381698240 -b 10485760
d255 -p d250 -o 88096928 -b 293601280
d254 -p d250 -o 79708288 -b 8388608
d253 -p d250 -o 75513952 -b 4194304
d252 -p d250 -o 33570880 -b 41943040
d251 -p d250 -o 16416 -b 33554432
d330 -p d320 -o 518013248 -b 94371840
d320 1 1 /dev/dsk/c0t600507680C80827800000000000001DCd0s0
d329 -p d320 -o 444612896 -b 73400320
d328 -p d320 -o 436224256 -b 8388608
d327 -p d320 -o 411058400 -b 25165824
d326 -p d320 -o 306200768 -b 104857600
d325 -p d320 -o 289423520 -b 16777216
d324 -p d320 -o 251674752 -b 37748736
d323 -p d320 -o 178274400 -b 73400320
d322 -p d320 -o 115359808 -b 62914560
d321 -p d320 -o 16416 -b 115343360
d311 -p d310 -o 16416 -b 83886080
d310 1 1 /dev/dsk/c0t600507680C80827800000000000001DBd0s0
d55 -p d50 -o 32 -b 10485760
d65 -p d50 -o 359661920 -b 31457280
d64 -p d50 -o 328204608 -b 31457280
d63 -p d50 -o 296747296 -b 31457280  -o 54526080 -b 10485760
d62 -p d50 -o 191889664 -b 104857600
d61 -p d50 -o 188743904 -b 3145728
d60 -p d50 -o 167772352 -b 20971520
d59 -p d50 -o 142606496 -b 25165824
d58 -p d50 -o 136315008 -b 6291456
d57 -p d50 -o 83886176 -b 52428800
d303 -p d30 -o 10672256 -b 2097152
d30 -m d31 d32 1
d31 1 1 /dev/dsk/c0t5000CCA07023D760d0s5
d32 1 1 /dev/dsk/c0t5000CCA07040B1F8d0s5
d301 -p d30 -o 32 -b 3356672  -o 9648224 -b 1024000
d302 -p d30 -o 3356736 -b 6291456
d106 -p d100 -o 484458688 -b 12582912
d100 5 1 /dev/dsk/c0t600507680C80827800000000000000AEd0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000000ADd0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000000AFd0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000000B0d0s0 \
         1 /dev/dsk/c0t600507680C80827800000000000000B1d0s0
d107 -p d100 -o 497041632 -b 10485760
d108 -p d100 -o 507527424 -b 6291456
d109 -p d100 -o 513818912 -b 2097152
d609 -p d600 -o 174080288 -b 2097152
d600 1 1 /dev/dsk/c0t600507680C808278000000000000010Bd0s0
d611 -p d610 -o 32800 -b 1195376640
d610 1 1 /dev/dsk/c0t600507680C808278000000000000010Cd0s0
d601 -p d600 -o 16416 -b 25165824
d602 -p d600 -o 25182272 -b 4194304
d603 -p d600 -o 29376608 -b 8388608
d604 -p d600 -o 37765248 -b 62914560
d605 -p d600 -o 100679840 -b 27262976
d606 -p d600 -o 127942848 -b 6291456
d607 -p d600 -o 134234336 -b 35651584
d608 -p d600 -o 169885952 -b 4194304
d621 -p d620 -o 16416 -b 115343360
d620 1 1 /dev/dsk/c0t600507680C808278000000000000010Dd0s0
e434069 on PROD SERVER #

I'd like to have at least that information to re-create the metadevices manually and rebuild from there.

It should be noted that this is a late Solaris 10 system;

Code:
e434069 on PROD SERVER # echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA07023D760d0 <HITACHI-H109060SESUN600G-A690 cyl 64986 alt 2 hd 27 sec 668>  solaris
          /scsi_vhci/disk@g5000cca07023d760
       1. c0t5000CCA07025DA5Cd0 <SUN600G cyl 64986 alt 2 hd 27 sec 668>
          /scsi_vhci/disk@g5000cca07025da5c
       2. c0t5000CCA07040B1F8d0 <HITACHI-H109060SESUN600G-A690 cyl 64986 alt 2 hd 27 sec 668>  solaris
          /scsi_vhci/disk@g5000cca07040b1f8
       3. c0t5000CCA0704093E4d0 <HITACHI-H109060SESUN600G-A690 cyl 64986 alt 2 hd 27 sec 668>  solaris
          /scsi_vhci/disk@g5000cca0704093e4
       4. c0t600507680C80827800000000000004A0d0 <IBM-2145-0000-3.00TB>
          /scsi_vhci/ssd@g600507680c80827800000000000004a0
       5. c0t600507680C80827800000000000004A1d0 <IBM-2145-0000 cyl 38398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000004a1
       6. c0t600507680C80827800000000000004A2d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c80827800000000000004a2
       7. c0t600507680C80827800000000000000ADd0 <IBM-2145-0000 cyl 32766 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000000ad
       8. c0t600507680C80827800000000000000AEd0 <IBM-2145-0000 cyl 32766 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000000ae
       9. c0t600507680C80827800000000000000AFd0 <IBM-2145-0000 cyl 38398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000000af
      10. c0t600507680C80827800000000000004AFd0 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c80827800000000000004af
      11. c0t600507680C80827800000000000000B0d0 <IBM-2145-0000 cyl 32766 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000000b0
      12. c0t600507680C80827800000000000000B1d0 <IBM-2145-0000 cyl 32766 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000000b1
      13. c0t600507680C808278000000000000029Bd0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c808278000000000000029b
      14. c0t600507680C808278000000000000010Bd0 <IBM-2145-0000 cyl 12798 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c808278000000000000010b
      15. c0t600507680C80827800000000000001CFd0 <IBM-2145-0000 cyl 15358 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c80827800000000000001cf
      16. c0t600507680C808278000000000000010Cd0 <IBM-2145-0000 cyl 38398 alt 2 hd 128 sec 256>
          /scsi_vhci/ssd@g600507680c808278000000000000010c
      17. c0t600507680C808278000000000000029Cd0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c808278000000000000029c
      18. c0t600507680C80827800000000000001DAd0 <IBM-2145-0000-2.00TB>
          /scsi_vhci/ssd@g600507680c80827800000000000001da
      19. c0t600507680C80827800000000000001DBd0 <IBM-2145-0000 cyl 6398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000001db
      20. c0t600507680C80827800000000000001DCd0 <IBM-2145-0000 cyl 46078 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000001dc
      21. c0t600507680C80827800000000000001DDd0 <IBM-2145-0000 cyl 26878 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000001dd
      22. c0t600507680C80827800000000000001DEd0 <IBM-2145-0000 cyl 20478 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c80827800000000000001de
      23. c0t600507680C80827800000000000001DFd0 <IBM-2145-0000 cyl 46078 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000001df
      24. c0t600507680C808278000000000000010Dd0 <IBM-2145-0000 cyl 7678 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c808278000000000000010d
      25. c0t600507680C80827800000000000001E4d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c80827800000000000001e4
      26. c0t600507680C80827800000000000001E5d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c80827800000000000001e5
      27. c0t600507680C80827800000000000004ECd0 <IBM-2145-0000 cyl 38398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000004ec
      28. c0t600507680C80827800000000000004EEd0 <IBM-2145-0000 cyl 63998 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c80827800000000000004ee
      29. c0t600507680C808278000000000000024Ed0 <IBM-2145-0000 cyl 57109 alt 2 hd 255 sec 252>
          /scsi_vhci/ssd@g600507680c808278000000000000024e
      30. c0t600507680C808278000000000000024Fd0 <IBM-2145-0000 cyl 6398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c808278000000000000024f
      31. c0t600507680C808278000000000000049Fd0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c808278000000000000049f
      32. c0t600507680C8082780000000000000245d0 <IBM-2145-0000 cyl 10238 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000245
      33. c0t600507680C8082780000000000000250d0 <IBM-2145-0000 cyl 65278 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000250
      34. c0t600507680C8082780000000000000421d0 <IBM-2145-0000 cyl 6398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000421
      35. c0t600507680C8082780000000000000423d0 <IBM-2145-0000 cyl 13822 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000423
      36. c0t600507680C8082780000000000000420d0 <IBM-2145-0000 cyl 34558 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000420
      37. c0t600507680C8082780000000000000424d0 <IBM-2145-0000 cyl 20478 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c8082780000000000000424
      38. c0t600507680C8082780000000000000134d0 <IBM-2145-0000 cyl 8190 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c8082780000000000000134
      39. c0t600507680C8082780000000000000422d0 <IBM-2145-0000 cyl 42238 alt 2 hd 128 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000422
      40. c0t600507680C8082780000000000000132d0 <IBM-2145-0000 cyl 18558 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000132
      41. c0t600507680C8082780000000000000514d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c8082780000000000000514
      42. c0t600507680C8082780000000000000515d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c8082780000000000000515
      43. c0t600507680C8082780000000000000133d0 <IBM-2145-0000 cyl 6142 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g600507680c8082780000000000000133
      44. c0t600507680C8082780000000000000561d0 <IBM-2145-0000 cyl 6398 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000561
      45. c0t600507680C8082780000000000000247d0 <IBM-2145-0000 cyl 4478 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000247
      46. c0t600507680C8082780000000000000251d0 <IBM-2145-0000 cyl 35198 alt 2 hd 128 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000251
      47. c0t600507680C8082780000000000000252d0 <IBM-2145-0000 cyl 5118 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000252
      48. c0t600507680C8082780000000000000253d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000253
      49. c0t600507680C8082780000000000000246d0 <IBM-2145-0000 cyl 40318 alt 2 hd 128 sec 256>
          /scsi_vhci/ssd@g600507680c8082780000000000000246
      50. c0t600507680C8082780000000000000516d0 <IBM-2145-0000 cyl 44556 alt 2 hd 255 sec 189>
          /scsi_vhci/ssd@g600507680c8082780000000000000516
Specify disk (enter its number): Specify disk (enter its number):
e434069 on PROD SERVER #

If you don't have that underlying information to hand, you have a problem when you try to reinstate - that's why I suggested the metarecover. It may be that the state databases have to be cleared down and the whole metadb recreated.
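
Purely as a sketch of what clearing down and recreating the state databases involves (slice names are placeholders; not something to run without a verified backup and a clear view of the replica layout):

Code:
# metadb -i                      # list the current replicas and their status flags
# metadb -d cXtYdZs7             # delete the replicas on an old or bad slice
# metadb -a -f -c 3 cAtBdCs7     # create three fresh replicas on a known-good slice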

Regards

Gull04
# 5  
Old 08-13-2019
Thank you!

I somehow doubt that the SAN-level mirroring was done correctly. Please see the outputs below and suggest. Many thanks!!

Quote:
-bash-3.2# echo|format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c6t1DCF2DBED55F13686C9CE90085FD94E9d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g1dcf2dbed55f13686c9ce90085fd94e9
1. c6t4CC2EE857E4392016C9CE90085FD94E9d0 <IBM-2145-0000 cyl 8958 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g4cc2ee857e4392016c9ce90085fd94e9
2. c6t500000E0134C5C30d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> LDISK0
/scsi_vhci/ssd@g500000e0134c5c30
3. c6t2000000087585A28d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/scsi_vhci/ssd@g2000000087585a28
Specify disk (enter its number): Specify disk (enter its number):
-bash-3.2# prtvtoc /dev/rdsk/c6t1DCF2DBED55F13686C9CE90085FD94E9d0
prtvtoc: /dev/rdsk/c6t1DCF2DBED55F13686C9CE90085FD94E9d0: No such file or directory
-bash-3.2#
-bash-3.2# prtvtoc /dev/dsk/c6t1DCF2DBED55F13686C9CE90085FD94E9d0
prtvtoc: /dev/dsk/c6t1DCF2DBED55F13686C9CE90085FD94E9d0: No such file or directory
-bash-3.2#
-bash-3.2# prtvtoc /dev/rdsk/c6t4CC2EE857E4392016C9CE90085FD94E9d0
prtvtoc: /dev/rdsk/c6t4CC2EE857E4392016C9CE90085FD94E9d0: No such file or directory
-bash-3.2#
-bash-3.2# prtvtoc /dev/rdsk/c6t500000E0134C5C30d0
prtvtoc: /dev/rdsk/c6t500000E0134C5C30d0: No such file or directory
-bash-3.2# prtvtoc /dev/rdsk/c6t2000000087585A28d0
prtvtoc: /dev/rdsk/c6t2000000087585A28d0: No such file or directory
-bash-3.2#
-bash-3.2# metastat -p
d0 -m d10 d12 1
d10 1 1 /dev/dsk/c6t500000E0134C5C30d0s0
d12 1 1 /dev/dsk/c6t2000000087585A28d0s0
d4 -m d41 d42 1
d41 1 1 /dev/dsk/c6t500000E0134C5C30d0s4
d42 1 1 /dev/dsk/c6t2000000087585A28d0s4
d3 -m d31 d32 1
d31 1 1 /dev/dsk/c6t500000E0134C5C30d0s3
d32 1 1 /dev/dsk/c6t2000000087585A28d0s3
d5 -m d51 d52 1
d51 2 1 /dev/dsk/c6t500000E0134C5C30d0s5 \
1 /dev/dsk/c6t600507680180856B8000000000000621d0s0
d52 2 1 /dev/dsk/c6t2000000087585A28d0s5 \
1 /dev/dsk/c6t600507680180856B8000000000000620d0s0
-bash-3.2#
-bash-3.2#
-bash-3.2# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c6t500000E0134C5C30d0s1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
#/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /var ufs 1 no -
#/dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s3 /home ufs 2 yes -
#/dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /s ufs 2 yes -
# Below entries are part of mirrorsets
/dev/md/dsk/d3 /dev/md/rdsk/d3 /home ufs 2 yes -
/dev/md/dsk/d4 /dev/md/rdsk/d4 /var ufs 1 no -
/dev/md/dsk/d5 /dev/md/rdsk/d5 /s ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
/dev/dsk/c6t2000000087585A28d0s1 /dev/rdsk/c6t2000000087585A28d0s1 /patch ufs 2 yes -
sharefs - /etc/dfs/sharetab sharefs - no -
-bash-3.2#
-bash-3.2#
-bash-3.2#
-bash-3.2# svcs -a
STATE STIME FMRI
disabled 12:31:03 svc:/system/device/mpxio-upgrade:default
disabled 12:31:04 svc:/network/ipfilter:default
disabled 12:31:04 svc:/network/ipsec/ike:default
disabled 12:31:04 svc:/network/ipsec/manual-key:default
disabled 12:31:04 svc:/network/rpc/keyserv:default
disabled 12:31:04 svc:/network/rpc/nisplus:default
disabled 12:31:04 svc:/network/nis/server:default
disabled 12:31:04 svc:/network/nis/client:default
disabled 12:31:04 svc:/network/ldap/client:default
disabled 12:31:04 svc:/network/winbind:default
disabled 12:31:04 svc:/network/inetd-upgrade:default
disabled 12:31:04 svc:/application/print/server:default
disabled 12:31:05 svc:/system/auditd:default
disabled 12:31:05 svc:/system/pools:default
disabled 12:31:05 svc:/system/rcap:default
disabled 12:31:05 svc:/system/patch-finish:delete
disabled 12:31:05 svc:/network/rpc/bootparams:default
disabled 12:31:05 svc:/network/nfs/server:default
disabled 12:31:05 svc:/network/rarp:default
disabled 12:31:05 svc:/network/dhcp-server:default
disabled 12:31:05 svc:/network/samba:default
disabled 12:31:05 svc:/network/wins:default
disabled 12:31:05 svc:/application/management/webmin:default
disabled 12:31:05 svc:/application/gdm2-login:default
disabled 12:31:05 svc:/network/dns/server:default
disabled 12:31:05 svc:/network/security/kadmin:default
disabled 12:31:05 svc:/network/security/krb5kdc:default
disabled 12:31:06 svc:/network/nis/passwd:default
disabled 12:31:06 svc:/network/nis/update:default
disabled 12:31:06 svc:/network/nis/xfr:default
disabled 12:31:06 svc:/network/http:apache2
disabled 12:31:06 svc:/network/http:apache24
disabled 12:31:06 svc:/network/http:tomcat8
disabled 12:31:06 svc:/network/apocd/udp:default
disabled 12:31:06 svc:/network/slp:default
disabled 12:31:06 svc:/system/filesystem/volfs:default
disabled 12:31:06 svc:/system/consadm:default
disabled 12:31:06 svc:/system/pools/dynamic:default
disabled 12:31:06 svc:/system/sar:default
disabled 12:31:06 svc:/application/management/common-agent-container-1:default
disabled 12:31:06 svc:/system/prepatch:default
disabled 12:31:06 svc:/milestone/patching:default
disabled 12:31:06 svc:/network/routing/legacy-routing:ipv4
disabled 12:31:06 svc:/network/routing/legacy-routing:ipv6
disabled 12:31:07 svc:/network/routing/ndp:default
disabled 12:31:07 svc:/network/routing/rdisc:default
disabled 12:31:07 svc:/network/ipv6-forwarding:default
disabled 12:31:07 svc:/network/routing/ripng:default
disabled 12:31:07 svc:/network/routing/zebra:quagga
disabled 12:31:07 svc:/network/routing/ripng:quagga
disabled 12:31:07 svc:/network/routing/route:default
disabled 12:31:07 svc:/network/ipv4-forwarding:default
disabled 12:31:07 svc:/network/routing/rip:quagga
disabled 12:31:07 svc:/network/routing/ospf:quagga
disabled 12:31:07 svc:/network/routing/ospf6:quagga
disabled 12:31:07 svc:/network/routing/bgp:quagga
disabled 12:31:07 svc:/system/hotplug:default
online 12:31:03 svc:/system/svc/restarter:default
online 12:31:04 svc:/network/pfil:default
online 12:31:04 svc:/network/loopback:default
online 12:31:05 svc:/system/installupdates:default
online 12:31:15 svc:/network/physical:default
online 12:31:15 svc:/system/identity:node
online 12:31:16 svc:/system/metainit:default
online 12:31:16 svc:/system/filesystem/root:default
online 12:31:16 svc:/system/scheduler:default
online 12:31:18 svc:/system/boot-archive:default
online 12:31:18 svc:/system/ibmsdd/ibmsdd-init:default
online 12:31:19 svc:/system/filesystem/usr:default
online 12:31:19 svc:/system/keymap:default
online 12:31:19 svc:/system/device/local:default
online 12:31:19 svc:/system/filesystem/minimal:default
online 12:31:19 svc:/system/rmtmpfiles:default
online 12:31:20 svc:/system/resource-mgmt:default
online 12:31:20 svc:/network/ilomconfig-interconnect:default
online 12:31:20 svc:/system/name-service-cache:default
online 12:31:20 svc:/system/identity:domain
online 12:31:20 svc:/system/cryptosvc:default
online 12:31:20 svc:/system/power:default
online 12:31:20 svc:/system/coreadm:default
online 12:31:20 svc:/system/sysevent:default
online 12:31:21 svc:/system/pkgserv:default
online 12:31:21 svc:/application/print/ppd-cache-update:default
online 12:31:22 svc:/system/manifest-import:default
online 12:31:22 svc:/system/patchchk:default
online 12:31:22 svc:/network/ipsec/ipsecalgs:default
online 12:31:22 svc:/system/picl:default
online 12:31:22 svc:/network/ipsec/policy:default
online 12:31:22 svc:/system/device/fc-fabric:default
online 12:31:23 svc:/milestone/network:default
online 12:31:23 svc:/milestone/devices:default
online 12:31:23 svc:/milestone/single-user:default
online 12:31:23 svc:/system/sysidtool:net
online 12:31:23 svc:/network/initial:default
online 12:31:23 svc:/network/service:default
online 12:31:24 svc:/network/dns/client:default
online 12:31:24 svc:/milestone/name-services:default
online 12:31:24 svc:/network/iscsi/initiator:default
online 12:31:27 svc:/milestone/sysconfig:default
online 12:31:27 svc:/system/boot-config:default
online 12:31:27 svc:/system/utmp:default
online 12:31:28 svc:/application/management/wbem:default
online 12:31:28 svc:/system/console-login:default
online 12:31:29 svc:/network/ntp:default
online 12:31:29 svc:/network/routing-setup:default
online 12:31:29 svc:/network/rpc/bind:default
online 12:31:30 svc:/network/nfs/mapid:default
online 12:31:30 svc:/network/nfs/cbd:default
offline 12:31:04 svc:/system/sysidtool:system
offline 12:31:04 svc:/network/nfs/status:default
offline 12:31:04 svc:/network/nfs/nlockmgr:default
offline 12:31:04 svc:/network/inetd:default
offline 12:31:04 svc:/network/nfs/client:default
offline 12:31:04 svc:/system/filesystem/autofs:default
offline 12:31:04 svc:/system/system-log:default
offline 12:31:05 svc:/network/smtp:sendmail
offline 12:31:05 svc:/system/cron:default
offline 12:31:05 svc:/system/mdmonitor:default
offline 12:31:05 svc:/milestone/multi-user:default
offline 12:31:05 svc:/application/management/seaport:default
offline 12:31:05 svc:/application/management/snmpdx:default
offline 12:31:05 svc:/application/management/dmi:default
offline 12:31:05 svc:/network/ssh:default
offline 12:31:05 svc:/milestone/multi-user-server:default
offline 12:31:05 svc:/application/font/fc-cache:default
offline 12:31:05 svc:/application/graphical-login/cde-login:default
offline 12:31:05 svc:/application/management/sma:default
offline 12:31:05 svc:/application/cde-printinfo:default
offline 12:31:05 svc:/application/print/ipp-listener:default
offline 12:31:06 svc:/system/sac:default
offline 12:31:06 svc:/system/dumpadm:default
offline 12:31:06 svc:/system/fmd:default
offline 12:31:06 svc:/system/webconsole:console
offline 12:31:06 svc:/system/zones:default
offline 12:31:06 svc:/system/basicreg:default
offline 12:31:06 svc:/com/sophos/sav/sav-update:default
offline 12:31:06 svc:/com/sophos/sav/sav-protect:default
offline 12:31:06 svc:/com/sophos/sav/sav-rms:default
offline 12:31:06 svc:/application/psncollector:default
offline 12:31:06 svc:/application/stosreg:default
offline 12:31:06 svc:/application/sthwreg:default
offline 12:31:07 svc:/network/shares/group:default
offline 12:31:07 svc:/network/sendmail-client:default
offline 12:31:07 svc:/system/boot-archive-update:default
maintenance 12:31:27 svc:/system/filesystem/local:default
uninitialized 12:31:04 svc:/network/rpc/gss:default
uninitialized 12:31:05 svc:/network/rpc/meta:default
uninitialized 12:31:05 svc:/application/x11/xfs:default
uninitialized 12:31:05 svc:/application/font/stfsloader:default
uninitialized 12:31:05 svc:/network/rpc/rstat:default
uninitialized 12:31:05 svc:/application/print/rfc1179:default
uninitialized 12:31:05 svc:/network/rpc/cde-calendar-manager:default
uninitialized 12:31:05 svc:/network/rpc/cde-ttdbserver:tcp
uninitialized 12:31:05 svc:/network/rpc/ocfserv:default
uninitialized 12:31:05 svc:/network/rpc/smserver:default
uninitialized 12:31:05 svc:/network/rpc/rex:default
uninitialized 12:31:05 svc:/network/rpc/mdcomm:default
uninitialized 12:31:05 svc:/network/rpc/metamed:default
uninitialized 12:31:05 svc:/network/rpc/metamh:default
uninitialized 12:31:05 svc:/network/rpc/rusers:default
uninitialized 12:31:05 svc:/network/rpc/spray:default
uninitialized 12:31:05 svc:/network/rpc/wall:default
uninitialized 12:31:05 svc:/network/cde-spc:default
uninitialized 12:31:05 svc:/network/tname:default
uninitialized 12:31:05 svc:/network/security/ktkt_warn:default
uninitialized 12:31:05 svc:/network/security/krb5_prop:default
uninitialized 12:31:06 svc:/network/telnet:default
uninitialized 12:31:06 svc:/network/nfs/rquota:default
uninitialized 12:31:06 svc:/network/uucp:default
uninitialized 12:31:06 svc:/network/chargen:dgram
uninitialized 12:31:06 svc:/network/chargen:stream
uninitialized 12:31:06 svc:/network/daytime:dgram
uninitialized 12:31:06 svc:/network/daytime:stream
uninitialized 12:31:06 svc:/network/discard:dgram
uninitialized 12:31:06 svc:/network/discard:stream
uninitialized 12:31:06 svc:/network/echo:dgram
uninitialized 12:31:06 svc:/network/echo:stream
uninitialized 12:31:06 svc:/network/time:dgram
uninitialized 12:31:06 svc:/network/time:stream
uninitialized 12:31:06 svc:/network/ftp:default
uninitialized 12:31:06 svc:/network/comsat:default
uninitialized 12:31:06 svc:/network/finger:default
uninitialized 12:31:06 svc:/network/login:eklogin
uninitialized 12:31:06 svc:/network/login:klogin
uninitialized 12:31:06 svc:/network/login:rlogin
uninitialized 12:31:06 svc:/network/rexec:default
uninitialized 12:31:06 svc:/network/shell:default
uninitialized 12:31:06 svc:/network/shell:kshell
uninitialized 12:31:06 svc:/network/talk:default
uninitialized 12:31:06 svc:/network/rpc-100235_1/rpc_ticotsord:default
uninitialized 12:31:06 svc:/network/stdiscover:default
uninitialized 12:31:06 svc:/network/stlisten:default
-bash-3.2#
# 6  
Old 08-13-2019
It has occurred to me that, since the filesystems have been ported to new devices, those devices are probably not configured as device nodes e.g. /dev/rdsk/<whatever>. Therefore, try running a configuration scan for them:

Code:
# devfsadm -c disk

and then try the prtvtoc commands again.
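
Note that prtvtoc expects a slice device (s2 is the conventional whole-disk slice on a VTOC-labelled disk), which may be part of why the earlier attempts returned "No such file or directory". A sketch:

Code:
# devfsadm -c disk
# prtvtoc /dev/rdsk/c6t1DCF2DBED55F13686C9CE90085FD94E9d0s2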
# 7  
Old 08-14-2019
Hi prvnrk,

I'd suggest that your best approach here is to take a couple of steps back with this one if you can; the way I see it, you have two options.

Option 1.

Regress the changes to the system and go back to the original configuration, bring the system up, and then attempt to bring in the Nimble storage with your existing IBM storage still available, using host mirroring to bring the Nimble disks into the mirror sets. This would prove that all is well with the system and that the Nimble storage is compatible with the installed system. Here you would create new metadevices with metainit d53 and metainit d54; once created, you would then use metattach d5 d53 and metattach d5 d54, which would give you a four-way mirror. Once everything is mirrored up you can remove the original d51 and d52 devices and clear the old storage from the OS.

Once you have the new mirrors silvered and working you can remove the old IBM disks from the mirrors; the result is that the system has been migrated to the new storage with no outage to the application. A rough sketch follows below.
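
A hedged sketch of how Option 1 might look for d5, using the existing submirror names from this thread and placeholder cXtYdZ / cAtBdC names for the two Nimble LUNs (each would need to be at least the size of d5 and be partitioned first):

Code:
# metainit d53 1 1 cXtYdZs0        # new submirror on the first Nimble LUN (placeholder name)
# metainit d54 1 1 cAtBdCs0        # new submirror on the second Nimble LUN (placeholder name)
# metattach d5 d53
# metattach d5 d54                 # d5 is now a four-way mirror; let both new legs resync
# metastat d5                      # wait until no 'Resync in progress' remains
# metadetach d5 d51                # then drop the old legs
# metadetach d5 d52
# metaclear d51
# metaclear d52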

Option 2.

Remove the Nimble storage and bring the system up - obviously without the application being started - and remediate any issues to give you a clean running server. Then, following the standard process, add the Nimble storage and configure the device tree and metadb with the new devices (d5, d51 and d52); you'll then have to restore your original data and restart the application.

Of the two options I would definitely go with the first; it's less disruptive to the system and follows a more logical progression. I'm making the assumption here that your SAN technology is switched fabric and not direct connect.

If you are using direct connect on the SAN then you would have to go a slightly different way, but both options are still open to you if you have sufficient fibre connections available.

Solaris Volume Manager (SVM), whilst fairly old, is a well proven and robust tool; in some respects its simplicity is both its greatest strength and its biggest limitation. In this case it would have made the migration to the new disks a simple task; I have successfully used it to migrate storage platforms several times without significant issues.

Regards

Gull04
