Are these SAN LUNs really not mounted?


 
# 1  
Old 12-02-2015

Hello everyone.

I've been asked to check whether something is wrong with the storage setup on these two SunOS 5.10 machines, which are being used as database servers in an Oracle RAC configuration. The DBA is complaining that they are nearly out of space, which sounds odd, since there are three 700 GB volumes on the storage array dedicated to these two servers, and they are connected via fibre channel with multipathing.

I'm not very familiar with Solaris, so I did some research and checked the mount points and such, but I could not find any evidence that these three 700 GB volumes are mounted and being utilised. I'll provide some outputs:

Here is a sample df -h output:

Code:
# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         39G    29G   9.6G    76%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    23G   1.7M    23G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4v/lib/libc_psr/libc_psr_hwcap3.so.1
                        39G    29G   9.6G    76%    /platform/sun4v/lib/libc_psr.so.1
/platform/sun4v/lib/sparcv9/libc_psr/libc_psr_hwcap3.so.1
                        39G    29G   9.6G    76%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    23G    32M    23G     1%    /tmp
swap                    23G    40K    23G     1%    /var/run
/dev/md/dsk/d30        994M   1.0M   933M     1%    /globaldevices
/dev/md/dsk/d50        219G   141G    76G    65%    /oracle

and /etc/vfstab file:

Code:
# cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/md/dsk/d20 -       -       swap    -       no      -
/dev/md/dsk/d10 /dev/md/rdsk/d10        /       ufs     1       no      logging
/dev/md/dsk/d30 /dev/md/rdsk/d30        /globaldevices  ufs     2       yes     -
/dev/md/dsk/d50 /dev/md/rdsk/d50        /oracle ufs     2       yes     logging
#/dev/dsk/c0t5000CCA03C295640d0s6       /dev/rdsk/c0t5000CCA03C295640d0s6       /local_yedek    ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

The output of the iostat -En command lists all three 700 GB volumes alongside the two 300 GB local disks:
Code:
# iostat -En
c6t0d0           Soft Errors: 0 Hard Errors: 2 Transport Errors: 0
Vendor: AMI      Product: Virtual CDROM    Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 2 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
c0t5000CCA03C295640d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: HITACHI  Product: H106030SDSUN300G Revision: A2B0 Serial No: 1233NRRS7D
Size: 300.00GB <300000000000 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c3t6d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: TEAC     Product: DV-W28SS-V       Revision: 1.0B Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0
c0t5000CCA03C298DF0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: HITACHI  Product: H106030SDSUN300G Revision: A2B0 Serial No: 1233NRVG6D
Size: 300.00GB <300000000000 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t600000E00D1000000010049200010000d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: FUJITSU  Product: ETERNUS_DX400    Revision: 0000 Serial No:
Size: 751.62GB <751619276800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t600000E00D1000000010049200050000d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: FUJITSU  Product: ETERNUS_DX400    Revision: 0000 Serial No:
Size: 751.62GB <751619276800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t600000E00D1000000010049200040000d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: FUJITSU  Product: ETERNUS_DX400    Revision: 0000 Serial No:
Size: 751.62GB <751619276800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
#

Here is the metastat -p output, in case it helps:

Code:
# metastat -p
d30 -m d31 d32 1
d31 1 1 /dev/dsk/c0t5000CCA03C295640d0s3
d32 1 1 /dev/dsk/c0t5000CCA03C298DF0d0s3
d20 -m d21 d22 1
d21 1 1 /dev/dsk/c0t5000CCA03C295640d0s1
d22 1 1 /dev/dsk/c0t5000CCA03C298DF0d0s1
d50 -m d51 d52 1
d51 1 1 /dev/dsk/c0t5000CCA03C295640d0s6
d52 1 1 /dev/dsk/c0t5000CCA03C298DF0d0s6
d10 -m d11 d12 1
d11 1 1 /dev/dsk/c0t5000CCA03C295640d0s0
d12 1 1 /dev/dsk/c0t5000CCA03C298DF0d0s0
#

There's only one output so far that could be a sign of SAN LUN utilisation, which confuses me, since I could not find any sign of the LUNs being mounted (see the fourth line from the bottom):

Code:
# iostat -xnm
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.4    4.4    4.8   18.0  0.0  0.1    1.6   12.1   1   2 md/d10 (/)
    0.2    4.4    2.4   18.0  0.0  0.1    0.0   11.7   0   2 md/d11
    0.2    4.4    2.4   18.0  0.0  0.0    0.0    5.3   0   1 md/d12
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   14.8   0   0 md/d20
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   18.5   0   0 md/d21
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   10.8   0   0 md/d22
    0.0    0.0    0.0    0.0  0.0  0.0    4.1    8.9   0   0 md/d30 (/globaldevices)
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.3   0   0 md/d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.0   0   0 md/d32
    0.5   26.1   20.6 1687.1  0.0  0.5    0.6   17.5   2   7 md/d50 (/oracle)
    0.3   26.1   10.3 1687.1  0.0  0.4    0.0   16.6   0   7 md/d51
    0.3   26.1   10.3 1687.1  0.0  0.1    0.0    5.6   0   3 md/d52
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t0d0
    0.5   31.8   12.7 1705.7  0.0  0.5    0.4   15.4   0  10 c0t5000CCA03C295640d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t6d0
    0.5   31.8   12.7 1705.7  0.0  0.2    0.3    5.0   0   4 c0t5000CCA03C298DF0d0
   69.2   19.4 37448.6  230.8  0.1  0.6    0.9    6.4   0  12 c0t600000E00D1000000010049200010000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 c0t600000E00D1000000010049200050000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 c0t600000E00D1000000010049200040000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 db1:vold(pid551)

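To make the busy devices easier to spot in outputs like that, a small filter can help. This is just a sketch: the column positions assume the standard iostat -xn layout (with -xnm the mount point is appended to the device field, so the last field may be the mount point instead):

```shell
# Filter iostat -xn style output down to devices with nonzero throughput.
# Column layout: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
filter_active() {
    awk 'NR > 2 && ($3 + $4) > 0 { print $NF }'
}

# Demo on a few lines in the iostat -xn format (abridged from above):
filter_active <<'EOF'
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.5   26.1   20.6 1687.1  0.0  0.5    0.6   17.5   2   7 md/d50
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 c0t600000E00D1000000010049200050000d0
   69.2   19.4 37448.6  230.8  0.1  0.6    0.9    6.4   0  12 c0t600000E00D1000000010049200010000d0
EOF
# prints:
# md/d50
# c0t600000E00D1000000010049200010000d0
```

On the real box this would be `iostat -xn | filter_active`.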
Any help will be much appreciated.
# 2  
Old 12-03-2015
Your iostat output clearly says that the 3 x Fujitsu 750 GB LUNs are "not ready", so I don't suppose they can possibly be mounted.

Are these new LUNs provided by the storage team to this server? Did you rescan for new storage LUNs to create the device nodes?
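On Solaris 10, a rescan typically looks something like this. This is only a sketch: the controller IDs (c2, c3) are placeholders, and you should take the real ones from the fc-fabric entries in your own `cfgadm -al` output:

```shell
# Show attachment points; FC HBAs appear as fc-fabric entries.
cfgadm -al

# Configure any unconfigured FC controllers (c2/c3 are example IDs).
cfgadm -c configure c2
cfgadm -c configure c3

# Rebuild the /dev/dsk and /dev/rdsk links and clean up stale ones.
devfsadm -Cv

# With Sun multipathing (MPxIO), luxadm can also probe for new FC devices.
luxadm probe
```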
# 3  
Old 12-03-2015
Quote:
Originally Posted by hicksd8
Your iostat output clearly says that the 3 x Fujitsu 750 GB LUNs are "not ready", so I don't suppose they can possibly be mounted.

Are these new LUNs provided by the storage team to this server? Did you rescan for new storage LUNs to create the device nodes?
These LUNs were presented to these two servers when they were first installed.

Also, both the local and SAN disks show a value of 0 for Device Not Ready in that output, yet at least one local disk is referenced in /etc/vfstab (/dev/dsk/c0t5000CCA03C295640d0s6 as /local_yedek, for example).

The thing is, there is a good amount of read and write throughput. Look at the line for c0t600000E00D1000000010049200010000d0; that's one of those LUNs from the Fujitsu DX440:

Code:
# iostat -xn
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.4    4.4    4.8   17.9  0.0  0.1    1.6   12.1   1   2 md/d10
    0.2    4.4    2.4   17.9  0.0  0.1    0.0   11.7   0   2 md/d11
    0.2    4.4    2.4   17.9  0.0  0.0    0.0    5.3   0   1 md/d12
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   14.8   0   0 md/d20
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   18.5   0   0 md/d21
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   10.8   0   0 md/d22
    0.0    0.0    0.0    0.0  0.0  0.0    4.1    8.9   0   0 md/d30
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.3   0   0 md/d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    8.0   0   0 md/d32
    0.5   26.1   20.5 1684.6  0.0  0.5    0.6   17.5   2   7 md/d50
    0.3   26.1   10.3 1684.6  0.0  0.4    0.0   16.6   0   7 md/d51
    0.3   26.1   10.3 1684.6  0.0  0.1    0.0    5.6   0   3 md/d52
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t0d0
    0.5   31.7   12.7 1703.2  0.0  0.5    0.4   15.4   0  10 c0t5000CCA03C295640d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t6d0
    0.5   31.7   12.7 1703.1  0.0  0.2    0.3    5.0   0   4 c0t5000CCA03C298DF0d0
   69.1   19.4 37386.3  230.6  0.1  0.6    0.9    6.4   0  12 c0t600000E00D1000000010049200010000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 c0t600000E00D1000000010049200050000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 c0t600000E00D1000000010049200040000d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 db1:vold(pid551)

Could it be mounted via some other method that I'm overlooking?
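Given the raw I/O on that LUN, I suppose one could check which processes have the device open. A sketch, with the device name taken from the iostat output above; the slice number (s6) is only a guess, so the real partition layout should be checked with prtvtoc first:

```shell
# List PIDs with the raw LUN device open. Solaris fuser prints PIDs on
# stdout and the file name on stderr, which makes it easy to loop over.
for pid in $(fuser /dev/rdsk/c0t600000E00D1000000010049200010000d0s6 2>/dev/null)
do
    # Map each PID back to its owner and command line.
    ps -o pid,user,args -p "$pid"
done
```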
# 4  
Old 12-03-2015
Quote:
Originally Posted by hicksd8
Your iostat output clearly says that 3 x Fujitsu 750GB Luns are "not ready" so I don't suppose they can possibly be mounted.
Where do you see that?


Quote:
Originally Posted by kacareu
I've been asked to check if something is wrong with the storage setup on these two SunOS 5.10 machines, which are being used as database servers with Oracle RAC configuration. Seems to be that the DB guy is complaining, telling that they are nearly out of space, which sounds crazy since there are three 700 GB volumes on the storage device just for these two servers and they are connected via fibre channel with multipathing.
Maybe they use Oracle ASM for storage management. In that case, you will not see any mounted filesystems on the box, because ASM accesses the LUNs as raw devices.
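A quick way to check for ASM might be something like this (a sketch; the oratab path is the Solaris default, and device ownership is only a hint, not proof):

```shell
# An ASM instance runs its own background processes; pmon is easy to spot.
ps -ef | grep '[a]sm_pmon'

# RAC installs usually register the ASM instance in oratab as well.
grep '^+ASM' /var/opt/oracle/oratab

# If ASM owns the LUNs, their raw device nodes are typically owned by the
# oracle (or grid) user rather than root.
ls -lL /dev/rdsk/c0t600000E00D*d0s*
```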
# 5  
Old 12-03-2015
Newly allocated SAN LUNs are not automatically seen by Solaris. Check whether the device nodes have been created in /dev/rdsk and /dev/dsk.

Are the LUNs listed in the 'format' output?

If not, the device node(s) need to be created.

Read this thread:

Can't see Newly created LUN by SAN admin

In particular, note post #13.

The fact that they appear in vfstab doesn't mean that they are ready to mount.
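To verify visibility without changing anything, something like the following could be tried (the device name is one of the LUNs from the iostat output above; s2 is the conventional whole-disk slice on SMI-labeled disks):

```shell
# List every disk Solaris can see without entering the interactive menu;
# the three ETERNUS LUNs should appear with their c0t600000E0...d0 names.
echo | format

# Check that the device links exist for one of the LUNs.
ls -l /dev/dsk/c0t600000E00D1000000010049200010000d0s2 \
      /dev/rdsk/c0t600000E00D1000000010049200010000d0s2

# Reading the VTOC confirms the LUN is labeled and reachable;
# an error here means the disk has no label yet.
prtvtoc /dev/rdsk/c0t600000E00D1000000010049200010000d0s2
```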