Solaris 10, adding new LUN from SAN storage


 
# 1  
Old 10-24-2013
Sun Solaris 10, adding new LUN from SAN storage

Hello to all,

Currently, on my Solaris box, I have one LUN (5 TB) from an EMC storage array which is working fine, and a ZFS filesystem has been created on a partition of that LUN. As you'll see further down in the logs, "c4t6006016053802E00E6A9196B6506E211d0s2" is the LUN currently configured in the system.

The storage administration team has informed me that I need to configure our Solaris server to discover a new LUN (8 TB) and create another ZFS filesystem for that new LUN too.

I don't have much information from the EMC storage side, but supposedly multipathing is enabled for both the old and the new LUN.

Here is the information I've gathered from my box:

I have only one HBA, connected to a fabric switch; both the old and the new LUN should be reachable through it.

Please note that I've already used "cfgadm" and "devfsadm" to look for any LUN changes.
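
For reference, this is roughly the rescan sequence I ran against the fabric HBA (c3, as shown in the fcinfo output below):

Code:
root@host1 # cfgadm -c configure c3          # configure any new fabric devices on controller c3
root@host1 # devfsadm -Cv                    # build /dev links for new LUNs, clean up stale ones
root@host1 # cfgadm -al -o show_SCSI_LUN     # verify the LUNs now show up as configured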


Code:
root@host1 # fcinfo hba-port -l
HBA Port WWN: 2100001b3282c706
OS Device Name: /dev/cfg/c3
Manufacturer: QLogic Corp.
Model: 375-3355-02
Firmware Version: 05.03.02
FCode/BIOS Version: BIOS: 2.02; fcode: 2.01; EFI: 2.00;
Serial Number: 0402H00-0912711667
Driver Name: qlc
Driver Version: 20100301-3.00
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b3282c706
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0



root@host1 # fcinfo remote-port -ls -p 2100001b3282c706
Remote Port WWN: 5006016247200898
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7200898
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: LUNZ
OS Device Name: /dev/rdsk/c3t5006016247200898d0s2
LUN: 31
Vendor: DGC
Product: VRAID
OS Device Name: /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
Remote Port WWN: 5006016347200898
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7200898
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: LUNZ
OS Device Name: /dev/rdsk/c3t5006016347200898d0s2
LUN: 31
Vendor: DGC
Product: VRAID
OS Device Name: /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
Remote Port WWN: 5006016847204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016847204656d0s2
Remote Port WWN: 5006016a47204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016A47204656d0s2
Remote Port WWN: 5006016847200898
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7200898
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: LUNZ
OS Device Name: /dev/rdsk/c3t5006016847200898d0s2
LUN: 31
Vendor: DGC
Product: VRAID
OS Device Name: /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
Remote Port WWN: 5006016d47204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016D47204656d0s2
Remote Port WWN: 5006016947200898
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7200898
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: LUNZ
OS Device Name: /dev/rdsk/c3t5006016947200898d0s2
LUN: 31
Vendor: DGC
Product: VRAID
OS Device Name: /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
Remote Port WWN: 5006016047204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016047204656d0s2
Remote Port WWN: 5006016247204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016247204656d0s2
Remote Port WWN: 5006016547204656
Active FC4 Types: SCSI
SCSI Target: yes
Node WWN: 50060160c7204656
Link Error Statistics:
Link Failure Count: 0
Loss of Sync Count: 0
Loss of Signal Count: 0
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 0
Invalid CRC Count: 0
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: /dev/rdsk/c3t5006016547204656d0s2

From the above output, the new LUN device paths are:
/dev/rdsk/c3t5006016847204656d0s2
/dev/rdsk/c3t5006016A47204656d0s2
/dev/rdsk/c3t5006016D47204656d0s2
/dev/rdsk/c3t5006016047204656d0s2
/dev/rdsk/c3t5006016247204656d0s2
/dev/rdsk/c3t5006016547204656d0s2


Code:
root@host1 # cfgadm -al -o show_SCSI_LUN
Ap_Id Type Receptacle Occupant Condition
c3 fc-fabric connected configured unknown
c3::5006016047204656,0 disk connected configured unknown
c3::5006016247200898,0 disk connected configured unknown
c3::5006016247200898,31 disk connected configured unknown
c3::5006016247204656,0 disk connected configured unknown
c3::5006016347200898,0 disk connected configured unknown
c3::5006016347200898,31 disk connected configured unknown
c3::5006016547204656,0 disk connected configured unknown
c3::5006016847200898,0 disk connected configured unknown
c3::5006016847200898,31 disk connected configured unknown
c3::5006016847204656,0 disk connected configured unknown
c3::5006016947200898,0 disk connected configured unknown
c3::5006016947200898,31 disk connected configured unknown
c3::5006016a47204656,0 disk connected configured unknown
c3::5006016d47204656,0 disk connected configured unknown


root@host1 # format
Searching for disks...done

c3t5006016A47204656d0: configured with capacity of 8248.22GB
c3t5006016D47204656d0: configured with capacity of 8248.22GB
c3t5006016547204656d0: configured with capacity of 8248.22GB
c3t5006016247204656d0: configured with capacity of 8248.22GB
c3t5006016047204656d0: configured with capacity of 8248.22GB
c3t5006016847204656d0: configured with capacity of 8248.22GB


AVAILABLE DISK SELECTIONS:
0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 273>
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c3t5006016A47204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016a47204656,0
2. c3t5006016D47204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016d47204656,0
3. c3t5006016547204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016547204656,0
4. c3t5006016247204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016247204656,0
5. c3t5006016247200898d0 <drive type unknown>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016247200898,0
6. c3t5006016047204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016047204656,0
7. c3t5006016847204656d0 <DGC-RAID 5-0531-8.05TB>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016847204656,0
8. c3t5006016347200898d0 <drive type unknown>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016347200898,0
9. c3t5006016847200898d0 <drive type unknown>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016847200898,0
10. c3t5006016947200898d0 <drive type unknown>
/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016947200898,0
11. c4t6006016053802E00E6A9196B6506E211d0 <DGC-VRAID-0531-5.08TB>
/scsi_vhci/ssd@g6006016053802e00e6a9196b6506e211

So, based on these logs, I assume the new LUNs have already been detected by Solaris.

Again "DGC-VRAID-0531-5.08TB" is the current configured LUN in the system, and new LUNs (DGC-RAID 5-0531-8.05TB) are the devices which I've mentioned before.

Before I get into detail, the first difference here is that the currently configured LUN sits on controller "c4" and under "/scsi_vhci/ssd@", whereas the new LUN entries sit on "c3" and under "/pci@0/pci@". The new device files also differ in naming scheme.

Another thing I find strange is that "mpathadm" does not list the new LUN at all (only the old one):

Code:
root@host1 # mpathadm list LU
        /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
                Total Path Count: 4
                Operational Path Count: 4


root@host1 # mpathadm show lu /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
Logical Unit:  /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
        mpath-support:  libmpscsi_vhci.so
        Vendor:  DGC    
        Product:  VRAID          
        Revision:  0531
        Name Type:  unknown type
        Name:  6006016053802e00e6a9196b6506e211
        Asymmetric:  yes
        Current Load Balance:  round-robin
        Logical Unit Group ID:  NA
        Auto Failback:  on
        Auto Probing:  NA

        Paths:  
                Initiator Port Name:  2100001b3282c706
                Target Port Name:  5006016847200898
                Override Path:  NA
                Path State:  OK
                Disabled:  no

                Initiator Port Name:  2100001b3282c706
                Target Port Name:  5006016247200898
                Override Path:  NA
                Path State:  OK
                Disabled:  no

                Initiator Port Name:  2100001b3282c706
                Target Port Name:  5006016347200898
                Override Path:  NA
                Path State:  OK
                Disabled:  no

                Initiator Port Name:  2100001b3282c706
                Target Port Name:  5006016947200898
                Override Path:  NA
                Path State:  OK
                Disabled:  no

        Target Port Groups:  
                ID:  2
                Explicit Failover:  yes
                Access State:  active optimized
                Target Ports:
                        Name:  5006016847200898
                        Relative ID:  9

                        Name:  5006016947200898
                        Relative ID:  10

                ID:  1
                Explicit Failover:  yes
                Access State:  active not optimized
                Target Ports:
                        Name:  5006016247200898
                        Relative ID:  3

                        Name:  5006016347200898
                        Relative ID:  4

Also, I'm fairly sure all of the new entries actually refer to the same LUN on the storage side, so why are there several device filenames (unlike the old LUN)?

Code:
root@host1 # luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:50060160c7204656  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t5006016A47204656d0s2
    Logical Path:/dev/rdsk/c3t5006016D47204656d0s2
    Logical Path:/dev/rdsk/c3t5006016847204656d0s2
    Logical Path:/dev/rdsk/c3t5006016047204656d0s2
    Logical Path:/dev/rdsk/c3t5006016247204656d0s2
    Logical Path:/dev/rdsk/c3t5006016547204656d0s2
  Node WWN:50060160c7200898  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t5006016247200898d0s2
    Logical Path:/dev/rdsk/c3t5006016847200898d0s2
    Logical Path:/dev/rdsk/c3t5006016347200898d0s2
    Logical Path:/dev/rdsk/c3t5006016947200898d0s2
  Node WWN:50060160c7200898  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2


root@host1 # luxadm display /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
  Vendor:               DGC    
  Product ID:           VRAID          
  Revision:             0531
  Serial Num:           CKM00114600743
  Unformatted capacity: 5324800.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c4t6006016053802E00E6A9196B6506E211d0s2
  /devices/scsi_vhci/ssd@g6006016053802e00e6a9196b6506e211:c,raw
   Controller           /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0
    Device Address              5006016847200898,1f
    Host controller port WWN    2100001b3282c706
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0
    Device Address              5006016247200898,1f
    Host controller port WWN    2100001b3282c706
    Class                       secondary
    State                       ONLINE
   Controller           /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0
    Device Address              5006016347200898,1f
    Host controller port WWN    2100001b3282c706
    Class                       secondary
    State                       ONLINE
   Controller           /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0
    Device Address              5006016947200898,1f
    Host controller port WWN    2100001b3282c706
    Class                       primary
    State                       ONLINE


root@host1 # luxadm display /dev/rdsk/c3t5006016847204656d0s2     # one of the new LUN paths, picked at random
DEVICE PROPERTIES for disk: /dev/rdsk/c3t5006016847204656d0s2
  Vendor:               DGC    
  Product ID:           RAID 5          
  Revision:             0531
  Serial Num:           CKM00123701178
  Unformatted capacity: 8446179.000 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c3t5006016847204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016847204656,0:c,raw
    LUN path port WWN:          5006016847204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.
  /dev/rdsk/c3t5006016A47204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016a47204656,0:c,raw
    LUN path port WWN:          5006016a47204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.
  /dev/rdsk/c3t5006016D47204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016d47204656,0:c,raw
    LUN path port WWN:          5006016d47204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.
  /dev/rdsk/c3t5006016047204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016047204656,0:c,raw
    LUN path port WWN:          5006016047204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.
  /dev/rdsk/c3t5006016247204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016247204656,0:c,raw
    LUN path port WWN:          5006016247204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.
  /dev/rdsk/c3t5006016547204656d0s2
  /devices/pci@0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006016547204656,0:c,raw
    LUN path port WWN:          5006016547204656
    Host controller port WWN:   2100001b3282c706
    Path status:                O.K.

So, can anybody please tell me what I'm missing here? Is there anything else I need to do?

Regards,
# 2  
Old 10-24-2013
OK, that's a lot of info to go through, and I don't really feel like doing that now. Sorry about that.

But I have seen Solaris 10 multipathing get a little bollixed up at times, so if you can do a reconfigure reboot you might get better results. Yes, you could work out exactly what's wrong and run the proper mpathadm commands to fix it, but a reconfigure reboot is a lot easier and faster.
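
For reference, the usual ways to kick off a reconfiguration boot are:

Code:
# either:
root@host1 # reboot -- -r                 # boot with the -r (reconfigure) flag
# or:
root@host1 # touch /reconfigure           # flag a device reconfiguration for the next boot
root@host1 # shutdown -y -g0 -i6          # and then reboot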
# 3  
Old 10-24-2013
achenle, thanks a lot for the reply.
Agreed, I also think a reconfigure reboot would clear this up.

But to be honest, I was hoping to solve this on the fly, without restarting the server. It's an active server in the network, and restarting it requires a lot of arrangement (requesting downtime, a CR, ...), you know...
# 4  
Old 10-25-2013
It seems multipathing is not active for those new LUNs from the EMC array.
You are seeing the same disk multiple times because multipathing is not claiming it.

Check the exact model of your EMC storage and modify the /kernel/drv/scsi_vhci.conf file so that multipathing covers it.
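
Purely as an illustration of the file format (the vendor/product strings and the option a CLARiiON actually needs must come from the vendor documentation, so treat the "DGC" line below as a placeholder), a third-party array entry in scsi_vhci.conf looks something like this:

Code:
# /kernel/drv/scsi_vhci.conf -- illustrative sketch of a third-party array entry only
load-balance="round-robin";
auto-failback="enable";
# Vendor ID padded to 8 characters, then the product ID, then the option name:
device-type-scsi-options-list =
        "DGC     RAID 5", "symmetric-option";
symmetric-option = 0x1000000;

Do not copy that blindly; the CLARiiON is an asymmetric array, so the entry your exact model needs may well be different.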

You will find the details on your vendor's site, either Oracle or EMC.

I believe you will need to reboot the operating system after changing that file for this to take effect.
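
If downtime is really hard to get, you could try forcing the driver to re-read its configuration on the fly, but I would still plan for the reboot, since already-attached LUNs are often only claimed by MPxIO after a reconfigure boot:

Code:
root@host1 # update_drv -vf scsi_vhci     # ask scsi_vhci to re-read scsi_vhci.conf without a reboot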

Hope this helps
Regards
Peasant.
# 5  
Old 10-25-2013
This can become very involved and there's a lot to learn to gain the knowledge required.

I suggest that you read the thread linked below and also follow the other link I provided in there to another thread. Download a copy of the EMC document about Solaris connectivity that I uploaded to Unix.com.

Sit down and study that. You'll need a very large coffee!!!

After that, post back with your questions.

Really hope that helps.

(We all acknowledge EMC's Copyright in this material and thank them for it)

https://www.unix.com/solaris/227189-solaris-vxvm-emc-lun-configuration.html