Quote:
Originally Posted by
Phat
Let's talk about another aspect. For example, suppose you have 2 FC cards, each with 2 ports, so 4 ports are connected to the LUN. Assuming 3 ports fail and only 1 port remains, can we still see the LUN? And if 2 ports fail, can we still see the LUN? Regarding redundancy: with 4 lines connected to the LUN, the system is only down if all 4 lines are down/broken; as long as even 1 line is still available, the system keeps running. Please correct me on this.
In principle: yes, you can. It depends on how your "zones" are configured. So, here is a short introduction to zoning:
When you plug a network card into a network you immediately have an "any-to-any" connection. For instance, you plug a network card (and the accompanying computer) in and start an ssh session to some other computer on the network. The connection itself is immediately possible and only the remote computer decides whether you are allowed to proceed - by asking for your password or whatever. But on the network level, as in exchanging packets, the connection is immediate.
In an FC network this is not the case. When you plug your FC adapter in, it is NOT allowed to contact anybody. On the other hand there is no further authentication: once you can access something you can immediately use it. You need to create "zones" to allow, on a per-case basis, access to other entities on the network.
Now, what is a "zone"? Every item on an FC network - FC adapters, switch ports, but also LUNs - has a "WWPN", which serves about the same role as a MAC address in a normal network: it is a unique identifier. A zone is a rule stating which WWPN is allowed to contact/access which other WWPN. You can have more than one zone for an item, e.g. you may want a certain adapter to work with two disks, so you create one zone stating that adapter X is allowed to access disk A and another zone allowing adapter X to access disk B. You may also have several zones for the same disk, meaning that several adapters (and therefore maybe different systems) are allowed to access it. This is dangerous, because you usually want to avoid two systems writing to the same disk, but on the other hand you need exactly that in clusters. The cluster software will in this case make sure that only one system at a time can write to the disk.
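To make the idea concrete, here is a little sketch (in Python, purely illustrative - a real fabric switch does this in firmware, not like this) that models zoning as a set of allowed initiator/target WWPN pairs. All WWPNs below are made-up example values:

```python
# Illustrative sketch only: zoning modeled as a set of allowed WWPN pairs.
# The WWPNs are invented examples, not real addresses.

# Each zone says: this initiator WWPN may talk to this target WWPN.
zones = {
    ("10:00:00:00:c9:aa:aa:01", "50:05:07:68:01:40:be:01"),  # adapter X -> disk A
    ("10:00:00:00:c9:aa:aa:01", "50:05:07:68:01:40:be:02"),  # adapter X -> disk B
    ("10:00:00:00:c9:bb:bb:02", "50:05:07:68:01:40:be:01"),  # adapter Y -> disk A (say, a second cluster node)
}

def can_access(initiator: str, target: str) -> bool:
    """An FC port may only reach targets it is explicitly zoned to."""
    return (initiator, target) in zones

print(can_access("10:00:00:00:c9:aa:aa:01", "50:05:07:68:01:40:be:01"))  # True
print(can_access("10:00:00:00:c9:bb:bb:02", "50:05:07:68:01:40:be:02"))  # False - no zone exists
```

Note that adapter Y reaching disk A is allowed simply because a zone exists for it - this is the "several zones for the same disk" case from above, which is exactly what clusters rely on.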
So, depending on how your environment is set up (ask your storage guy - he probably knows more about zones than I do) you may (or may not) have multiple paths to access a disk, because the zoning is set up this way.
Also, a multipath driver will recognise that if you see a disk (=LUN) via such multiple paths it is still one and the same disk. If you have 4 pictures of the same house taken from different directions you understand that there is one house, not four of them. For the driver that means you may have a different device entry for each path, but there is a pseudo-device "above" these, which you then use on the LVM level. Depending on the driver this is done differently, but the principle is always the same: you have several devices (often, but not always, "hdisk"s) which represent the different views (paths) of a single LUN. Then you have a pseudo-device which represents the LUN itself, and when you address this pseudo-device the driver will use just one available path (or even several of them concurrently) to reach it.
Also notice that each adapter (physical as well as virtual) in an IBM environment has TWO WWPNs, not one! This is necessary for LPM (Live Partition Mobility), and both of these WWPNs need to be zoned.
I hope this helps.
bakunin