Hi Peasant, again, I can't thank you enough for your input.
So what we actually have is a T5-2, which has two sockets, 2x two-port FC cards and 4x gigabit Ethernet ports.
As you said, the machine is split right down the middle, with each root complex owning exactly half of the hardware, including the local hard drives.
What we have is:
1x Primary control domain (control, IO, service). Obviously all LDOMs are managed from the Primary.
1x Secondary (or what some people call 'Alternate') IO, service domain, which can see the bare-metal storage.
I'm sure I'm telling you what you already know, but it helps me to explain it out.
The idea of us having two IO, service domains (Primary and Secondary) is that we can actually take one of them down (i.e. for patching) and all guest LDOMs will continue to run, route traffic in/out, see LUNs, etc.
And this is the case. When I init 6 or shut down the Primary LDOM, all guests continue to operate via the Secondary (Alternate) domain, and vice versa.
So when I create a guest LDOM, I make sure to create two VNETs, one pointing to the Primary VSW and the other to the Secondary VSW. And when creating new LDOMs, I alternate which switch vnet0 points to, so that not all traffic always goes through the same switch.
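For what it's worth, a sketch of how I set that up with the ldm CLI (the vswitch and guest names here are just examples from my setup, not anything standard):

```shell
# Assumed names: primary-vsw0 is the virtual switch in the Primary domain,
# secondary-vsw0 the one in the Secondary/Alternate IO domain, and
# "guest1" is the guest LDOM being built.

# One vnet per virtual switch, so the guest has a path through each IO domain
ldm add-vnet vnet0 primary-vsw0 guest1      # path via the Primary
ldm add-vnet vnet1 secondary-vsw0 guest1    # path via the Secondary

# For the next guest I flip the order (vnet0 -> secondary-vsw0), so that
# default traffic is spread across both switches rather than just one.
```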
And the same principle applies to disks: I use multipathing groups (mpgroup) to ensure that each guest can see its LUNs from both IO, service domains.
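Roughly how that looks on my side (service and volume names are examples, and the LUN device path is a placeholder, not a real one):

```shell
# Assumed names: primary-vds0 / secondary-vds0 are the virtual disk
# services in each IO domain, both exporting the SAME SAN LUN.
# /dev/dsk/cXtWWNd0s2 below is a placeholder for the real LUN path.

# Export the backend through both services, tied together by the mpgroup
ldm add-vdsdev mpgroup=guest1-disk0 /dev/dsk/cXtWWNd0s2 disk0@primary-vds0
ldm add-vdsdev mpgroup=guest1-disk0 /dev/dsk/cXtWWNd0s2 disk0@secondary-vds0

# The guest gets ONE vdisk; failover between the two paths is handled
# by the mpgroup, not by anything inside the guest.
ldm add-vdisk disk0 disk0@primary-vds0 guest1
```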
I think you are correct about the IPMP guest settings; I am just reading up more on that.
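From what I've read so far, the in-guest side would look something like this with ipadm (interface names and the address are assumptions for illustration):

```shell
# Inside the guest: the two vnets show up as net0/net1.
# Create the IP interfaces, then group them into one IPMP group.
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0

# The data address goes on the group, not on the individual interfaces
ipadm create-addr -T static -a 192.168.10.21/24 ipmp0/v4
```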
I also don't pretend to completely understand the difference between the trunk (link aggregation) policies (L2, L3, etc.). I am also doing some more reading on that.
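As I understand it so far (happy to be corrected), the policy just selects which packet headers the outbound load-spreading hash is computed over:

```shell
# Create an aggregation over two ports; -P selects the load-balancing policy:
#   L2 = hash on MAC addresses, L3 = hash on IP addresses,
#   L4 = hash on TCP/UDP ports. They can be combined, e.g. L3,L4.
dladm create-aggr -P L3 -l net0 -l net1 aggr0

# The policy can be changed later without recreating the aggregation
dladm modify-aggr -P L2,L3 aggr0
```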
FYI, we also have some T5-2 servers which not only have 2x two-port FC cards but also 2x two-port Ethernet cards in addition to the 4x onboard Ethernet ports. These servers follow the same principle as the one I used in the original post, but obviously each root complex has 4 Ethernet ports for its trunk.