This should not happen if everything is configured properly.
I checked your initial output more carefully (sorry for that :) ).
What looks wrong to me is that you are using L2 aggregation (the aggr0 interface), that you have created two virtual switches from that one interface, and that you then used the resulting vnets to build an IPMP group inside the LDOM.
I don't think that is a supported configuration, and it looks redundant anyway.
Since you have already aggregated two interfaces (net0 and net1), which must be connected to the same physical switch, there is no need to use IPMP inside the LDOM (guest domain; I don't think that combination is supported at all, and it is possibly why you are seeing MAC collisions), nor to create multiple virtual switches over the one interface (aggr0).
This schematic should be more illuminating:
Primary domain (hypervisor - bare metal)
---> net0 <> net1 [aggr0 L2] ---> primary-vsw50 (on primary, created from aggr0 with ldm add-vsw) ---> vnet0 for guest ldom1, ldom2 (ldm add-vnet)
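In command form, a minimal sketch of that schematic, assuming the aggr0 / primary-vsw50 names from your output and illustrative guest names ldom1 / ldom2 (adjust device and domain names to your environment):

```
# In the primary domain: aggregate the two physical ports into one L2 aggregation
dladm create-aggr -l net0 -l net1 aggr0

# Create a single virtual switch backed by the aggregation
ldm add-vsw net-dev=aggr0 primary-vsw50 primary

# Give each guest domain one vnet from that switch
ldm add-vnet vnet0 primary-vsw50 ldom1
ldm add-vnet vnet0 primary-vsw50 ldom2
```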
Only one vnet is enough, since if net0 fails, all you will lose is the bandwidth of one interface.
There is no need to tag the interfaces at the hypervisor OS level (aggr5000, dladm create-vlan), since for LDOMs this is done at the vsw/vnet level (PVID, VID).
The OS-level tagging should work, but it is the legacy way to implement VLAN tagging with LDOMs.
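A rough sketch of the vsw/vnet level tagging, assuming VLAN 50 (taken from your vsw name) is where the guests should sit; the pvid/vid values are only illustrative. If you go this way, it replaces the plain add-vsw/add-vnet lines from the sketch above:

```
# Virtual switch on aggr0; vid lists the tagged VLANs it will carry
ldm add-vsw net-dev=aggr0 pvid=1 vid=50 primary-vsw50 primary

# Guest vnet placed untagged into VLAN 50 via its port VLAN id
ldm add-vnet pvid=50 vnet0 primary-vsw50 ldom1
```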
As for bare metal domains (primary, secondary), let me offer a short explanation of domains as I understand them...
For instance, say you have a SPARC T4-2 with two sockets, two 4-port network cards and two 2-port FC cards.
You can create two hardware domains - primary and secondary - in which the actual I/O hardware is split between the two domains (each gets one PCI bus with its network card and FC card, one CPU socket and part of the memory).
Now you have one T4-2 which is effectively two machines separated at the hardware level. So all LDOMs created on the primary domain will use its resources (CPU, PCI - half of them) and LDOMs on the secondary domain will use the other half.
Basically, if the socket assigned to the primary domain fails, only the primary domain and the guest LDOMs on it will fail, while the secondary domain and its guest LDOMs will continue to run.
Such setups complicate things considerably and are done on machines that have resources in a redundant manner (like 4 cards or 4 sockets, 2 physical cards per domain for redundancy, etc.).
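If you ever do go that route, the I/O split itself is done with the ldm I/O commands; a rough sketch, assuming the second PCI bus shows up as pci_1 (check the real bus names with ldm list-io first):

```
# Show the PCI buses / root complexes and which domain owns them
ldm list-io

# Move the second bus (pci_1 is an assumed name, verify with list-io)
# from the primary domain to the secondary root domain
ldm remove-io pci_1 primary
ldm add-io pci_1 secondary
```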
For your setup I guess you need (keep it simple - as per the schematic at the beginning):
One primary domain (bare metal)
One vsw created on top of the aggr0 interface in the primary domain.
One vnet interface added to each guest LDOM from that primary vsw.
One VDS (virtual disk service) in the primary domain per guest LDOM (sneezy-vds@primary, otherguestldom-vds@primary, etc.), to which you add the disks for that LDOM - see the sketch below.
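The disk part in command form - a minimal sketch, assuming the guest is called sneezy and using a purely illustrative backend device path (point it at your real disk or a file):

```
# Disk service for the guest "sneezy" in the primary domain
ldm add-vds sneezy-vds primary

# Export a backend as a volume (the device path below is just an example)
ldm add-vdsdev /dev/dsk/c0t0d1s2 vol0@sneezy-vds

# Attach the volume to the guest as a virtual disk
ldm add-vdisk vdisk0 vol0@sneezy-vds sneezy
```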
Hope that clears things up.
Regards
Peasant.