With LACP across 4 interfaces you get 4G or 40G of aggregate bandwidth.
Solaris has numerous ways of achieving bandwidth and redundancy, depending on the release (most of this will work on 10).
I will try to write about as many as I can remember off the top of my head.
For the sake of argument, let's say you have 2 cards with 2 interfaces each (net0 to net3).
They are all connected to the same physical switch.
- This is a requirement for LACP; after configuration on the Solaris host you will have one interface with 4x the bandwidth (4G with 1G switch/cards, 40G with 10G switch/cards).
- The LACP interface (aggr) is a single logical link from both the switch and the host side. You can configure the balancing algorithm on the switch for that group (the choice is switch dependent).
- The switch is a single point of failure.
SWITCH --> net0/net1/net2/net3 --> aggr0 [40G/4G]
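For reference, a minimal sketch of that LACP setup using Solaris 11 syntax (the address is an assumption, pick one for your network):

```shell
# Create an LACP aggregation over all four ports (LACP mode: active)
dladm create-aggr -L active -l net0 -l net1 -l net2 -l net3 aggr0

# Plumb the aggregation and assign an address (example address)
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4
```

On Solaris 10 the dladm syntax differs (key-based aggregations, -d options), so check dladm(1M) for your release.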
One card is connected to the first switch, while the other is connected to the second switch (or other combinations).
- You can configure LACP between two interfaces on the same switch (so now you have two aggr interfaces) and then use IPMP between the two.
- By configuring IPMP (ipadm set-ifprop ..) you can set up an active/active or active/passive configuration on the host.
- For failure detection you can use either transitive probing (the default on newer Solaris) or probing with test addresses.
- The switch-side LACP balancing algorithm mentioned in the first case still applies to those two interfaces, since you have two logical interfaces (aggr0/1) under the IPMP group (ipmp0).
SWITCH0 --> net0/net3 --> aggr0
---------------------------------------------> ipmp0 [2G/20G] [active/active or active/passive, configured on the host; BW is limited to the speed of one interface in the IPMP group]
SWITCH1 --> net2/net1 --> aggr1
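A rough sketch of this two-aggr-plus-IPMP layout on Solaris 11 (the address is an assumption):

```shell
# One LACP aggr per switch (each aggr's members must share a switch)
dladm create-aggr -L active -l net0 -l net3 aggr0
dladm create-aggr -L active -l net2 -l net1 aggr1

# Plumb both aggrs and put them in one IPMP group
ipadm create-ip aggr0
ipadm create-ip aggr1
ipadm create-ipmp -i aggr0 -i aggr1 ipmp0
ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4

# Optional: mark aggr1 as standby for an active/passive setup
ipadm set-ifprop -p standby=on -m ip aggr1
```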
Plain IPMP groups can be built between any number of switches and card ports.
- BW is limited to the speed of one interface.
- For instance, you can combine 10G and 1G interfaces in one IPMP group, making the 10G interface active and the 1G one passive (for redundancy).
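The mixed-speed case could look like this (interface names are assumptions; net0 is the 10G port, net1 the 1G port):

```shell
# Plumb both interfaces and group them
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0

# Mark the 1G interface as standby -> 10G active, 1G passive
ipadm set-ifprop -p standby=on -m ip net1

ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4
```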
DLMP aggregation is similar to LACP feature-wise regarding bandwidth, but it can span multiple switches inside your network (no same-switch requirement).
- Everything is done on the host; the switches are not involved in the actual aggregation.
- This method is easier to configure and administer (one interface for all operations). By using VLAN tagging, VNICs and flowadm(1M) you can do basically whatever you want regarding shaping and securing your traffic (L2/L3/L4) on the host.
- I would recommend this method, but note it is a Solaris 11 feature.
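A sketch of a DLMP aggregation plus a VNIC and a flow, to illustrate the host-side shaping mentioned above (the VLAN ID, flow name and bandwidth cap are assumptions):

```shell
# DLMP aggregation: spans switches, no switch-side configuration needed
dladm create-aggr -m dlmp -l net0 -l net1 -l net2 -l net3 aggr0

# VLAN-tagged VNIC on top of the aggregation (VLAN 100 as an example)
dladm create-vnic -l aggr0 -v 100 vnic0

# Example flow: cap HTTPS traffic on the VNIC at 2 Gbit/s
flowadm add-flow -l vnic0 -a transport=tcp,local_port=443 httpsflow
flowadm set-flowprop -p maxbw=2G httpsflow
```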
Hope that helps.
Regards
Peasant.