With LACP, your 4 interfaces give you 4G or 40G of aggregate bandwidth.
Solaris has numerous ways of achieving bandwidth and redundancy, depending on the release (most of this will work on Solaris 10).
I will try to write about as many as I can remember off the top of my head.
For the sake of argument, let's say you have 2 cards with 2 interfaces each (net0 to net3).
They are all connected to the same physical switch.
This is a requirement for LACP. After configuration on the Solaris host you will have one interface with 4x the bandwidth (4G with a 1G switch/cards, 40G with a 10G switch/cards).
An LACP interface (aggr) is a single logical link from both the switch and the host side. You can configure the balancing algorithm on the switch for that group (the choice is switch dependent).
The switch is a single point of failure.
SWITCH --> net0/net1/net2/net3 --> aggr0 [40G/4G]
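A minimal sketch of that setup in Solaris 11 syntax (the interface names match the example above, but the address is a placeholder; Solaris 10 uses a slightly different `dladm` syntax):

```shell
# Create an LACP aggregation over all four ports.
# -L active makes the host initiate the LACP dialogue with the switch.
dladm create-aggr -L active -l net0 -l net1 -l net2 -l net3 aggr0

# Put an IP address on the aggregated link.
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.0.2.10/24 aggr0/v4

# Verify link state and LACP partner information.
dladm show-aggr -x aggr0
```

The switch-side port group (port-channel / trunk) must be configured to match before traffic will flow.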
One card is connected to the first switch, while the other is connected to the second switch (or other combinations).
You can configure LACP between two interfaces on the same switch (so now you have two aggr interfaces) and then use IPMP between the two created aggregations.
By configuring IPMP (ipadm set-ifprop ..) you can set up an active/active or active/passive configuration on the host.
You also have the option of transitive probing (the default on newer Solaris) or test-address probing for failure detection.
The LACP switch-side balancing algorithm mentioned in the first case still applies to those two interfaces, since you have two logical interfaces (aggr0/1) under the IPMP group (ipmp0).
SWITCH0 --> net0/net3 --> aggr0
---------------------------------------------> ipmp0 [2G/20G] [active/active or active/passive, configured on the host; BW is limited to the speed of one interface in the IPMP group]
SWITCH1 --> net2/net1 --> aggr1
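A rough Solaris 11 sketch of that layered setup (the port-to-switch pairing and the address are assumptions taken from the diagram):

```shell
# Two 2-port LACP groups, one per switch.
dladm create-aggr -L active -l net0 -l net3 aggr0
dladm create-aggr -L active -l net2 -l net1 aggr1

# Put both aggregations into one IPMP group.
ipadm create-ip aggr0
ipadm create-ip aggr1
ipadm create-ipmp -i aggr0 -i aggr1 ipmp0
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4

# Optional: mark aggr1 as standby for an active/passive configuration.
ipadm set-ifprop -p standby=on -m ip aggr1
```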
You can build IPMP groups between any number of switches and card ports.
BW is limited to the speed of one interface.
For instance, you can combine 10G and 1G interfaces in one IPMP group, making the 10G interface active and the 1G interface passive (for redundancy).
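A sketch of that mixed-speed group (interface names and the address are placeholders):

```shell
# 10G active / 1G standby IPMP group.
ipadm create-ip net4                        # assumed 10G interface
ipadm create-ip net5                        # assumed 1G interface
ipadm create-ipmp -i net4 -i net5 ipmp1
ipadm set-ifprop -p standby=on -m ip net5   # 1G port carries traffic only on failover
ipadm create-addr -T static -a 192.0.2.20/24 ipmp1/v4
```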
DLMP aggregation is similar to LACP feature-wise regarding bandwidth, but it can span multiple switches inside your network (no same-switch requirement).
Everything is done on the host; the switches are not involved in the actual aggregation.
This method is easier to configure and administer (one interface for all operations). By using VLAN tagging, VNICs and flowadm(1M) you can do basically whatever you want regarding shaping and securing your traffic (L2/L3/L4) on the host.
I would recommend this method, but it is a Solaris 11 feature.
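A minimal DLMP sketch (Solaris 11.1+); the VNIC name, VLAN tag, flow name and bandwidth cap are illustrative values, not anything from the thread:

```shell
# DLMP aggregation: -m dlmp needs no switch-side configuration,
# so the four ports may go to different switches.
dladm create-aggr -m dlmp -l net0 -l net1 -l net2 -l net3 aggr0

# Carve out a VNIC on VLAN 100 for a zone or service.
dladm create-vnic -l aggr0 -v 100 vnic0

# Shape traffic on the host with flowadm: cap HTTPS at 2G.
flowadm add-flow -l vnic0 -a transport=tcp,local_port=443 https-flow
flowadm set-flowprop -p maxbw=2G https-flow
```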
Hope that helps.
Regards
Peasant.
Hi Peasant,
Thanks for the reply and the detailed response.
However, if we are talking about pure IPMP, without layering it on top of LACP:
How does load sharing work on 2 active interfaces in an IPMP group?
Will a single file transfer to 1 destination be load-balanced between the 2 interfaces?
Regards,
Noob
---------- Post updated at 04:26 AM ---------- Previous update was at 04:24 AM ----------
Quote:
Originally Posted by MadeInGermany
The following is my experience with Solaris 10 (things change in Solaris 11).
There is dladm (dynamic link aggregation) that can create LACP links (non-LACP works only in rare conditions). It has strange defaults, e.g. LACP timer=short (which is only reliable if both links are connected to one LAN switch). dladm creates interface names aggr1, aggr2, ... with one MAC address and hides the bonded interfaces, so applications don't see them. This aggregate type is always all-active.
Example:
passive means the LAN switch initiates the LACP dialogue.
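The example itself does not appear in the thread; a minimal Solaris 10 sketch of such an LACP setup, assuming interfaces bge0/bge1 and aggregation key 1:

```shell
# Solaris 10 syntax: aggregate bge0+bge1 under key 1 -> interface aggr1.
# -l active: the host initiates the LACP dialogue.
# -T long: override the short-timer default mentioned above.
dladm create-aggr -l active -T long -d bge0 -d bge1 1

# Plumb the aggregated interface and persist it across reboots.
ifconfig aggr1 plumb 192.0.2.10 netmask 255.255.255.0 up
echo "192.0.2.10 netmask 255.255.255.0 up" > /etc/hostname.aggr1
```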
--
For completeness I show you the alternative.
One can create an IPMP group; it does not hide the interfaces, and each interface keeps its individual MAC address. De facto this only works active/standby. (There is maybe an exotic LAN switch configuration that allows active/active.)
Test addresses (that do periodic line checks) must be explicitly added.
Example with preferred active bge0 interface (bge1 is standby), and no test address (link-detection-based):
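The example is missing from the thread; a sketch of a link-based Solaris 10 IPMP configuration matching that description (bge0 active, bge1 standby; the address is a placeholder):

```shell
# /etc/hostname.bge0 -- active interface carries the data address;
# no test address, so failure detection is link-based.
echo "192.0.2.10 netmask + broadcast + group ipmp0 up" > /etc/hostname.bge0

# /etc/hostname.bge1 -- standby interface, no address of its own.
echo "group ipmp0 standby up" > /etc/hostname.bge1
```

These files take effect at the next boot; the same settings can be applied live with `ifconfig`.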
Hi MadeInGermany,
Why do we need an exotic LAN setup for active/active to work in IPMP?
Regards,
Noob
---------- Post updated at 04:27 AM ---------- Previous update was at 04:26 AM ----------
Quote:
Originally Posted by DukeNuke2
IPMP is for redundancy not for load sharing!
Hi DukeNuke2,
I hope I am not mistaken, but the document states that IPMP can be used for load sharing and can be set up as active/active, which is the reason for my confusion :(