New to Solaris IPMP (conversion from Linux)


 
# 1  
Old 09-30-2015
New to Solaris IPMP (conversion from Linux)

Hi all,

I have been reading examples of how to set up IPMP and how it differs from EtherChannel. However, I am still unsure of how it really works, and I hope the gurus here can shed some light on the questions below while I lab it up for my own tests:

q1) For IPMP, there is no such thing as a bond0 / bonded interface, right?

q2) Do the interfaces in an IPMP group each keep their own MAC address?

q3) For interfaces configured active/active in an IPMP group, packets sent out by an individual interface carry that interface's own source IP and MAC address, right?
-- in short, there is no sharing of a MAC address / IP address across the 2 physical interfaces, right?

q4) If the 2 interfaces each have their own IP and source MAC, how does load sharing work across them? If I am sending a file or some packets over a TCP session, will the packets be round-robined across the interfaces?
-- if so, wouldn't there be sequencing issues or firewall session issues?

q5) With regard to q4), how does an app or the OS select which active interface to use? For a single transaction / session, will it always stick to a particular interface?
I am not very good at networking, but I don't think sending a file across a network to one destination using 2 different source IPs would work?

Looking forward to hearing your advice.

P.S. I am on Solaris 10.

Regards,
Noob

# 2  
Old 09-30-2015
The following is my experience with Solaris 10 (things change in Solaris 11).
There is dladm (data-link administration) that can create LACP aggregations (non-LACP mode works only in rare conditions). It has odd defaults, e.g. LACP timer = short (which is only reliable if both links are connected to one LAN switch). dladm creates interface names aggr1, aggr2, ... with one MAC address and hides the bonded interfaces, so applications don't see them. This aggregate type is always all-active.
Example:
Code:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
aggr1: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 47.11.12.13 netmask ffffff00 broadcast 47.11.12.255
        ether 18:a9:5:4e:47:11 
# dladm show-aggr
key: 1 (0x0001) policy: L3      address: 18:a9:5:4e:47:11 (auto)
           device       address                 speed           duplex  link    state
           bge0        18:a9:5:4e:47:11          1000  Mbps    full    up      attached
           bge1        18:a9:5:4e:47:12          1000  Mbps    full    up      attached
# dladm show-aggr -L
key: 1 (0x0001) policy: L3      address: 18:a9:5:4e:47:11 (auto)
                LACP mode: passive      LACP timer: long
    device    activity timeout aggregatable sync  coll dist defaulted expired
    bge0      passive  long    yes          no    no   no   yes       no     
    bge1      passive  long    yes          yes   yes  yes  no        no

Here "passive" means the LAN switch initiates the LACP dialogue.
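As a minimal sketch, an aggregate like the one shown above could be built as follows (Solaris 10 syntax; the bge0/bge1 devices and the 47.11.12.13 address are just the values from my example):
Code:
# dladm create-aggr -P L3 -l passive -T long -d bge0 -d bge1 1
# ifconfig aggr1 plumb 47.11.12.13 netmask 255.255.255.0 broadcast + up

-P L3 selects IP-address hashing for outbound spreading, -l passive and -T long match the LACP mode/timer shown above, and the key 1 yields the interface name aggr1. An /etc/hostname.aggr1 file with the address line makes it persistent.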
--
For completeness I show you the alternative.
One can create an IPMP group; it does not hide the interfaces, and each interface keeps its individual MAC address. De facto this only works as active/standby. (There may be an exotic LAN switch configuration that allows active/active.)
Test addresses (which perform periodic line probes) must be explicitly added.
Example with bge0 as the preferred active interface (bge1 is standby) and no test addresses (link-based failure detection):
Code:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000 
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 47.11.12.13 netmask ffffff00 broadcast 47.11.12.255
        groupname prod
        ether 18:a9:5:4e:47:11 
bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
bge1: flags=69000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 0 index 3
        inet 0.0.0.0 netmask 0 
        groupname prod
        ether 18:a9:5:4e:47:12
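
A minimal sketch of the commands behind such a setup (same names and address as above):
Code:
# ifconfig bge0 plumb 47.11.12.13 netmask 255.255.255.0 broadcast + group prod up
# ifconfig bge1 plumb group prod standby up

To make it persistent, /etc/hostname.bge0 would contain "47.11.12.13 netmask 255.255.255.0 broadcast + group prod up" and /etc/hostname.bge1 would contain "group prod standby up".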

# 3  
Old 09-30-2015
Start reading here (for Solaris 10):
https://docs.oracle.com/cd/E26505_01...html#scrolltoc
# 4  
Old 10-01-2015
Hi DukeNuke2,

I have read that documentation, but it doesn't say how the load spreading is done.

Will the multiple interfaces be using the same MAC address / IP address?

How does the system choose the interfaces when doing load sharing?
Will a single file be split over the 2 interfaces when sending out?

Regards,
Alan
# 5  
Old 10-01-2015
IPMP is for redundancy, not for load sharing!
# 6  
Old 10-01-2015
If you use LACP with 4 interfaces, you get 4G or 40G of bandwidth.

Solaris has numerous ways of achieving bandwidth and redundancy, depending on the release (most of this will work on 10).

I will try to describe as many as I can remember off the top of my head.

For the sake of argument, let's say you have 2 cards with 2 interfaces each (net0 to net3).

They are all connected to the same physical switch.
  • This is a requirement for LACP; after configuration on the Solaris host you will have one interface with 4x the bandwidth (4G with 1G switch/cards, 40G with 10G switch/cards). A sketch follows after the diagram below.
  • The LACP interface (aggr) is a single logical link from both the switch and host side. You can configure the balancing algorithm on the switch for that group (the choice is switch dependent).
  • The SWITCH is a single point of failure.

    SWITCH --> net0/net1/net2/net3 --> aggr0 [40G/4G]
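
A sketch of this first case (Solaris 11 dladm/ipadm syntax to match the net0-net3 names; Solaris 10 uses the dladm create-aggr -d dev ... key form instead, and the address below is a placeholder):
Code:
# dladm create-aggr -L active -l net0 -l net1 -l net2 -l net3 aggr0
# ipadm create-ip aggr0
# ipadm create-addr -T static -a 192.0.2.10/24 aggr0/v4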

One card is connected to the first switch, while the other is connected to the second switch (or combinations thereof):
  • You can configure LACP between the two interfaces on the same switch (so now you have two aggr interfaces) and then run IPMP between the two aggregates you created; a sketch follows after the diagram below.
  • By configuring IPMP (ipadm set-ifprop ...) you can choose an active/active or active/passive configuration on the host.
  • Another option is transitive probing (the default on newer Solaris) versus test-address probing for failure detection.
  • The switch-side LACP balancing algorithm mentioned in the first case still applies, since you now have two logical interfaces (aggr0/1) under the IPMP group (ipmp0).


    SWITCH0 --> net0/net3 --> aggr0
    ---------------------------------------------> ipmp0 [2G/20G] [Active/Active, Active/Passive host configured, BW is limited to the speed of 1 interface in IPMP group]
    SWITCH1 --> net2/net1 --> aggr1
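
A sketch of this second case under the same assumptions (Solaris 11 syntax, placeholder address; drop the last line for active/active):
Code:
# dladm create-aggr -L active -l net0 -l net3 aggr0
# dladm create-aggr -L active -l net2 -l net1 aggr1
# ipadm create-ip aggr0
# ipadm create-ip aggr1
# ipadm create-ipmp -i aggr0 -i aggr1 ipmp0
# ipadm create-addr -T static -a 192.0.2.11/24 ipmp0/v4
# ipadm set-ifprop -p standby=on -m ip aggr1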

IPMP groups can span any number of switches and card ports.
  • BW is limited to the speed of one interface.
  • For instance, you can combine 10G / 1G interfaces in one IPMP group, making the 10G interface active and the 1G interface standby (for redundancy).

DLMP aggregation is similar to LACP feature-wise regarding bandwidth, but it can span multiple switches inside your network (no same-switch requirement):
  • Everything is done on the host; the switches are not involved in the actual aggregation.
  • This method is easier to configure and administer (one interface for all operations); using VLAN tagging, VNICs and flowadm(1M) you can do basically whatever you want regarding shaping and securing your traffic (L2/L3/L4) on the host.
  • I would recommend this method, but it is a Solaris 11 feature; a minimal sketch follows below.
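
A minimal DLMP sketch (Solaris 11.1 or later; link and address are placeholders):
Code:
# dladm create-aggr -m dlmp -l net0 -l net1 aggr0
# ipadm create-ip aggr0
# ipadm create-addr -T static -a 192.0.2.12/24 aggr0/v4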

Hope that helps.
Regards
Peasant.
# 7  
Old 10-01-2015
Quote:
Originally Posted by Peasant
If you use LACP with 4 interfaces, you get 4G or 40G of bandwidth. [...]
Hi Peasant,

Thanks for the reply and the detailed response.
However, suppose we are talking about pure IPMP, without layering it on top of LACP:

How does load sharing work with 2 active interfaces in an IPMP group?
Will a single file transfer to one destination be load-balanced between the 2 interfaces?

Regards,
Noob

---------- Post updated at 04:26 AM ---------- Previous update was at 04:24 AM ----------

Quote:
Originally Posted by MadeInGermany
One can create an IPMP group; it does not hide the interfaces, and each interface keeps its individual MAC address. De facto this only works as active/standby. (There may be an exotic LAN switch configuration that allows active/active.) [...]
Hi MadeInGermany,

Why do we need an exotic LAN switch setup for active/active to work in IPMP?

Regards,
Noob

---------- Post updated at 04:27 AM ---------- Previous update was at 04:26 AM ----------

Quote:
Originally Posted by DukeNuke2
IPMP is for redundancy, not for load sharing!
Hi DukeNuke2,

I hope I am not mistaken, but the document states that IPMP can be used for load spreading and can be set up as active/active, which is the reason for my confusion ;(

Regards,
Noob