New to Solaris IPMP (conversion from Linux)
Post 302959076 by techy1, Wednesday 28th of October 2015, 12:23 PM
Hi,

I don't know if you still need more info, but I thought I would chime in, as I recently switched from IPMP over to link aggregation with LACP.

What we found was that, with the application we were using, load spreading was not working as expected.

Based on my research, IPMP is supposed to load-balance only outbound traffic. What I found when reviewing dlstat output was that only one port was actually being used.
- Because of this, our application was impacted. Users reported slow connectivity during peak times. We took a look at the server and there were plenty of resources available, since this was a new implementation with only one application on the server at the time. The application is network heavy. (IPMP was the configuration recommended to us; there is a possibility that it was not set up correctly.) A quick way to check the per-link traffic is shown below.
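For anyone who wants to run the same check, this is roughly what we looked at. It is only a sketch, assuming Solaris 11 with an IPMP group called ipmp0 over net0 and net1; those names are placeholders and yours will differ:

      # ipmpstat -g                 (IPMP group and its overall state)
      # ipmpstat -i                 (which underlying interfaces are active)
      # dlstat show-link -i 5       (per-link rx/tx counters, refreshed every 5 seconds)

If the tx counters only ever move on one link while the group is under load, you are seeing the same single-port behaviour we saw.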

As Solaris was pretty new to me and I came into the project halfway through, I suggested switching over to link aggregation instead. Once we switched, user-perceived performance improved, which was the goal. A rough sketch of the kind of commands involved is below.
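For reference, this is the general shape of what we ended up with, not a verbatim copy of our setup. It assumes Solaris 11, that net0 and net1 have already been freed from the old IPMP group, that aggr0 and the address are placeholders, and that the switch ports on the other side are configured for LACP:

      # dladm create-aggr -L active -l net0 -l net1 aggr0
      # ipadm create-ip aggr0
      # ipadm create-addr -T static -a 192.0.2.10/24 aggr0/v4
      # dlstat show-aggr -i 5 aggr0       (confirm traffic is now spread across both ports)

The -L active option enables LACP on the aggregation; without matching LACP configuration on the switch the aggregation will not come up properly.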

Not sure if this helps.
(I'm not a Solaris admin; I was only helping out to troubleshoot the issues.)
LAGG(4)                  BSD Kernel Interfaces Manual                  LAGG(4)

NAME
     lagg -- link aggregation and link failover interface

SYNOPSIS
     To compile this driver into the kernel, place the following line in your
     kernel configuration file:

           device lagg

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           if_lagg_load="YES"

DESCRIPTION
     The lagg interface allows aggregation of multiple network interfaces as
     one virtual lagg interface for the purpose of providing fault-tolerance
     and high-speed links.

     A lagg interface can be created using the ifconfig laggN create command.
     It can use different link aggregation protocols specified using the
     laggproto proto option.  Child interfaces can be added using the
     laggport child-iface option and removed using the -laggport child-iface
     option.

     The driver currently supports the aggregation protocols failover (the
     default), lacp, loadbalance, roundrobin, broadcast, and none.  The
     protocols determine which ports are used for outgoing traffic and
     whether a specific port accepts incoming traffic.  The interface link
     state is used to validate if the port is active or not.

     failover     Sends traffic only through the active port.  If the master
                  port becomes unavailable, the next active port is used.
                  The first interface added is the master port; any
                  interfaces added after that are used as failover devices.
                  By default, received traffic is only accepted when it is
                  received through the active port.  This constraint can be
                  relaxed by setting the net.link.lagg.failover_rx_all
                  sysctl(8) variable to a nonzero value, which is useful for
                  certain bridged network setups.

     lacp         Supports the IEEE 802.1AX (formerly 802.3ad) Link
                  Aggregation Control Protocol (LACP) and the Marker
                  Protocol.  LACP will negotiate a set of aggregable links
                  with the peer into one or more Link Aggregated Groups.
                  Each LAG is composed of ports of the same speed, set to
                  full-duplex operation.  The traffic will be balanced across
                  the ports in the LAG with the greatest total speed; in most
                  cases there will only be one LAG which contains all ports.
                  In the event of changes in physical connectivity, Link
                  Aggregation will quickly converge to a new configuration.

     loadbalance  Balances outgoing traffic across the active ports based on
                  hashed protocol header information and accepts incoming
                  traffic from any active port.  This is a static setup and
                  does not negotiate aggregation with the peer or exchange
                  frames to monitor the link.  The hash includes the Ethernet
                  source and destination address and, if available, the VLAN
                  tag and the IP source and destination address.

     roundrobin   Distributes outgoing traffic using a round-robin scheduler
                  through all active ports and accepts incoming traffic from
                  any active port.

     broadcast    Sends frames to all ports of the LAG and receives frames on
                  any port of the LAG.

     none         This protocol is intended to do nothing: it disables any
                  traffic without disabling the lagg interface itself.

     Each lagg interface is created at runtime using interface cloning.  This
     is most easily done with the ifconfig(8) create command or using the
     cloned_interfaces variable in rc.conf(5).

     The MTU of the first interface to be added is used as the lagg MTU.  All
     additional interfaces are required to have exactly the same value.

     The loadbalance and lacp modes will use the RSS hash from the network
     card if available to avoid computing one; this may give poor traffic
     distribution if the hash is invalid or uses less of the protocol header
     information.  Local hash computation can be forced per interface by
     setting the use_flowid ifconfig(8) flag.  The default for new interfaces
     is set via the net.link.lagg.default_use_flowid sysctl(8).

EXAMPLES
     Create a link aggregation using LACP with two bge(4) Gigabit Ethernet
     interfaces:

           # ifconfig bge0 up
           # ifconfig bge1 up
           # ifconfig lagg0 laggproto lacp laggport bge0 laggport bge1 \
                 192.168.1.1 netmask 255.255.255.0

     The following example uses an active failover interface to set up
     roaming between wired and wireless networks using two network devices.
     Whenever the wired master interface is unplugged, the wireless failover
     device will be used:

           # ifconfig em0 up
           # ifconfig ath0 ether 00:11:22:33:44:55
           # ifconfig wlan0 create wlandev ath0 ssid my_net up
           # ifconfig lagg0 laggproto failover laggport em0 laggport wlan0 \
                 192.168.1.1 netmask 255.255.255.0

     (Note that the MAC address of the wireless device is forced to match the
     wired device as a workaround.)

SEE ALSO
     ng_one2many(4), ifconfig(8), sysctl(8)

HISTORY
     The lagg device first appeared in FreeBSD 6.3.

AUTHORS
     The lagg driver was written under the name trunk by Reyk Floeter
     <reyk@openbsd.org>.  The LACP implementation was written by YAMAMOTO
     Takashi for NetBSD.

BUGS
     There is no way to configure LACP administrative variables, including
     system and port priorities.  The current implementation always performs
     active-mode LACP and uses 0x8000 as system and port priorities.

BSD                             October 1, 2014                            BSD