IP Networking: bonding lacp and link aggregation
Post 302187691 by ramen_noodle on Monday 21st of April 2008, 05:43:08 PM
If you have a source DTE with four 1 Gb/s interfaces trunked into a duplex-capable 10 Gb/s switch, you should easily attain your desired throughput. Whether the end host can actually receive 320 MB/s depends on its configuration. Note that LACP is far from perfect even before factoring in the latencies introduced by NFS and/or VM translation. Also, the potential throughput of your backend exceeds that of many local disk buses.
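For reference, here is a minimal sketch of that kind of trunk on the Linux side, assuming a reasonably recent host with the bonding driver and iproute2; the interface names (eth0-eth3) and the 192.168.1.10/24 address are placeholders, and the corresponding switch ports must be configured as an 802.3ad/LACP group:

      # Create an 802.3ad (LACP) bond and hash on layer3+4 so that
      # different flows can use different member links.
      modprobe bonding
      ip link add bond0 type bond mode 802.3ad miimon 100 xmit_hash_policy layer3+4
      for nic in eth0 eth1 eth2 eth3; do
          ip link set "$nic" down          # a slave must be down before enslaving
          ip link set "$nic" master bond0
      done
      ip link set bond0 up
      ip addr add 192.168.1.10/24 dev bond0

      # Confirm the partner negotiated LACP and all slaves joined one aggregator.
      cat /proc/net/bonding/bond0

Keep in mind that any hash policy only balances across flows: a single TCP stream still tops out at one member link's speed, which is often the real reason a bond benchmarks below its aggregate rate.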
 

10 More Discussions You Might Find Interesting

1. Solaris

Link Aggregation

Hi there. I have a requirement to provide failover for our customer boxes in case of interface or switch failure. I have been looking at Solaris Link Aggregation with LACP and wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (2 Replies)
Discussion started by: hcclnoodles
2 Replies

2. UNIX for Advanced & Expert Users

Link Aggregation and LACP

Hi there. I have a requirement to provide failover for our customer boxes in case of interface or switch failure. I have been looking at Solaris Link Aggregation with LACP and wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (1 Reply)
Discussion started by: hcclnoodles
1 Replies

3. IP Networking

LACP aggregation on separates switches

Hello, I'm working on an LACP architecture. I would like to know if it's possible to aggregate two links across two separate switches. Here is an example of what I want: aggregate link1 and link2 to obtain a logical 2 Gbit/s link, and also have redundancy, so that if one of them is down the traffic goes through the... (1 Reply)
Discussion started by: jbemonet
1 Replies

4. IP Networking

bonding without switch link aggregation

I have some Linux machines on which I am trying to increase single-connection throughput. They connect to a switch with two 1GbE lines, and the switch does not have link aggregation support for these machines. I have tried bonding with balance-rr and balance-alb, but the machines can only... (4 Replies)
Discussion started by: Eruditass
4 Replies

5. Solaris

Link aggregation

Me again :) I'm trying to find a page describing the L2, L3 and L4 modes of dladm. It's nice to read "hashed by IP header", but how should I use that? On the file server it's OK to have the six interfaces serving six clients, each on its own. But an rsync connection via switch between two... (8 Replies)
Discussion started by: PatrickBaer
8 Replies

6. IP Networking

Interface bonding / Link aggregation (Multiple)

Hello, I've been using mode 4 with four slaves; however, ifconfig showed that the traffic was not balanced correctly between the interfaces, and the outgoing traffic has been a lot higher on the last slave. Example: eth0 RX 123.2 GiB TX 22.5 GiB, eth1 RX 84.8 GiB TX 8.3 GiB, eth2... (3 Replies)
Discussion started by: TehOne
3 Replies

7. Red Hat

Bonding a Bond with LACP

Does anyone know if it's possible to bond two bonds together? My situation is that I have two older Cisco switches that cannot carry an LACP (bonding mode 4) aggregate between them, but separate aggregates can be set up on the switches themselves. In order to have redundancy across two switches I would... (0 Replies)
Discussion started by: christr
0 Replies

8. Solaris

Link Aggregation without LACP

Hi, I'm not from the Solaris world and some of these things are new to me. Can someone tell me if it is possible to configure link aggregation without using LACP? I am told EtherChannel was set up without LACP. (3 Replies)
Discussion started by: techy1
3 Replies

9. IP Networking

Link Aggregation

Hi, I have three Internet links and I want to put one Linux box in front of a firewall so that the three links are spread across it for load balancing and failover, without closing any ports or protocols, and with only the firewall connected to the Internet. What approach can I use for this? Are there any scripts or services that do that... (0 Replies)
Discussion started by: mnnn
0 Replies

10. UNIX for Advanced & Expert Users

Bonding IEEE 802.3ad Dynamic link aggregation : Bond showing less than desired throughput

Hi All, I have configured an IEEE 802.3ad dynamic link aggregation bond named bond0 with 4 slaves (each 25 Gb/s) on CentOS 6.8. The issue I am facing is that the bonding throughput is only 50 Gb/s, not 100 Gb/s. Below are the configuration files: DEVICE=bond0 IPADDR=xx.xx.xx.xx... (1 Reply)
Discussion started by: omkar.jadhav
1 Replies
LAGG(4) 						   BSD Kernel Interfaces Manual 						   LAGG(4)

NAME
     lagg -- link aggregation and link failover interface

SYNOPSIS
     To compile this driver into the kernel, place the following line in your
     kernel configuration file:

           device lagg

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           if_lagg_load="YES"

DESCRIPTION
     The lagg interface allows aggregation of multiple network interfaces as
     one virtual lagg interface for the purpose of providing fault-tolerance
     and high-speed links.

     A lagg interface can be created using the ifconfig laggN create command.
     It can use different link aggregation protocols specified using the
     laggproto proto option.  Child interfaces can be added using the
     laggport child-iface option and removed using the -laggport child-iface
     option.

     The driver currently supports the aggregation protocols failover (the
     default), fec, lacp, loadbalance, roundrobin, and none.  The protocols
     determine which ports are used for outgoing traffic and whether a
     specific port accepts incoming traffic.  The interface link state is
     used to validate if the port is active or not.

     failover      Sends traffic only through the active port.  If the
                   master port becomes unavailable, the next active port is
                   used.  The first interface added is the master port; any
                   interfaces added after that are used as failover devices.
                   By default, received traffic is only accepted when it is
                   received through the active port.  This constraint can be
                   relaxed by setting the net.link.lagg.failover_rx_all
                   sysctl(8) variable to a nonzero value, which is useful
                   for certain bridged network setups.

     fec           Supports Cisco EtherChannel.  This is a static setup and
                   does not negotiate aggregation with the peer or exchange
                   frames to monitor the link.

     lacp          Supports the IEEE 802.3ad Link Aggregation Control
                   Protocol (LACP) and the Marker Protocol.  LACP will
                   negotiate a set of aggregable links with the peer into
                   one or more Link Aggregated Groups.  Each LAG is composed
                   of ports of the same speed, set to full-duplex operation.
                   The traffic will be balanced across the ports in the LAG
                   with the greatest total speed; in most cases there will
                   only be one LAG which contains all ports.  In the event
                   of changes in physical connectivity, Link Aggregation
                   will quickly converge to a new configuration.

     loadbalance   Balances outgoing traffic across the active ports based
                   on hashed protocol header information and accepts
                   incoming traffic from any active port.  This is a static
                   setup and does not negotiate aggregation with the peer or
                   exchange frames to monitor the link.  The hash includes
                   the Ethernet source and destination address, and, if
                   available, the VLAN tag, and the IP source and
                   destination address.

     roundrobin    Distributes outgoing traffic using a round-robin
                   scheduler through all active ports and accepts incoming
                   traffic from any active port.

     none          This protocol is intended to do nothing: it disables any
                   traffic without disabling the lagg interface itself.

     Each lagg interface is created at runtime using interface cloning.
     This is most easily done with the ifconfig(8) create command or using
     the cloned_interfaces variable in rc.conf(5).

     The MTU of the first interface to be added is used as the lagg MTU.
     All additional interfaces are required to have exactly the same value.

EXAMPLES
     Create a 802.3ad link aggregation using LACP with two bge(4) Gigabit
     Ethernet interfaces:

           # ifconfig bge0 up
           # ifconfig bge1 up
           # ifconfig lagg0 laggproto lacp laggport bge0 laggport bge1 \
                 192.168.1.1 netmask 255.255.255.0

     The following example uses an active failover interface to set up
     roaming between wired and wireless networks using two network devices.
     Whenever the wired master interface is unplugged, the wireless failover
     device will be used:

           # ifconfig em0 up
           # ifconfig ath0 ether 00:11:22:33:44:55
           # ifconfig wlan0 create wlandev ath0 ssid my_net up
           # ifconfig lagg0 laggproto failover laggport em0 laggport wlan0 \
                 192.168.1.1 netmask 255.255.255.0

     (Note the MAC address of the wireless device is forced to match the
     wired device as a workaround.)

SEE ALSO
     ng_fec(4), ng_one2many(4), sysctl(8), ifconfig(8)

HISTORY
     The lagg device first appeared in FreeBSD 6.3.

AUTHORS
     The lagg driver was written under the name trunk by Reyk Floeter
     <reyk@openbsd.org>.  The LACP implementation was written by YAMAMOTO
     Takashi for NetBSD.

BUGS
     There is no way to configure LACP administrative variables, including
     system and port priorities.  The current implementation always performs
     active-mode LACP and uses 0x8000 as system and port priorities.

BSD                              October 18, 2010                              BSD
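If the lagg should come up at boot, the rc.conf(5) route that the DESCRIPTION mentions can be sketched as follows; this mirrors the first EXAMPLES entry, with em0/em1 and the 192.168.1.1 address as placeholders for your own interfaces and network:

      cloned_interfaces="lagg0"
      ifconfig_em0="up"
      ifconfig_em1="up"
      ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.1 netmask 255.255.255.0"

On reboot, the rc network scripts create lagg0 via interface cloning and apply the same laggproto/laggport options that the one-off ifconfig(8) commands above set by hand.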