IP Networking: bonding without switch link aggregation
Post 302324355 by Eruditass on Wednesday 10th of June 2009 01:49:59 PM
heh sorry, yeah that was a typo.

Scroll down to mode 6, balance-alb (adaptive load balancing). It balances both transmit and receive traffic across the slaves and, unlike 802.3ad, it needs no link-aggregation support on the switch. These two pages cover it, and there's a rough sketch below:
LiNUX Horizon - Bonding (Port Trunking)
Tips and Tuning for Ethernet Bonding With Linux
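
For reference, a minimal sketch of a balance-alb setup, RHEL/CentOS-flavored. The device names, the IP address, and the file locations are placeholders and vary by distro, so treat this as a starting point rather than a recipe:

    # /etc/modprobe.conf (newer distros use /etc/modprobe.d/bonding.conf)
    alias bond0 bonding
    options bonding mode=balance-alb miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (one of these per slave NIC)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

After restarting networking, cat /proc/net/bonding/bond0 shows the active mode and the state of each slave, which is the quickest way to confirm balance-alb actually took effect.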
 

10 More Discussions You Might Find Interesting

1. Solaris

Link Aggregation

Hi there, I have a requirement to provide failover to our customer boxes in case of interface/switch failure. I have been looking at Solaris Link Aggregation with LACP, and I wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (2 Replies)
Discussion started by: hcclnoodles

2. UNIX for Advanced & Expert Users

Link Aggregation and LACP

Hi there, I have a requirement to provide failover to our customer boxes in case of interface/switch failure. I have been looking at Solaris Link Aggregation with LACP, and I wanted to ask a question. I've seen multiple websites that say the following. Does this also mean that if the... (1 Reply)
Discussion started by: hcclnoodles

3. IP Networking

bonding lacp and link aggregation

Hello, I am trying to get clarity on a few things and am looking for some info. In every article I have read about link aggregation and LACP, it can be used to combine physical links to create one logical link for increased bandwidth. But what it doesn't say is whether this is limited by source/dst. ... (1 Reply)
Discussion started by: jaredo

4. HP-UX

Link Aggregation HPUX

Hi, hoping someone can offer some advice on something I have not dealt with before. We have a server that seems to have some very strange configuration done on it. It has 2 physical NICs, and rather than both being aggregated into LAN900, we have one in LAN900 and one in LAN901? (See below)... (2 Replies)
Discussion started by: Andyp2704

5. Solaris

Link aggregation

Me again :) I'm trying to find a page describing the L2, L3 and L4 modes of dladm. It's nice to read "hashed by IP header", but how should I use that? On the file server it's OK to have the six interfaces serving six clients, each on its own. But an rsync connection via switch between two... (8 Replies)
Discussion started by: PatrickBaer

6. IP Networking

Interface bonding / Link aggregation (Multiple)

Hello, I've been using mode 4 with four slaves; however, looking at ifconfig showed that the traffic was not balanced correctly between the interfaces: the outgoing traffic has been a lot higher on the last slave. Example: eth0 RX 123.2 GiB TX 22.5 GiB eth1 RX 84.8 GiB TX 8.3 GiB eth2... (3 Replies)
Discussion started by: TehOne

7. HP-UX

Break Link Aggregation in HP UX.

Hi, I want to break the link aggregation. My aggregation is lan0+lan1 = lan900. Now I want to break this and put the IP on lan0. But I have a cluster environment, and this is the main database server, so it needs to be changed in the cluster script, but I don't know where I have to change it. Please... (1 Reply)
Discussion started by: mkiron

8. Solaris

Link Aggregation without LACP

Hi, I'm not from the Solaris world and some of these things are new to me. Can someone tell me if it is possible to configure link aggregation without using LACP? I am told EtherChannel was set up without LACP. (3 Replies)
Discussion started by: techy1

9. IP Networking

Link Aggregation

Hi, I have three Internet links, and I want to put a Linux box in front of a firewall so that the three links are spread across it for load balancing and failover, without closing any ports or protocols, and with only the firewall reaching the Internet. What can I use for this? Are there any scripts or services that do that... (0 Replies)
Discussion started by: mnnn

10. UNIX for Advanced & Expert Users

Bonding IEEE 802.3ad Dynamic link aggregation : Bond showing less than desired throughput

Hi All, I have configured an IEEE 802.3ad dynamic link aggregation bond named bond0, which has 4 slaves (each 25 Gb/s), on CentOS 6.8. The issue I am facing is that bonding throughput is only 50 Gb/s, not 100 Gb/s. Below are the configuration files: DEVICE=bond0 IPADDR=xx.xx.xx.xx... (1 Reply) (A note on why this happens follows this list.)
Discussion started by: omkar.jadhav
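
A quick note on that last one, since it is a common surprise: with 802.3ad the driver hashes each flow onto a single slave, so one TCP stream can never exceed one slave's line rate, and a handful of flows can all land on the same slave. The knob that spreads many flows more evenly is the transmit hash policy; here is a sketch of where it goes on a RHEL/CentOS-style setup (the file path and device name are the usual placeholders, and whether it helps depends entirely on the traffic mix):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

layer3+4 hashes on IP addresses and TCP/UDP ports, so many concurrent flows spread across the slaves; a single flow still rides one slave no matter what.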
alt(7)							 Miscellaneous Information Manual						    alt(7)

NAME
       alt - DEGPA Gigabit Ethernet interface

SYNOPSIS
       config_driver alt

DESCRIPTION
       The alt interface provides access to Gigabit Ethernet (1000 Mb/s) through the DEGPA device. The interface
       supports full-duplex operation in a switched or point-to-point configuration, and provides the following
       features:

       +  Link Autonegotiation is enabled by default. Some switches do not support Link Autonegotiation. To turn
          Link Autonegotiation off, use the following command:

              # lan_config -ialt0 -a0

          Note that you may add this command to the /etc/inet.local file to preserve the setting of Link
          Autonegotiation across system restarts.

       +  JUMBO packets are disabled by default. JUMBO packets provide a non-standard larger packet size, which
          enables the interface to carry more data with less CPU overhead. To enable JUMBO packets, use the
          following command:

              # ifconfig alt0 ipmtu 9000

          Note that there are several interoperability issues with using JUMBO packets (for example, if your
          switch goes from 1000 Mb/s to a 100 Mb/s client, JUMBO packets will not work on the 100 Mb/s LAN). In
          order to use JUMBO frames, you will need a switch that supports JUMBO frames, or a point-to-point
          configuration with a partner that supports JUMBO frames.

       +  Receive flow control is enabled. There is currently no way to turn this off.

       Gigabit Ethernet performance with TCP/IP depends on several factors, including the following:

       +  The speed at which data can be delivered to the interface. If your CPUs are busy with several tasks,
          the task using Gigabit Ethernet may not get enough run time to deliver packets. In general, faster
          CPUs will deliver better throughput.

       +  Fast access to the PCI bus is critical for high throughput. Using a 64-bit PCI slot will give you
          better performance and use fewer PCI resources than a 32-bit PCI slot. Putting the interface on the
          same PCI bus as other peripherals will degrade throughput. Each system type may also have different
          PCI-to-host speed considerations (the speed at which the PCI-to-host hardware allows the device to
          operate).

       +  The standard TCP/IP applications (for example, ftp and rcp) are not designed to run at Gigabit speeds.
          TCP applications that expect high performance should use a message size of 65000 bytes and a window
          size of 128000 bytes. Even when an application is modified to use these settings, high throughput may
          not be attainable. This is particularly true when an application is waiting for data to send (data
          from a disk, for example).
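
One rough way to sanity-check those TCP settings on a given link is a throughput tester such as iperf. That tool is not part of this manual page and is an assumption on my part, but its flags map directly onto the sizes suggested above:

    # On the receiving host: listen with a 128000-byte TCP window (an assumed iperf install)
    iperf -s -w 128000

    # On the sending host: 65000-byte writes, 128000-byte window
    iperf -c <receiver> -w 128000 -l 65000

Here <receiver> is a placeholder for the far end's address; comparing the result against a run with default sizes shows whether the larger buffers matter on your path.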
ERRORS
       The following diagnostic and error messages contain relevant information provided by the alt interface,
       and are displayed to the console. Each message begins with the adapter identification, including the
       number of the adapter.

       The alt interface could not find adequate I/O addressing on this system to operate. This is a fatal
       error, and the DEGPA-SA will not operate on this system.

       There was a memory allocation problem, or the device initialization has failed. This is indicative of a
       hardware problem.

       Indicates that the Gigabit Ethernet link is up. The Autonegotiated keyword indicates that the link was
       autonegotiated (note that this keyword appears only if autonegotiation is enabled). The
       ReceiveFlowControl keyword indicates that receive flow control is enabled on the link.

       Indicates that the link is no longer established. No communication will occur over the link while it is
       down.
RELATED INFORMATION
       Commands: ifconfig(8), lan_config(8)

       Files: inet.local(4)

       Network information: arp(7), inet(7), netintro(7)

                                                                                                          alt(7)