Special Forums > IP Networking > CentOS/RedHat 7 - Team or Bond: Post 303035344 by MadeInGermany on Tuesday, 21 May 2019, 04:57:31 PM
The team driver may be a more modern design with more features.
In practice I have always used the bond driver with IPv4, and it works well.
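For illustration, a minimal teamd configuration for the common active-backup case might look like the following. This is a sketch: the interface names eth1/eth2 and the device name team0 are placeholders, and the full option set is documented in the teamd.conf(5) man page below.

```json
{
  "device": "team0",
  "runner": {"name": "activebackup"},
  "link_watch": {"name": "ethtool"},
  "ports": {"eth1": {}, "eth2": {}}
}
```

The rough bond-driver equivalent would be a bonding interface with mode=active-backup; the team driver expresses the same intent through the runner name.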
 

8 More Discussions You Might Find Interesting

1. Ubuntu

CentOS, Fedora & RedHat in 1 box

Hi Linux gurus, My boss has asked me to set up a box consisting of these 3 OSes (CentOS, Fedora, RedHat) for autopatching. So, whenever there are new patches for CentOS from the internet, this box will grab them and implement them; if tested OK and approved, the patches will then be pushed to Production... (23 Replies)
Discussion started by: raybakh

2. UNIX for Advanced & Expert Users

How to bond network interfaces

All, I have a quad NIC on a V880 running Solaris 9. I've heard you can bond interfaces together and get better throughput. I found this link that seems to describe the process well. However, the command mentioned (dladm) is missing. Is there some package I need to install to get this command? Thx.... (2 Replies)
Discussion started by: agcodba

3. UNIX for Dummies Questions & Answers

Centos 5.8 and RedHat 5.3 terminals strange behaviour

Hello, I have the following problem. I have CentOS 5.8 (Final) on VirtualBox. I noticed very strange behavior of the terminals when resizing them. My default shell is tcsh. Let's assume that we have a trial script try.csh with this text in it: #! /bin/tcsh -f # Trial script echo... (4 Replies)
Discussion started by: tyanata

4. IP Networking

RedHat/Centos Disable IPv6 Networking

Guide on how to disable IPv6 for CentOS and RedHat: 1) Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change NETWORKING_IPV6=yes to NETWORKING_IPV6=no 2) Edit /etc/modprobe.conf and add these lines: alias net-pf-10 off, alias ipv6 off 3) Stop the ip6tables service: ... (0 Replies)
Discussion started by: zanna91

5. Red Hat

How to Upgrade Centos 5.7 using Centos 5.8 ISO image on Vmware workstation

Dear Linux Experts, On my Windows 7 desktop, with the help of VMware Workstation (version 7.1), I created a virtual machine and installed CentOS 5.7 successfully using an ISO image. Query: Is it possible to upgrade CentOS 5.7 to version 5.8 using the CentOS 5.8 ISO image? If yes, kindly... (2 Replies)
Discussion started by: Ananthcn

6. UNIX for Advanced & Expert Users

[Solved] Cannot install KVM guest on CentOS/RedHat

Hi, I've a CentOS Server and I need to create KVM guest machine without X. /usr/sbin/virt-install --name server1 --ram 4000 --vcpus=8 --file=/srv/virtual/server1.img --file-size=20 --cdrom /tmp/server1.iso --mac=52:54:00:fd:48:7c The iso was created with cobbler... So, now the machine is... (5 Replies)
Discussion started by: hiddenshadow

7. Red Hat

Bonding a Bond with LACP

Does anyone know if it's possible to bond two bonds together? My situation is I have two older Cisco switches that cannot carry an LACP (bonding mode 4) aggregate between them, but separate aggregates can be set up on the switches themselves. In order to have redundancy across the two switches I would... (0 Replies)
Discussion started by: christr

8. Red Hat

Bond Configuration change not reflected.

I have changed the bond configuration (mode) from TLB to ALB in the modprobe.conf file, but after a restart of the machine the change is not reflected in the system. Doing >>cat /sys/class/net/bond0/bonding/mode still shows TLB mode, whereas alb exists in the modprobe.conf file. Can somebody help me out to... (1 Reply)
Discussion started by: Anjan Ganguly
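A note on discussion 8 above: the bonding driver reads its mode at module load time, so editing modprobe.conf alone does not change a running bond; the bonding module has to be reloaded (or the mode written via sysfs while the bond has no slaves and is down). A sketch of the relevant modprobe.conf lines, where bond0 and the option values are examples, not a prescription:

```
alias bond0 bonding
options bonding mode=balance-alb miimon=100
```

After a module reload, cat /sys/class/net/bond0/bonding/mode should reflect the new mode.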
TEAMD.CONF(5)						     Team daemon configuration						     TEAMD.CONF(5)

NAME
       teamd.conf -- libteam daemon configuration file

DESCRIPTION
       teamd uses JSON format configuration.

OPTIONS
       device (string, mandatory)
              Desired name of the new team device.

       hwaddr (string)
              Desired hardware address of the new team device. The usual MAC address format is accepted.

       runner.name (string, mandatory)
              Name of the team device runner. The following runners are available:

              broadcast -- Simple runner which directs the team device to transmit packets via all ports.

              roundrobin -- Simple runner which directs the team device to transmit packets in a round-robin fashion.

              activebackup -- Watches for link changes and selects an active port to be used for data transfers.

              loadbalance -- For passive load balancing, the runner only sets up a BPF hash function which determines the port for packet transmission. For active load balancing, the runner moves hashes among the available ports, trying to reach a perfect balance.

              lacp -- Implements the 802.3ad LACP protocol. Can use the same Tx port selection possibilities as the loadbalance runner.

       notify_peers.count (int)
              Number of bursts of unsolicited NAs and gratuitous ARP packets sent after a port is enabled or disabled.
              Default: 0 (disabled)
              Default for activebackup runner: 1

       notify_peers.interval (int)
              Value is a positive number in milliseconds. Specifies the interval between bursts of notify-peer packets.
              Default: 0

       mcast_rejoin.count (int)
              Number of bursts of multicast group rejoin requests sent after a port is enabled or disabled.
              Default: 0 (disabled)
              Default for activebackup runner: 1

       mcast_rejoin.interval (int)
              Value is a positive number in milliseconds. Specifies the interval between bursts of multicast group rejoin requests.
              Default: 0

       link_watch.name | ports.PORTIFNAME.link_watch.name (string)
              Name of the link watcher to be used. The following link watchers are available:

              ethtool -- Uses the libteam library to get port ethtool state changes.

              arp_ping -- ARP requests are sent through a port. If an ARP reply is received, the link is considered to be up.

              nsna_ping -- Similar to the previous, except that it uses the IPv6 Neighbor Solicitation / Neighbor Advertisement mechanism.
              This is an alternative to arp_ping and comes in handy in pure-IPv6 environments.

       ports (object)
              List of ports (network devices) to be used in the team device. See the examples for more information.

       ports.PORTIFNAME.queue_id (int)
              ID of the queue which this port should be mapped to.
              Default: None

ACTIVE-BACKUP RUNNER SPECIFIC OPTIONS
       runner.hwaddr_policy (string)
              Defines the policy for how hardware addresses of the team device and port devices should be set during the team's lifetime. The following are available:

              same_all -- All ports will always have the same hardware address as the associated team device.

              by_active -- The team device adopts the hardware address of the currently active port. This is useful when the port device is not able to change its hardware address.

              only_active -- Only the active port adopts the hardware address of the team device. The others keep their own.

              Default: same_all

       ports.PORTIFNAME.prio (int)
              Port priority. A higher number means higher priority.
              Default: 0

       ports.PORTIFNAME.sticky (bool)
              Flag which indicates whether the port is sticky. If set, the port does not get unselected when another port with higher priority or better parameters becomes available.
              Default: false

LOAD BALANCE RUNNER SPECIFIC OPTIONS
       runner.tx_hash (array)
              List of fragment types (strings) which should be used for packet Tx hash computation. The following are available:

              eth -- Uses source and destination MAC addresses.
              vlan -- Uses VLAN id.
              ipv4 -- Uses source and destination IPv4 addresses.
              ipv6 -- Uses source and destination IPv6 addresses.
              ip -- Uses source and destination IPv4 and IPv6 addresses.
              l3 -- Uses source and destination IPv4 and IPv6 addresses.
              tcp -- Uses source and destination TCP ports.
              udp -- Uses source and destination UDP ports.
              sctp -- Uses source and destination SCTP ports.
              l4 -- Uses source and destination TCP, UDP, and SCTP ports.

       runner.tx_balancer.name (string)
              Name of the active Tx balancer. Active Tx balancing is disabled by default. The only available value is basic.
              Default: None

       runner.tx_balancer.balancing_interval (int)
              Periodic interval between rebalancings, in tenths of a second.
              Default: 50

LACP RUNNER SPECIFIC OPTIONS
       runner.active (bool)
              If active is true, LACPDU frames are sent along the configured links periodically. If not, the runner acts as "speak when spoken to".
              Default: true

       runner.fast_rate (bool)
              Specifies the rate at which our link partner is asked to transmit LACPDU packets. If true, packets will be requested once per second; otherwise, every 30 seconds.

       runner.tx_hash (array)
              Same as for the load balance runner.

       runner.tx_balancer.name (string)
              Same as for the load balance runner.

       runner.tx_balancer.balancing_interval (int)
              Same as for the load balance runner.

       runner.sys_prio (int)
              System priority. The value can be 0 - 65535.
              Default: 255

       runner.min_ports (int)
              Specifies the minimum number of ports that must be active before asserting carrier in the master interface. The value can be 1 - 255.
              Default: 0

       runner.agg_select_policy (string)
              Selects the policy for how the aggregator is chosen. The following are available:

              lacp_prio -- The aggregator with the highest priority according to the LACP standard will be selected. Aggregator priority is affected by the per-port option lacp_prio.

              lacp_prio_stable -- Same as the previous one, except the selected aggregator is not replaced while it is still usable.

              bandwidth -- Selects the aggregator with the highest total bandwidth.

              count -- Selects the aggregator with the highest number of ports.

              port_options -- The aggregator with the highest priority according to the per-port options prio and sticky will be selected. This means that the aggregator containing the port with the highest priority will be selected, unless at least one of the ports in the currently selected aggregator is sticky.

              Default: lacp_prio

       ports.PORTIFNAME.lacp_prio (int)
              Port priority according to the LACP standard. A lower number means higher priority.

       ports.PORTIFNAME.lacp_key (int)
              Port key according to the LACP standard. It is only possible to aggregate ports with the same key.
              Default: 0

ETHTOOL LINK WATCH SPECIFIC OPTIONS
       link_watch.delay_up | ports.PORTIFNAME.link_watch.delay_up (int)
              Value is a positive number in milliseconds. It is the delay between the link coming up and the runner being notified about it.
              Default: 0

       link_watch.delay_down | ports.PORTIFNAME.link_watch.delay_down (int)
              Value is a positive number in milliseconds. It is the delay between the link going down and the runner being notified about it.
              Default: 0

ARP PING LINK WATCH SPECIFIC OPTIONS
       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval between ARP requests being sent.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay between link watch initialization and the first ARP request being sent.
              Default: 0

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed ARP replies. If this number is exceeded, the link is reported as down.
              Default: 3

       link_watch.source_host | ports.PORTIFNAME.link_watch.source_host (hostname)
              Hostname to be converted to an IP address and filled into the ARP request as the source address.
              Default: 0.0.0.0

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host (hostname)
              Hostname to be converted to an IP address and filled into the ARP request as the destination address.

       link_watch.validate_active | ports.PORTIFNAME.link_watch.validate_active (bool)
              Validate received ARP packets on active ports. If this is not set, all incoming ARP packets are considered a good reply.
              Default: false

       link_watch.validate_inactive | ports.PORTIFNAME.link_watch.validate_inactive (bool)
              Validate received ARP packets on inactive ports. If this is not set, all incoming ARP packets are considered a good reply.
              Default: false

       link_watch.send_always | ports.PORTIFNAME.link_watch.send_always (bool)
              By default, ARP requests are sent on active ports only. This option allows sending them on inactive ports as well.
              Default: false

NS/NA PING LINK WATCH SPECIFIC OPTIONS
       link_watch.interval | ports.PORTIFNAME.link_watch.interval (int)
              Value is a positive number in milliseconds. It is the interval between NS packets being sent.

       link_watch.init_wait | ports.PORTIFNAME.link_watch.init_wait (int)
              Value is a positive number in milliseconds. It is the delay between link watch initialization and the first NS packet being sent.

       link_watch.missed_max | ports.PORTIFNAME.link_watch.missed_max (int)
              Maximum number of missed NA reply packets. If this number is exceeded, the link is reported as down.
              Default: 3

       link_watch.target_host | ports.PORTIFNAME.link_watch.target_host (hostname)
              Hostname to be converted to an IPv6 address and filled into the NS packet as the target address.

EXAMPLES
{ "device": "team0", "runner": {"name": "roundrobin"}, "ports": {"eth1": {}, "eth2": {}} } Very basic configuration. { "device": "team0", "runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}, "ports": { "eth1": { "prio": -10, "sticky": true }, "eth2": { "prio": 100 } } } This configuration uses active-backup runner with ethtool link watcher. Port eth2 has higher priority, but the sticky flag ensures that if eth1 becomes active, it stays active while the link remains up. { "device": "team0", "runner": {"name": "activebackup"}, "link_watch": { "name": "ethtool", "delay_up": 2500, "delay_down": 1000 }, "ports": { "eth1": { "prio": -10, "sticky": true }, "eth2": { "prio": 100 } } } Similar to the previous one. Only difference is that link changes are not propagated to the runner immediately, but delays are applied. { "device": "team0", "runner": {"name": "activebackup"}, "link_watch": { "name": "arp_ping", "interval": 100, "missed_max": 30, "target_host": "192.168.23.1" }, "ports": { "eth1": { "prio": -10, "sticky": true }, "eth2": { "prio": 100 } } } This configuration uses ARP ping link watch. { "device": "team0", "runner": {"name": "activebackup"}, "link_watch": [ { "name": "arp_ping", "interval": 100, "missed_max": 30, "target_host": "192.168.23.1" }, { "name": "arp_ping", "interval": 50, "missed_max": 20, "target_host": "192.168.24.1" } ], "ports": { "eth1": { "prio": -10, "sticky": true }, "eth2": { "prio": 100 } } } Similar to the previous one, only this time two link watchers are used at the same time. { "device": "team0", "runner": { "name": "loadbalance", "tx_hash": ["eth", "ipv4", "ipv6"] }, "ports": {"eth1": {}, "eth2": {}} } Configuration for hash-based passive Tx load balancing. { "device": "team0", "runner": { "name": "loadbalance", "tx_hash": ["eth", "ipv4", "ipv6"], "tx_balancer": { "name": "basic" } }, "ports": {"eth1": {}, "eth2": {}} } Configuration for active Tx load balancing using basic load balancer. 
{ "device": "team0", "runner": { "name": "lacp", "active": true, "fast_rate": true, "tx_hash": ["eth", "ipv4", "ipv6"] }, "link_watch": {"name": "ethtool"}, "ports": {"eth1": {}, "eth2": {}} } Configuration for connection to LACP capable counterpart. SEE ALSO
       teamd(8), teamdctl(8), teamnl(8), bond2team(1)

AUTHOR
       Jiri Pirko is the original author and current maintainer of libteam.

libteam                              2013-07-09                              TEAMD.CONF(5)
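A practical consequence of the arp_ping link watch options documented above: the worst-case failure detection time is roughly the product of interval and missed_max. The sketch below adds one extra interval for the request whose missed reply pushes the count over the limit; treat it as a planning approximation, not a statement of teamd internals.

```python
def detection_time_ms(interval, missed_max):
    """Rough worst-case time (ms) before an arp_ping watcher reports a
    dead link: missed_max replies must be missed, plus one more interval
    for the request that exceeds the limit. Approximation only.
    """
    return interval * (missed_max + 1)

# With the man page's arp_ping example values (interval=100, missed_max=30):
print(detection_time_ms(100, 30))  # 3100 ms, roughly 3 seconds
```

Tightening interval and missed_max trades faster failover for more probe traffic and a higher chance of false positives on a congested link.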