Linux and UNIX Man Pages


xnb(4) [freebsd man page]

XNB(4)							   BSD Kernel Interfaces Manual 						    XNB(4)

NAME
     xnb -- Xen Paravirtualized Backend Ethernet Driver
SYNOPSIS
     To compile this driver into the kernel, place the following lines in your kernel configuration file:

           options XENHVM
           device xenpci
DESCRIPTION
     The xnb driver provides the back half of a paravirtualized xen(4) network connection.  The netback and netfront drivers appear
     to their respective operating systems as Ethernet devices linked by a crossover cable.  Typically, xnb will run on Domain 0 and
     the netfront driver will run on a guest domain.  However, it is also possible to run xnb on a guest domain.  It may be bridged
     or routed to provide the netfront's domain access to other guest domains or to a physical network.

     In most respects, the xnb device appears to the OS as any other Ethernet device.  It can be configured at runtime entirely with
     ifconfig(8).  In particular, it supports MAC changing, arbitrary MTU sizes, checksum offload for IP, UDP, and TCP for both
     receive and transmit, and TSO.  However, see CAVEATS before enabling txcsum, rxcsum, or tso.
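     As a minimal sketch, assuming the backend interface attached as xnb0 and the physical interface is em0 (both names are examples
     and will vary), the backend could be bridged to the physical network and its offload features toggled with ifconfig(8):

           # create a bridge and join the backend and physical interfaces
           ifconfig bridge0 create
           ifconfig bridge0 addm xnb0 addm em0 up
           # offload features can be changed at runtime; see CAVEATS before enabling them
           ifconfig xnb0 -txcsum -rxcsum -tso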
SYSCTL VARIABLES
     The following read-only variables are available via sysctl(8):

     dev.xnb.%d.dump_rings
             Displays information about the ring buffers used to pass requests between the netfront and netback.  Mostly useful for
             debugging, but can also be used to get traffic statistics.

     dev.xnb.%d.unit_test_results
             Runs a builtin suite of unit tests and displays the results.  Does not affect the operation of the driver in any way.
             Note that the test suite simulates error conditions; this will result in error messages being printed to the system log.
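     For example, assuming the first xnb instance is unit 0 (the unit number is an example), the ring state could be inspected with:

           sysctl dev.xnb.0.dump_rings
           sysctl dev.xnb.0.unit_test_results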
SEE ALSO
     arp(4), netintro(4), ng_ether(4), xen(4), ifconfig(8)
HISTORY
     The xnb device driver first appeared in FreeBSD 10.0.
AUTHORS
     The xnb driver was written by Alan Somers <alans@spectralogic.com> and John Suykerbuyk <johns@spectralogic.com>.
CAVEATS
     Packets sent through Xennet pass over shared memory, so the protocol includes no form of link-layer checksum or CRC.
     Furthermore, Xennet drivers always report to their hosts that they support receive and transmit checksum offloading.  They
     "offload" the checksum calculation by simply skipping it.  That works fine for packets that are exchanged between two domains on
     the same machine.  However, when a Xennet interface is bridged to a physical interface, a correct checksum must be attached to
     any packets bound for that physical interface.  Currently, FreeBSD lacks any mechanism for an Ethernet device to inform the OS
     that newly received packets are valid even though their checksums are not.  So if the netfront driver is configured to offload
     checksum calculations, it will pass non-checksummed packets to xnb, which must then calculate the checksum in software before
     passing the packet to the OS.  For this reason, it is recommended that if xnb is bridged to a physical interface, then transmit
     checksum offloading should be disabled on the netfront.  The Xennet protocol does not have any mechanism for the netback to
     request the netfront to do this; the operator must do it manually.
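     As a sketch of that manual step: on a FreeBSD guest the netfront interface is typically xn0 (the unit number is an example and
     may differ), so transmit checksum offloading would be disabled from inside the guest domain with:

           # run inside the guest domain, not on the netback side
           ifconfig xn0 -txcsum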
BUGS
     The xnb driver does not properly checksum UDP datagrams that span more than one Ethernet frame.  Nor does it correctly checksum
     IPv6 packets.  To work around that bug, disable transmit checksum offloading on the netfront driver.
BSD								  June 6, 2014								     BSD

Check Out this Related Man Page

VTNET(4)						   BSD Kernel Interfaces Manual 						  VTNET(4)

NAME
     vtnet -- VirtIO Ethernet driver
SYNOPSIS
     To compile this driver into the kernel, place the following lines in your kernel configuration file:

           device vtnet

     Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):

           if_vtnet_load="YES"
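     The module can also be loaded on a running system; as a sketch (verify the module name on your system with kldstat(8)):

           kldload if_vtnet
           kldstat | grep vtnet    # confirm the module is loaded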
DESCRIPTION
     The vtnet device driver provides support for VirtIO Ethernet devices.  If the hypervisor advertises the appropriate features,
     the vtnet driver supports TCP/UDP checksum offload for both transmit and receive, TCP segmentation offload (TSO), TCP large
     receive offload (LRO), hardware VLAN tag stripping/insertion, a multicast hash filter, and Jumbo Frames (up to 9216 bytes),
     which can be configured via the interface MTU setting.  Selecting an MTU larger than 1500 bytes with the ifconfig(8) utility
     configures the adapter to receive and transmit Jumbo Frames.

     For more information on configuring this device, see ifconfig(8).
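     For instance, assuming the device attached as vtnet0 (the unit number and MTU value are examples), Jumbo Frames could be
     enabled with:

           ifconfig vtnet0 mtu 9000 up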
LOADER TUNABLES
     Tunables can be set at the loader(8) prompt before booting the kernel or stored in loader.conf(5).

     hw.vtnet.csum_disable
     hw.vtnet.X.csum_disable
             This tunable disables receive and send checksum offload.  The default value is 0.

     hw.vtnet.tso_disable
     hw.vtnet.X.tso_disable
             This tunable disables TSO.  The default value is 0.

     hw.vtnet.lro_disable
     hw.vtnet.X.lro_disable
             This tunable disables LRO.  The default value is 0.

     hw.vtnet.mq_disable
     hw.vtnet.X.mq_disable
             This tunable disables multiqueue.  The default value is 0.

     hw.vtnet.mq_max_pairs
     hw.vtnet.X.mq_max_pairs
             This tunable sets the maximum number of transmit and receive queue pairs.  Multiple queues are only supported when the
             Multiqueue feature is negotiated.  This driver supports a maximum of 8 queue pairs.  The number of queue pairs used is
             the lesser of the maximum supported by the driver and the hypervisor, the number of CPUs present in the guest, and this
             tunable if not zero.  The default value is 0.
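     As an illustration (the values shown are examples, not recommendations), these tunables could be stored in loader.conf(5) as
     follows:

           hw.vtnet.csum_disable="1"      # disable checksum offload on all vtnet devices
           hw.vtnet.0.mq_max_pairs="4"    # cap queue pairs for vtnet0 only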
SEE ALSO
     arp(4), netintro(4), ng_ether(4), virtio(4), vlan(4), ifconfig(8)
HISTORY
     The vtnet driver was written by Bryan Venteicher <bryanv@FreeBSD.org>.  It first appeared in FreeBSD 9.0.
CAVEATS
     The vtnet driver only supports LRO when the hypervisor advertises the mergeable buffer feature.
BSD								January 22, 2012							     BSD