
TBRCONFIG(8)						    BSD System Manager's Manual 					      TBRCONFIG(8)

NAME
tbrconfig -- configure a token bucket regulator for an output queue

SYNOPSIS
tbrconfig interface [tokenrate [bucketsize]]
tbrconfig -d interface
tbrconfig -a

DESCRIPTION
tbrconfig configures a token bucket regulator for the output network interface queue. A token bucket regulator limits both the average amount and the instantaneous amount of packets that the underlying driver can dequeue from the network interface within the kernel. Conceptually, tokens accumulate in a bucket at the average tokenrate, up to the bucketsize. The driver can dequeue packets as long as the token count is positive, and the length of each dequeued packet is subtracted from the remaining tokens. The token count can go negative as a deficit, and packets are not dequeued from the interface queue until the tokens become positive again. The tokenrate therefore limits the average rate, and the bucketsize limits the maximum burst size.

Limiting the burst size is essential to packet scheduling, since the scheduler schedules packets backlogged at the network interface. Limiting the burst size is also needed for drivers that dequeue more packets than they can send and end up discarding the excess packets.

When the tokenrate is set higher than the actual transmission rate, the transmission complete interrupt triggers the next dequeue. On the other hand, when the tokenrate is set lower than the actual transmission rate, the transmission complete interrupt occurs before the tokens become positive; in this case, the next dequeue is triggered by a timer event. Because the kernel timer has a limited granularity, a larger bucketsize is required for a higher tokenrate.

The interface parameter is a string of the form ``name unit'', for example, ``en0''. The tokenrate parameter specifies the average rate in bits per second; ``K'' or ``M'' can be appended to tokenrate as shorthand for ``Kilo-bps'' or ``Mega-bps'', respectively. When tokenrate is omitted, tbrconfig displays the current parameter values.

The bucketsize parameter specifies the bucket size in bytes; ``K'' can be appended to bucketsize as shorthand for ``Kilo-bytes''. When bucketsize is omitted, tbrconfig assumes the regulator is driven by transmission complete interrupts and, using heuristics, assigns a small bucket size according to the tokenrate. When the keyword ``auto'' is given as bucketsize, tbrconfig assumes the regulator is driven by the kernel timer, and computes the bucket size from tokenrate and the kernel clock frequency.

If the -d flag is passed before an interface name, tbrconfig removes the token bucket regulator for the specified interface. Optionally, the -a flag may be used instead of an interface name; it instructs tbrconfig to display information about all interfaces in the system.
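As a rough illustration of the dequeue rule just described, the following is a minimal sketch in Python. It is only a model of the behaviour explained above; the class and method names are invented for illustration, and the real regulator is part of the in-kernel ALTQ code, not this script.

      import time

      class TokenBucketRegulator:
          """Simplified model of the token bucket regulator described above."""

          def __init__(self, tokenrate_bps, bucketsize_bytes):
              self.rate = tokenrate_bps / 8.0     # token accumulation, bytes per second
              self.depth = bucketsize_bytes       # bucket size: bounds the burst
              self.tokens = bucketsize_bytes      # start with a full bucket
              self.last = time.monotonic()

          def _refill(self):
              now = time.monotonic()
              self.tokens = min(self.depth,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now

          def try_dequeue(self, pkt_len):
              """Dequeue is allowed while the token count is positive; the packet
              length is then subtracted, possibly driving the count negative
              (a deficit that must be repaid before the next dequeue)."""
              self._refill()
              if self.tokens <= 0:
                  return False        # wait for a transmit interrupt or timer event
              self.tokens -= pkt_len
              return True

      # Timer-driven case: with a kernel clock of HZ ticks per second, roughly
      # tokenrate / (8 * HZ) bytes worth of tokens accumulate between ticks, so
      # the bucket must hold at least that much -- which is why a higher
      # tokenrate requires a larger bucketsize.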
EXAMPLES
To configure a token bucket regulator for the interface en0 with a 10Mbps token rate and an 8KB bucket size:

      # tbrconfig en0 10M 8K

To rate-limit the interface en0 up to 3Mbps:

      # tbrconfig en0 3M auto
SEE ALSO
altq.conf(5), altqd(8)

HISTORY
The tbrconfig command first appeared in the WIDE/KAME IPv6 protocol stack kit as part of the ALTQ tools.

BSD							     July 25, 2000							 BSD


TC(8)                                                                  Linux                                                                 TC(8)

NAME
tbf - Token Bucket Filter

SYNOPSIS
tc qdisc ... tbf rate rate burst bytes/cell ( latency ms | limit bytes ) [ mpu bytes [ peakrate rate mtu bytes/cell ] ]

burst is also known as buffer and maxburst. mtu is also known as minburst.

DESCRIPTION
The Token Bucket Filter is a classful queueing discipline available for traffic control with the tc(8) command.

TBF is a pure shaper and never schedules traffic. It is non-work-conserving and may throttle itself, even though packets are available, to ensure that the configured rate is not exceeded. It is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates.

Much higher rates are possible, but at the cost of losing the minimal burstiness. In that case, data is on average dequeued at the configured rate but may be sent much faster at millisecond timescales. Because of further queues living in network adaptors, this is often not a problem.
ALGORITHM
As the name implies, traffic is filtered based on the expenditure of tokens. Tokens roughly correspond to bytes, with the additional constraint that each packet consumes some tokens, no matter how small it is. This reflects the fact that even a zero-sized packet occupies the link for some time.

On creation, the TBF is stocked with tokens which correspond to the amount of traffic that can be burst in one go. Tokens arrive at a steady rate, until the bucket is full. If no tokens are available, packets are queued, up to a configured limit. The TBF now calculates the token deficit, and throttles until the first packet in the queue can be sent.

If it is not acceptable to burst out packets at maximum speed, a peakrate can be configured to limit the speed at which the bucket empties. This peakrate is implemented as a second TBF with a very small bucket, so that it doesn't burst.

To achieve perfection, the second bucket may contain only a single packet, which leads to the earlier mentioned 1mbit/s limit. This limit is caused by the fact that the kernel can only throttle for at minimum 1 'jiffy', which depends on HZ as 1/HZ. For perfect shaping, only a single packet can get sent per jiffy - for HZ=100, this means 100 packets of on average 1000 bytes each, which roughly corresponds to 1mbit/s.
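The throttling decision above can be pictured as two buckets that must both cover the head packet before it is released. The following is a hypothetical Python sketch of that idea, not the kernel implementation; the function and parameter names are invented for this illustration.

      # Rough model of the dual-bucket check described above.
      # Sizes in bytes, rates in bytes/second, times in seconds.

      def time_until_sendable(pkt_len, tokens, rate, peak_tokens=None, peakrate=None):
          """Return how long the qdisc must throttle before the packet at the
          head of the queue can be sent: long enough for the main bucket (and,
          if configured, the small peakrate bucket) to cover pkt_len."""
          wait = max(0.0, (pkt_len - tokens) / rate)
          if peakrate is not None:
              wait = max(wait, (pkt_len - peak_tokens) / peakrate)
          return wait

      # The jiffy limit: the kernel cannot throttle for less than 1/HZ, so with
      # HZ=100 at most one packet leaves per 10 ms in "perfect" mode --
      # 100 packets/s of about 1000 bytes each is roughly 1mbit/s.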
PARAMETERS
See tc(8) for how to specify the units of these values.

limit or latency
       Limit is the number of bytes that can be queued waiting for tokens to become available. You can also specify this the other way
       around by setting the latency parameter, which specifies the maximum amount of time a packet can sit in the TBF. The latter
       calculation takes into account the size of the bucket, the rate and possibly the peakrate (if set). These two parameters are
       mutually exclusive.

burst  Also known as buffer or maxburst. Size of the bucket, in bytes. This is the maximum amount of bytes that tokens can be available
       for instantaneously. In general, larger shaping rates require a larger buffer. For 10mbit/s on Intel, you need at least 10kbyte
       buffer if you want to reach your configured rate!

       If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket. The
       minimum buffer size can be calculated by dividing the rate by HZ.

       Token usage calculations are performed using a table which by default has a resolution of 8 packets. This resolution can be
       changed by specifying the cell size with the burst. For example, to specify a 6000 byte buffer with a 16 byte cell size, set a
       burst of 6000/16. You will probably never have to set this. Must be an integral power of 2.

mpu    A zero-sized packet does not use zero bandwidth. For ethernet, no packet uses less than 64 bytes. The Minimum Packet Unit
       determines the minimal token usage (specified in bytes) for a packet. Defaults to zero.

rate   The speed knob. See remarks above about limits! See tc(8) for units.

Furthermore, if a peakrate is desired, the following parameters are available:

peakrate
       Maximum depletion rate of the bucket. The peakrate does not need to be set, it is only necessary if perfect millisecond timescale
       shaping is required.

mtu/minburst
       Specifies the size of the peakrate bucket. For perfect accuracy, should be set to the MTU of the interface. If a peakrate is
       needed, but some burstiness is acceptable, this size can be raised. A 3000 byte minburst allows around 3mbit/s of peakrate, given
       1000 byte packets.

       Like the regular burstsize you can also specify a cell size.

EXAMPLE & USAGE
To attach a TBF with a sustained maximum rate of 0.5mbit/s, a peakrate of 1.0mbit/s, a 5kilobyte buffer, with a pre-bucket queue size limit calculated so the TBF causes at most 70ms of latency, with perfect peakrate behaviour, issue:

      # tc qdisc add dev eth0 handle 10: root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540

To attach an inner qdisc, for example sfq, issue:

      # tc qdisc add dev eth0 parent 10:1 handle 100: sfq

Without an inner qdisc, the TBF queue acts as bfifo. If the inner qdisc is changed, the limit/latency is no longer effective.
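To make the latency/limit relationship concrete for the example above, here is a back-of-the-envelope calculation in Python. The formula limit ~ rate*latency + bucket, capped by the corresponding peakrate figure, is an assumption inferred from the parameter descriptions above, not taken from the tc source, and tc's exact unit handling and rounding may differ.

      # Approximate queue limit in bytes implied by 'latency 70ms' in the example.
      rate     = 0.5e6 / 8        # 0.5mbit/s  -> 62500 bytes/s
      peakrate = 1.0e6 / 8        # 1.0mbit/s  -> 125000 bytes/s
      burst    = 5 * 1024         # 5kb main bucket
      minburst = 1540             # peakrate bucket (mtu)
      latency  = 0.070            # 70 ms

      limit      = rate * latency + burst          # ~9495 bytes
      limit_peak = peakrate * latency + minburst   # ~10290 bytes
      print(min(limit, limit_peak))                # roughly 9.5 kbytes may be queued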
SEE ALSO
tc(8)

AUTHOR
Alexey N. Kuznetsov, <kuznet@ms2.inr.ac.ru>. This manpage maintained by bert hubert <ahu@ds9a.nl>

iproute2						    13 December 2001							   TC(8)