Looking for a Low-Latency TCP Congestion Avoidance Algorithm


 
# 1  
Old 09-02-2011

I was looking at different types of TCP congestion avoidance algorithms and realized that they are almost all tailored toward high-speed networks with high latency (so-called "long fat networks", or LFNs).

Does anybody know of a congestion avoidance algorithm designed for low-latency networks?
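For context: on Linux the congestion control algorithm is pluggable, both system-wide (net.ipv4.tcp_congestion_control) and per socket via the TCP_CONGESTION socket option (Linux 2.6.13+). Delay-based algorithms such as TCP Vegas, and more recently DCTCP for low-latency datacenter networks, aim at exactly this case: keeping queueing delay small rather than filling a long fat pipe. A minimal sketch for inspecting what a socket is using (the helper name is mine; TCP_CONGESTION itself is a real, Linux-only option):

```python
import socket

def socket_congestion_control(sock=None):
    """Return the congestion control algorithm attached to a TCP socket,
    or None on platforms that lack TCP_CONGESTION (it is Linux-only)."""
    if not hasattr(socket, "TCP_CONGESTION"):
        return None
    own = sock is None
    if own:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # The kernel returns a NUL-padded algorithm name, e.g. b"cubic\x00...".
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()
    finally:
        if own:
            sock.close()

print(socket_congestion_control())  # e.g. 'cubic' on a stock Linux box
```

Switching a single connection is the same call in reverse, e.g. `sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")`, assuming the tcp_vegas module is loaded and listed in net.ipv4.tcp_allowed_congestion_control.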
RED(8)								       Linux								    RED(8)

NAME
       red - Random Early Detection

SYNOPSIS
       tc qdisc ... red limit bytes [ min bytes ] [ max bytes ] avpkt bytes [ burst packets ] [ ecn ] [ harddrop ] [ bandwidth rate ] [ probability chance ] [ adaptive ]

DESCRIPTION
       Random Early Detection is a classless qdisc which manages its queue size smartly. Regular queues simply drop packets from the tail when they are full, which may not be the optimal behaviour. RED also performs tail drop, but does so in a more gradual way.

       Once the queue hits a certain average length, packets enqueued have a configurable chance of being marked (which may mean dropped). This chance increases linearly up to a point called the max average queue length, although the queue might get bigger.

       This has a host of benefits over simple taildrop, while not being processor intensive. It prevents synchronous retransmits after a burst in traffic, which cause further retransmits, etc. The goal is to have a small queue size, which is good for interactivity, while not disturbing TCP/IP traffic with too many sudden drops after a burst of traffic.

       Depending on whether ECN is configured, marking either means dropping or purely marking a packet as overlimit.

ALGORITHM
       The average queue size is used for determining the marking probability. This is calculated using an Exponential Weighted Moving Average, which can be more or less sensitive to bursts.

       When the average queue size is below min bytes, no packet will ever be marked. When it exceeds min, the probability of doing so climbs linearly up to probability, until the average queue size hits max bytes. Because probability is normally not set to 100%, the queue size might conceivably rise above max bytes, so the limit parameter is provided to set a hard maximum for the size of the queue.

PARAMETERS
       min    Average queue size at which marking becomes a possibility. Defaults to max/3.

       max    At this average queue size, the marking probability is maximal. Should be at least twice min to prevent synchronous retransmits, higher for low min. Defaults to limit/4.

       probability
              Maximum probability for marking, specified as a floating point number from 0.0 to 1.0. Suggested values are 0.01 or 0.02 (1 or 2%, respectively). Default: 0.02.

       limit  Hard limit on the real (not average) queue size in bytes. Further packets are dropped. Should be set higher than max+burst. It is advised to set this a few times higher than max.

       burst  Used for determining how fast the average queue size is influenced by the real queue size. Larger values make the calculation more sluggish, allowing longer bursts of traffic before marking starts. Real life experiments support the following guideline: (min+min+max)/(3*avpkt).

       avpkt  Specified in bytes. Used with burst to determine the time constant for average queue size calculations. 1000 is a good value.

       bandwidth
              This rate is used for calculating the average queue size after some idle time. Should be set to the bandwidth of your interface. Does not mean that RED will shape for you! Optional. Default: 10Mbit.

       ecn    As mentioned before, RED can either 'mark' or 'drop'. Explicit Congestion Notification allows RED to notify remote hosts that their rate exceeds the amount of bandwidth available. Non-ECN capable hosts can only be notified by dropping a packet. If this parameter is specified, packets which indicate that their hosts honor ECN will only be marked and not dropped, unless the queue size hits limit bytes. Recommended.

       harddrop
              If the average flow queue size is above max bytes, this parameter forces a drop instead of ecn marking.
       adaptive
              (Added in linux-3.3) Sets RED in adaptive mode, as described in http://icir.org/floyd/papers/adaptiveRed.pdf. The goal of Adaptive RED is to make probability a dynamic value between 1% and 50%, so as to reach the target average queue size: (max - min) / 2.

EXAMPLE
       # tc qdisc add dev eth0 parent 1:1 handle 10: red limit 400000 min 30000 max 90000 avpkt 1000 burst 55 ecn adaptive bandwidth 10Mbit

SEE ALSO
       tc(8), tc-choke(8)

SOURCES
       o Floyd, S., and Jacobson, V., Random Early Detection gateways for Congestion Avoidance. http://www.aciri.org/floyd/papers/red/red.html
       o Some changes to the algorithm by Alexey N. Kuznetsov.
       o Adaptive RED: http://icir.org/floyd/papers/adaptiveRed.pdf

AUTHORS
       Alexey N. Kuznetsov, <kuznet@ms2.inr.ac.ru>, Alexey Makarenko <makar@phoenix.kharkov.ua>, J Hadi Salim <hadi@nortelnetworks.com>, Eric Dumazet <eric.dumazet@gmail.com>.

       This manpage maintained by bert hubert <ahu@ds9a.nl>

iproute2                                                  13 December 2001                                                   RED(8)
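As a sanity check on the formulas in the ALGORITHM and PARAMETERS sections of that man page, here is a toy model in plain Python. It is an illustration of the stated formulas only, not the kernel implementation; the function names are mine:

```python
# Toy model of the RED calculations from the man page -- not kernel code.

def red_mark_probability(avg, min_th, max_th, probability=0.02):
    """Marking probability as a function of the EWMA queue size (bytes):
    zero below min, rising linearly to `probability` at max."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        # Above max the man page's linear ramp has topped out; the real
        # qdisc additionally enforces the hard `limit` (and `harddrop`).
        return probability
    return probability * (avg - min_th) / (max_th - min_th)

def ewma(avg, qlen, weight):
    """One step of the Exponentially Weighted Moving Average of queue size."""
    return (1.0 - weight) * avg + weight * qlen

def burst_guideline(min_th, max_th, avpkt):
    """The man page's rule of thumb: burst = (min+min+max)/(3*avpkt)."""
    return (min_th + min_th + max_th) / (3 * avpkt)

# Numbers from the EXAMPLE section: min 30000, max 90000, avpkt 1000.
print(red_mark_probability(60000, 30000, 90000))  # halfway between min and max: 0.01
print(burst_guideline(30000, 90000, 1000))        # 50.0 (the example uses 55, a bit above)
```

Note how small the default probabilities are: even at the max threshold only 2% of enqueued packets are marked, which is enough to make well-behaved TCP senders back off before the hard limit is reached.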