The
basic idea behind RED queue management is to detect incipient congestion early
and to convey congestion notification to the end-hosts, allowing them to reduce
their transmission rates before queues in the network overflow and packets are
dropped.
To do this, RED maintains an exponentially-weighted moving average (EWMA) of the queue length, which it uses to detect congestion. When the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with an explicit congestion notification (ECN) bit [2]. When the average queue length exceeds a maximum threshold (maxth), all packets are dropped or marked.
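To make the mechanism concrete, here is a minimal sketch of that decision in Python. The parameter names and values are my own illustrative choices, not taken from the text, and the count-based probability scaling used by the full RED algorithm is omitted:

```python
import random

# Illustrative values only -- the post does not give concrete parameters.
W_Q    = 0.002   # EWMA weight
MIN_TH = 5       # minimum threshold, in packets
MAX_TH = 15      # maximum threshold, in packets
MAX_P  = 0.1     # marking/dropping probability at MAX_TH

avg_q = 0.0      # exponentially-weighted moving average of the queue length

def red_mark(current_queue_len):
    """Return True if the arriving packet should be dropped or ECN-marked."""
    global avg_q
    # Update the EWMA of the queue length.
    avg_q = (1 - W_Q) * avg_q + W_Q * current_queue_len

    if avg_q < MIN_TH:
        return False                 # no congestion: enqueue as usual
    if avg_q >= MAX_TH:
        return True                  # heavy congestion: mark/drop everything
    # Between the thresholds, mark/drop with a probability that grows
    # linearly from 0 at MIN_TH to MAX_P at MAX_TH.
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

A real implementation also has to account for how the average would have decayed while the queue sat idle, but this captures the EWMA-plus-thresholds idea described above.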
We choose to study and evaluate RED-family AQM for several reasons. First, while recently proposed AQM mechanisms such as BLUE [5] and REM do not strictly use the average queue length to detect congestion, they have performance goals similar to those of RED-family AQMs. Second, RED congestion control mechanisms are relatively well understood and are commonly used as a benchmark when evaluating other AQM mechanisms. Third, with a general modification, it is easy to configure RED-family AQMs to create various test scenarios that reveal interesting AQM congestion control issues. Lastly, since RED is already implemented in some commercial routers, our optimized RED can be used to tune these routers, realizing some of the potential benefits of ECN with few changes to the network infrastructure.
While RED is certainly an improvement over traditional drop-tail queues, it has several shortcomings. One of the fundamental problems is that RED relies on queue length as an estimator of congestion; because of this, it has an inherent difficulty in determining the severity of congestion. As a result, RED requires a wide range of parameter settings to operate correctly under different congestion scenarios. RED can achieve an ideal operating point, but only when it has a sufficient amount of buffer space and is correctly parameterized.
RED [3] was designed to (1) minimize packet loss and queuing delay, (2) avoid global synchronization of sources, (3) maintain high link utilization, and (4) remove biases against bursty sources.
RED represents a class of queue management mechanisms that do not keep per-flow state. That is, they place the data from all flows into a single queue and focus on its overall performance. This stateless design is the origin of the problems caused by non-responsive flows.
Our modified version of RED places great importance on queue stability, since the RED mechanism depends heavily on the stability of the queue length at the buffer.
Modifications:
The original RED model uses a linear packet-marking probability function. This model can be enhanced by replacing the linear marking probability with a quadratic function. The quadratic model gives a variant on either side of RED's linear curve: the curve above RED's (the concave one) is more aggressive and hence more stable, while the curve below RED's (the convex one) offers weaker stabilization.
The convex quadratic model below RED offers service roughly equal to RED across different scenarios, performing better in some and worse in others.
But the concave quadratic model appears to be a winner over RED, being more stable and having reduced packet loss.
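As an illustration of the two variants (my own sketch; the exact quadratic forms used in the modification are not specified here, so these are assumed shapes), the three marking-probability curves can be written as functions of the normalized average queue length x in [0, 1], where x = 0 corresponds to minth and x = 1 to maxth:

```python
def p_linear(x, max_p=0.1):
    """Original RED: marking probability grows linearly between the thresholds."""
    return max_p * x

def p_concave(x, max_p=0.1):
    """Concave quadratic, lying above RED's line: more aggressive marking
    for the same average queue length."""
    return max_p * (1.0 - (1.0 - x) ** 2)

def p_convex(x, max_p=0.1):
    """Convex quadratic, lying below RED's line: gentler marking until the
    average queue length approaches maxth."""
    return max_p * x ** 2

# At the midpoint between the thresholds (x = 0.5), with max_p = 0.1, the
# concave curve marks at 0.075, RED at 0.05, and the convex curve at 0.025.
```

Whatever the exact quadratic chosen, the point is only that a concave curve sits above the linear one (more aggressive marking, hence the greater stability and lower loss noted above), while a convex curve sits below it.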
More on this topic in further posts.
Viva La Raza
Siddharth