
WAN Optimization Simple Rate Limiting

Rate limiting in the context of networks can be viewed from many different angles. In general, however, rate limiting is any technique or process used to control the rate at which network traffic is sent. A limit is set; traffic at or below the limit is transmitted, while traffic above it is either dropped or delayed.
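The drop-or-delay decision above is commonly implemented with a token bucket. The sketch below is illustrative only (the class and parameter names are not from any particular product): tokens accumulate at the configured rate up to a burst cap, and a packet is transmitted only if enough tokens are available.

```python
import time

class TokenBucketLimiter:
    """Illustrative rate limiter: traffic at or below the configured
    rate is transmitted; traffic above it is rejected (policing)."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes   # maximum burst allowed
        self.tokens = burst_bytes     # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True    # conforming: transmit
        return False       # exceeds the limit: drop (or delay, if shaping)
```

With a 1,500-byte burst, the first full-size packet passes and an immediate second one is rejected until the bucket refills.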

Rate Limiting Methods

Rate limiting for congestion avoidance should have two major components:

  • Routers having the ability to re-order or drop packets under overload conditions
  • End points having flow control mechanisms that respond to congestion and take appropriate action
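The second component, endpoint flow control, can be sketched with the additive-increase/multiplicative-decrease (AIMD) response that TCP uses: grow the sending window steadily while the network is healthy, and cut it sharply when congestion (loss) is detected. The class name and constants below are illustrative, not a real TCP implementation.

```python
class AimdSender:
    """Illustrative endpoint flow control: additive increase on
    success, multiplicative decrease on congestion (AIMD)."""

    def __init__(self, initial_window=1.0):
        self.cwnd = initial_window   # congestion window, in segments

    def on_ack(self):
        # No congestion observed: probe for more bandwidth, one segment per RTT.
        self.cwnd += 1.0

    def on_loss(self):
        # Congestion signal: back off sharply, but never below one segment.
        self.cwnd = max(1.0, self.cwnd / 2)
```

Three acknowledged rounds grow the window from 1 to 4 segments; a single loss halves it back to 2.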

Traffic shaping may also be considered a form of rate limiting. Traffic shaping works on the volume of traffic rather than on the rate of traffic. When traffic shaping is used as a rate limiting method, it is generally used to guarantee performance, improve latency, and/or increase the bandwidth available for some types of traffic at the expense of others. Traffic that meets certain criteria is transmitted, while traffic that does not is delayed.
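The key difference from policing is that a shaper delays non-conforming traffic rather than dropping it. A minimal sketch, with illustrative names: each packet is assigned a departure time, and the schedule guarantees the long-run output rate never exceeds the contract.

```python
class Shaper:
    """Illustrative traffic shaper: non-conforming packets are held
    in a queue and released on a schedule, not dropped."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.next_free = 0.0   # earliest time the output link is free

    def schedule(self, arrival_time, packet_bytes):
        # A packet departs when it arrives or when the link frees up,
        # whichever is later; its transmission then occupies the link.
        depart = max(arrival_time, self.next_free)
        self.next_free = depart + packet_bytes / self.rate
        return depart   # the packet is delayed until this time
```

At 1,000 bytes/s, three 500-byte packets arriving together depart at 0.0 s, 0.5 s, and 1.0 s: the excess is smoothed out over time instead of being discarded.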

If significant congestion occurs in a network, latency can increase substantially. In such cases, traffic shaping can be used to keep latency within acceptable limits. Traffic shaping may control the volume of traffic sent to a network in a specific time period (bandwidth throttling), limit the rate at which traffic can be sent (rate limiting), or apply more complex criteria such as the GCRA (Generic Cell Rate Algorithm). GCRA is used in ATM networks: it measures the timing of cells on Virtual Channels (VCs) or Virtual Paths (VPs) against contracted limits on jitter and bandwidth.
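The GCRA is usually described in its virtual-scheduling form: each cell is checked against a theoretical arrival time (TAT), with an emission interval T (the inverse of the contracted rate) and a jitter tolerance tau. The sketch below follows that standard description; it is not a particular vendor's implementation.

```python
class Gcra:
    """Virtual-scheduling form of the Generic Cell Rate Algorithm.
    T is the emission interval (1/rate); tau is the jitter tolerance."""

    def __init__(self, T, tau):
        self.T = T
        self.tau = tau
        self.tat = 0.0   # theoretical arrival time of the next cell

    def conforms(self, t):
        if t < self.tat - self.tau:
            return False   # cell arrived too early: non-conforming
        # Conforming: push the theoretical arrival time forward by T.
        self.tat = max(t, self.tat) + self.T
        return True
```

With T = 1.0 and tau = 0.5, a cell at t = 0.0 conforms, a cell at t = 0.4 is too early and fails, and a cell at t = 0.5 is within the tolerance and passes.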

Rate Limiting Effects

When the policing form of rate limiting has been applied to traffic, the recipient will observe packet loss throughout the transmission whenever the traffic exceeds the contract.

In the case of protocols such as TCP, which have congestion-correcting mechanisms, dropped packets will not be acknowledged by the receiver and will therefore be resent, resulting in greater traffic. If traffic on the receiving end has been subjected to rate limiting by one technique or another, it will generally comply with the contract, although some jitter may be introduced by elements downstream of the policing point.

Sources with rate limiting congestion control, such as TCP, quickly adjust to the contract by converging to a rate below the contract limit.
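This convergence can be shown with a toy round-based model (purely illustrative; the function and parameter names are assumptions): a sender probes upward each round and halves its rate whenever the policer's contract is exceeded, so the send rate oscillates just below the contract limit.

```python
def simulate(contract_rate=10, rounds=50):
    """Toy model of an AIMD source behind a policer: the rate climbs
    until it exceeds contract_rate, then backs off, and thereafter
    oscillates in a band at or just below the contract."""
    rate = 1
    history = []
    for _ in range(rounds):
        if rate > contract_rate:
            rate = max(1, rate // 2)   # loss detected: back off
        else:
            rate += 1                  # no loss: probe for more bandwidth
        history.append(rate)
    return history
```

With a contract of 10, the rate briefly touches 11, is halved to 5, and then repeats that sawtooth, averaging below the contract limit.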


Data sent across networks continues to grow at a near-exponential rate and often has the capacity to overwhelm networks. Network infrastructure capacity increases at a much slower pace, so a mismatch is inevitable at some point in the future, and is already occurring where a network's physical architecture is old or obsolete. In the effort to squeeze maximum performance out of existing networks, network administrators have to resort to rate limiting techniques to avoid total and catastrophic collapse. While rate limiting in itself may sound ominous, in reality, if administered correctly, it can become a powerful tool and a friend to administrators. Rate limiting is here to stay, and rate limiting techniques will continue to evolve.