# Flow Control and Congestion Control
Flow control and congestion control solve two different problems in reliable data transfer. Flow control keeps a fast sender from overwhelming a slow receiver. Congestion control keeps all senders from collectively overwhelming the network itself. Both adjust the sender's transmission rate, but they respond to different signals and protect different resources.
## Flow Control
### Role of Flow Control
The receiver in a TCP connection has a finite buffer. If the sender blasts data faster than the receiver's application can read from that buffer, the buffer fills up, and incoming segments get dropped. Flow control exists to prevent this.
The receiver communicates how much buffer space it has available, and the sender limits itself accordingly. This is a purely end-to-end mechanism: only the sender and receiver are involved, and the network doesn't play a role.
- Prevents buffer overflow and data loss at the receiver
- Implemented at the transport layer (TCP's receive window field in the header)
- The sender continuously adjusts its rate based on the receiver's advertised capacity

### Techniques for Flow Control
#### Sliding Window Mechanism
TCP uses this approach. The receiver advertises a receive window (rwnd) in every ACK, telling the sender how many bytes of buffer space remain.
- The sender tracks a window of unacknowledged data it's allowed to have in flight
- Window size is bounded by rwnd: the sender cannot have more unacknowledged bytes outstanding than the receiver's advertised window
- As ACKs arrive, the window "slides" forward, allowing the sender to transmit new data
- If the receiver advertises rwnd = 0, the sender stops transmitting (except for periodic probe segments to detect when the window reopens)
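The bookkeeping above can be sketched in a few lines. This is a minimal, hypothetical model of the sender's side only (class and field names are illustrative, not from any real TCP stack):

```python
# Sketch of sender-side sliding-window accounting (hypothetical names).
# The sender may only have (next_seq - send_base) unacknowledged bytes
# in flight, bounded by the receiver's last advertised window (rwnd).

class SlidingWindowSender:
    def __init__(self):
        self.send_base = 0   # oldest unacknowledged byte
        self.next_seq = 0    # next byte to be sent
        self.rwnd = 0        # receiver's advertised window, from the last ACK

    def can_send(self, nbytes):
        # New data is allowed only if it fits inside the advertised window.
        in_flight = self.next_seq - self.send_base
        return in_flight + nbytes <= self.rwnd

    def send(self, nbytes):
        if self.can_send(nbytes):
            self.next_seq += nbytes
            return True
        return False  # window full (or rwnd == 0): wait, possibly probe

    def on_ack(self, ack_no, advertised_rwnd):
        # The window "slides": ack_no acknowledges every byte before it.
        self.send_base = max(self.send_base, ack_no)
        self.rwnd = advertised_rwnd
```

Note how an arriving ACK does two jobs at once: it slides `send_base` forward and refreshes `rwnd`, which is exactly why sliding-window flow control is said to be tied to ACKs.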
#### Credit-Based Mechanism
This is an alternative model used in some protocols (not standard TCP, but worth understanding for comparison).
- The receiver grants explicit credits representing the number of packets (or bytes) the sender may transmit
- The sender decrements a credit counter with each packet sent
- When credits hit zero, the sender pauses until the receiver issues more credits
- The receiver sends credit updates based on how much buffer space it has freed up
The key difference: sliding window ties flow control to ACKs, while credit-based systems use separate credit messages.
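A credit-based sender reduces to a simple counter. The sketch below uses invented names and is deliberately generic (this is not TCP behavior):

```python
# Minimal sketch of a credit-based sender (hypothetical names; not TCP).
# Credits are granted out-of-band by the receiver and spent per packet.

class CreditSender:
    def __init__(self, initial_credits=0):
        self.credits = initial_credits

    def try_send(self, packet):
        if self.credits == 0:
            return False          # paused until the receiver grants more
        self.credits -= 1
        # ... transmit packet here ...
        return True

    def on_credit_update(self, granted):
        # Receiver freed buffer space and issued fresh credits.
        self.credits += granted
```

Because credit updates are separate messages rather than piggybacked on ACKs, the sender can stall even while acknowledgments are flowing, which is the trade-off the comparison above points at.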
## Congestion Control

### Concept of Congestion Control
Congestion happens when the aggregate traffic from all senders exceeds what the network (routers, links) can handle. Routers' buffers overflow, packets get dropped, delays spike, and throughput collapses. Worse, retransmissions from senders reacting to those losses can pile on even more traffic.
Congestion control adjusts each sender's rate to keep the network operating near capacity without tipping into overload. The goals are:
- Efficiency: use available bandwidth without wasting it
- Fairness: competing flows should each get a reasonable share
- Stability: the network shouldn't oscillate between empty and overloaded
Senders detect congestion through two types of feedback:
- Implicit: packet loss (timeout or duplicate ACKs) signals that a router's buffer overflowed
- Explicit: routers mark packets with congestion indicators (e.g., ECN bits), and the receiver echoes this back to the sender
### TCP Congestion Control Mechanisms
TCP maintains a congestion window (cwnd) that limits how much unacknowledged data can be in flight. The actual sending window is min(cwnd, rwnd), combining both congestion control and flow control.
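The combined limit can be expressed directly; a one-line sketch (ignoring byte vs. MSS units):

```python
# The sender's usable window is the smaller of the two constraints.
def effective_window(cwnd, rwnd):
    return min(cwnd, rwnd)

# A slow receiver caps a fast network path, and vice versa:
assert effective_window(20, 8) == 8    # flow-control limited
assert effective_window(5, 64) == 5    # congestion-control limited
```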
#### Slow Start
- When a connection first opens (or after a timeout), cwnd starts at a small value, typically 1 MSS (Maximum Segment Size).
- For every ACK received, cwnd increases by 1 MSS. Since each RTT roughly doubles the number of outstanding segments, growth is exponential.
- This continues until cwnd reaches the slow start threshold (ssthresh), or until a loss event occurs.
- If a timeout occurs, ssthresh is set to cwnd/2, and cwnd resets to 1 MSS (back to slow start).
The name is a bit misleading: "slow start" actually ramps up quickly. It's "slow" only compared to immediately sending at full blast.
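The doubling is easy to see in a toy trace. This sketch assumes one ACK per segment and no losses, and counts cwnd in MSS units for simplicity:

```python
# Rough illustration of slow start's per-RTT doubling (cwnd in MSS units,
# one ACK per segment, no loss). Not a real TCP implementation.

def slow_start_rtts(cwnd=1, ssthresh=64):
    history = [cwnd]
    while cwnd < ssthresh:
        # Each of the cwnd segments sent this RTT gets ACKed, and each
        # ACK adds 1 MSS, so cwnd doubles every round-trip.
        cwnd += cwnd
        history.append(min(cwnd, ssthresh))
    return history

print(slow_start_rtts())  # [1, 2, 4, 8, 16, 32, 64]
```

Six round-trips take cwnd from 1 MSS to 64 MSS, which is why "slow" start is anything but.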
#### Congestion Avoidance
- Once cwnd ≥ ssthresh, TCP switches from exponential to linear growth.
- For each RTT (roughly), cwnd increases by 1 MSS. This is the additive increase phase.
- The sender cautiously probes for more bandwidth, adding capacity slowly to avoid triggering congestion.
- If a loss is detected via timeout, ssthresh is set to cwnd/2 and cwnd resets to 1 MSS.
This additive-increase, multiplicative-decrease (AIMD) pattern is what gives TCP its characteristic "sawtooth" throughput graph.
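The sawtooth falls out of a few lines of simulation. The loss points below are arbitrary, chosen only to make the shape visible; cwnd is in MSS units:

```python
# Toy AIMD trace: +1 MSS per RTT (additive increase), halve on a
# simulated loss (multiplicative decrease). Loss RTTs are arbitrary.

def aimd_trace(rtts=20, loss_at=(8, 15), cwnd=4.0):
    trace = []
    for rtt in range(rtts):
        if rtt in loss_at:
            cwnd = cwnd / 2        # multiplicative decrease
        else:
            cwnd += 1.0            # additive increase: +1 MSS per RTT
        trace.append(cwnd)
    return trace
```

Plotting the returned list gives the classic ramps-and-cliffs pattern: linear climbs punctuated by halvings at each loss.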
#### Fast Recovery
- Triggered when the sender receives 3 duplicate ACKs (indicating an isolated packet loss, not a full timeout).
- The sender sets ssthresh = cwnd/2 and sets cwnd = ssthresh + 3 MSS (accounting for the 3 duplicate ACKs).
- It performs a fast retransmit of the lost segment immediately, without waiting for a timeout.
- For each additional duplicate ACK, cwnd increases by 1 MSS (inflating the window to keep data flowing).
- When a new (non-duplicate) ACK arrives, cwnd drops back to ssthresh, and the sender enters congestion avoidance.
The advantage: fast recovery avoids resetting all the way to 1 MSS. Since duplicate ACKs mean some segments are still getting through, the network isn't completely congested, so a full restart would be too aggressive.
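The three adjustments can be sketched as small state-transition functions. This is a simplification in the spirit of RFC 5681 (cwnd in MSS units, function names are illustrative):

```python
# Simplified fast-retransmit / fast-recovery arithmetic (MSS units).
# Illustrative only; a real stack tracks more state than this.

def on_triple_dup_ack(cwnd):
    ssthresh = max(cwnd / 2, 2)   # halve on the loss signal
    cwnd = ssthresh + 3           # inflate by the 3 duplicate ACKs
    return cwnd, ssthresh

def on_dup_ack_during_recovery(cwnd):
    return cwnd + 1               # each dup ACK means a segment left the network

def on_new_ack(ssthresh):
    return ssthresh               # deflate; resume congestion avoidance
```

For example, a sender at cwnd = 16 MSS that sees 3 duplicate ACKs drops to ssthresh = 8 and cwnd = 11, rather than restarting from 1 MSS as a timeout would force.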
## Flow Control vs. Congestion Control
| | Flow Control | Congestion Control |
|---|---|---|
| Protects | The receiver's buffer | The network (routers, links) |
| Signal source | Receiver (advertised window) | Network (packet loss, ECN) |
| Scope | End-to-end (sender ↔ receiver) | Sender ↔ network (involves intermediate nodes) |
| TCP mechanism | Receive window (rwnd) | Congestion window (cwnd) |
| Layer | Transport layer only | Transport + network layer interaction |
Both mechanisms constrain the sender simultaneously. The effective window at any moment is min(cwnd, rwnd). A sender bottlenecked by a slow receiver is flow-control limited. A sender bottlenecked by a congested network is congestion-control limited. In practice, one of these two usually dominates at any given time.