When you download a file, TCP adjusts its sending rate to avoid overwhelming the network. The algorithm that controls this is called congestion control. For decades, we used loss-based algorithms. Google's BBR changed the game.
The Problem with Loss-Based Control
CUBIC (the Linux default) and Reno work by increasing the send rate until packets are lost, then backing off. This approach has several problems:
- Bufferbloat: Modern networks have large buffers. By the time packets drop, the buffer is full, causing high latency.
- Underutilization: On lossy links (Wi-Fi, cellular), random packet loss triggers unnecessary slowdowns.
- Slow recovery: After a loss, CUBIC shrinks its congestion window (by roughly 30%; Reno halves it) and only gradually grows it back.
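The underutilization problem above can be seen in a toy simulation. This is a hypothetical sketch of Reno-style AIMD (additive increase, multiplicative decrease), not actual kernel code; the loss probabilities and round counts are illustrative assumptions.

```python
import random

def aimd_avg_window(loss_prob, rounds=10000, seed=1):
    """Simulate a Reno-style AIMD sender: +1 segment per RTT,
    halve the window on any loss, even a random (non-congestion) one."""
    random.seed(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        if random.random() < loss_prob:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase per RTT
        total += cwnd
    return total / rounds

# Even modest random loss (as on Wi-Fi or cellular) caps the average
# window, regardless of how much capacity the link actually has.
print(aimd_avg_window(0.0001))  # near-zero loss: large average window
print(aimd_avg_window(0.01))    # 1% random loss: far smaller window
```

The sender cannot tell random radio loss from congestion loss, so it backs off either way; that is exactly the blind spot BBR is designed to remove.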
BBR: Measure, Do Not Guess
BBR (Bottleneck Bandwidth and Round-trip propagation time) takes a different approach. Instead of waiting for loss, it actively measures:
- Bottleneck Bandwidth: Maximum delivery rate achieved
- RTprop: Minimum round-trip time (when buffers are empty)
BBR targets a send rate that fills the pipe without filling buffers: BDP = Bandwidth × RTprop
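As a worked example of that formula, the bandwidth-delay product is just the bottleneck rate times the minimum RTT. The figures below (100 Mbit/s, 40 ms) are hypothetical path parameters, not from the article:

```python
def bdp_bytes(bottleneck_bw_bps, rtprop_s):
    """Bandwidth-delay product: the amount of data in flight that fills
    the pipe without queuing (BDP = BtlBw * RTprop)."""
    return bottleneck_bw_bps / 8 * rtprop_s

# Hypothetical path: 100 Mbit/s bottleneck, 40 ms minimum RTT.
bdp = bdp_bytes(100e6, 0.040)
print(f"BDP = {bdp / 1024:.0f} KiB")  # about 488 KiB in flight
```

Keeping in-flight data near this value is what lets BBR run at full throughput while leaving the bottleneck buffer nearly empty.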
Real-World Results
Google deployed BBR on YouTube servers:
- 4% higher throughput globally
- 14% in developing countries with lossy last-mile connections
- 33% lower latency by avoiding buffer bloat
BBRv2: Addressing Fairness
BBRv1 was criticized for being too aggressive toward CUBIC flows, starving them on shared links. BBRv2 incorporates loss (and ECN) signals to share capacity more fairly with loss-based algorithms.
Enabling BBR
On Linux servers: sysctl -w net.ipv4.tcp_congestion_control=bbr
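A fuller sketch of the setup, assuming a kernel of 4.9 or later (when BBR was mainlined); the file path `99-bbr.conf` is an arbitrary choice:

```shell
# Check which congestion-control algorithms the running kernel offers;
# bbr should appear in the list (load the tcp_bbr module if not).
sysctl net.ipv4.tcp_available_congestion_control

# Enable BBR for new connections (runtime only, lost on reboot)
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist across reboots. The fq pacing qdisc is recommended for
# BBRv1 on kernels before 4.13, which lack built-in TCP pacing.
cat <<'EOF' >> /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
EOF
sysctl --system
```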
Cloudflare, Google, and most major CDNs now use BBR on their edge servers.