Google’s Network Congestion Algorithm Isn’t Fair, Researchers Say
Credit to Author: Karl Bode | Date: Thu, 31 Oct 2019 13:51:36 +0000
Researchers at Carnegie Mellon University say a Google algorithm designed to help reduce internet congestion is mathematically unfair, resulting in network management that may disadvantage some traffic relative to others.
Several years back, Google began work on a new open source congestion control algorithm (CCA) designed to improve the way the internet functions. The result was BBR, short for Bottleneck Bandwidth and RTT (Round-Trip Time). The goal of the project: to improve how network packets travel between servers to mitigate congestion on the internet.
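For a rough sense of the approach (this is a conceptual sketch, not Google's implementation, and the class and method names are invented for illustration): BBR-style algorithms continually estimate the bottleneck bandwidth and the minimum round-trip time, then use their product, the bandwidth-delay product, to decide how much data to keep in flight instead of waiting for packet loss.

```python
# Conceptual sketch of the BBR idea, not Google's code: track the recent
# maximum delivery rate ("bottleneck bandwidth") and the recent minimum
# round-trip time, and pace sending around their product (the
# bandwidth-delay product).

from collections import deque


class BbrLikeEstimator:
    def __init__(self, window=10):
        self.rate_samples = deque(maxlen=window)  # delivery-rate samples (bits/sec)
        self.rtt_samples = deque(maxlen=window)   # round-trip-time samples (sec)

    def on_ack(self, delivery_rate_bps, rtt_s):
        """Record one ACK's measured delivery rate and RTT."""
        self.rate_samples.append(delivery_rate_bps)
        self.rtt_samples.append(rtt_s)

    def bottleneck_bw(self):
        return max(self.rate_samples) if self.rate_samples else 0.0

    def min_rtt(self):
        return min(self.rtt_samples) if self.rtt_samples else float("inf")

    def in_flight_budget_bytes(self):
        """Bandwidth-delay product: roughly how much data can be in flight
        without building a queue at the bottleneck."""
        return self.bottleneck_bw() * self.min_rtt() / 8  # bits -> bytes


est = BbrLikeEstimator()
est.on_ack(delivery_rate_bps=40_000_000, rtt_s=0.030)  # 40 Mbps, 30 ms
est.on_ack(delivery_rate_bps=48_000_000, rtt_s=0.025)  # 48 Mbps, 25 ms
print(f"in-flight budget: {est.in_flight_budget_bytes():.0f} bytes")
```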
CCAs have long been used to help manage congestion—ideally while treating all traffic equally. But in a study unveiled last week at the Internet Measurement Conference in Amsterdam, researchers revealed that BBR doesn’t actually do a very good job of that last part.
In fact, they found that during periods of congestion, a single BBR connection would take up 40 percent of the available bandwidth, leaving the other 60 percent to be fought over by everyone else on the network.
BBR has already been implemented across various parts of Alphabet and Google, and is the dominant CCA used to deliver YouTube traffic. Carnegie Mellon researcher Ranysha Ware told Motherboard the problem only crops up during periods of heavy congestion.
“Imagine you're at a crowded coffee shop, or in a part of the world with only 2G Internet speeds,” Ware said. “In these scenarios, the congestion control algorithms determine how to divide up the bandwidth. If you're doing a large download, having more bandwidth means your download will finish faster—and conversely, that others' downloads will take longer.”
Another example: imagine six people all trying to use a 50 Mbps broadband connection to stream Netflix. A user connected to Netflix using BBR would get 20 Mbps of bandwidth, leaving the remaining 30 Mbps to be split among the other five users. For those five, the difference would be watching video in standard definition versus HD or 4K.
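The arithmetic behind that example can be sketched in a few lines. The 40 percent figure comes from the researchers' finding; the even split among the non-BBR users is a simplifying assumption made here for illustration, as is the helper function itself.

```python
# Back-of-the-envelope arithmetic for the example above: one BBR flow on a
# 50 Mbps link claims roughly 40 percent of capacity, and the rest is split
# evenly among the other flows (the even split is a simplifying assumption).

def share_per_flow(link_mbps, total_flows, bbr_flows=1, bbr_fraction=0.40):
    bbr_share = link_mbps * bbr_fraction / bbr_flows
    others = total_flows - bbr_flows
    other_share = link_mbps * (1 - bbr_fraction) / others if others else 0.0
    fair_share = link_mbps / total_flows  # what an equal split would give
    return bbr_share, other_share, fair_share


bbr, other, fair = share_per_flow(link_mbps=50, total_flows=6)
print(f"BBR user:        {bbr:.1f} Mbps")    # 20.0 Mbps
print(f"each other user: {other:.1f} Mbps")  # 6.0 Mbps
print(f"fair share:      {fair:.1f} Mbps")   # ~8.3 Mbps
```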
Ware told me that the flaws in Google’s algorithm weren’t intentional, and that the company has been highly receptive to criticism. She said work had already begun at Google on version 2 of the algorithm, though testing will be needed to confirm whether the problem has been fixed.
She was also quick to note that Google’s decision to make the algorithm open source was an act of transparency that made discovering the flaws possible in the first place. Many companies deploy CCAs that are neither transparent nor fair, she said.
“Lots of companies deploy new congestion control algorithms, but most of them are proprietary,” she said. For example, content delivery network and cloud service provider Akamai uses an algorithm dubbed FastTCP whose implementation is entirely secret. She noted that Microsoft also uses its own non-transparent congestion control algorithm for Skype.
“It's a classically challenging problem to get these algorithms to play nice with each other, and therefore we worry that there is a lot more unfairness going on on the Internet than we know about,” she said.
Many older algorithms (like Reno and CUBIC) only react to congestion after packets have already been lost. Newer CCAs like BBR are designed to respond to congestion in real time, before loss occurs, and will be important for improving network performance.
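As a rough illustration of that difference (the thresholds and function names here are invented, not taken from any real TCP stack): a loss-based sender like Reno or CUBIC only cuts its congestion window after a packet is lost, while a BBR-style sender adjusts its pacing rate from ongoing bandwidth and delay measurements.

```python
# Loss-based vs. model-based reaction, in caricature. Not any real stack's
# logic; the 1.5x delay threshold and 0.75 back-off gain are illustrative.

def loss_based_update(cwnd, packet_lost):
    """AIMD-style reaction: grow the window until loss, then cut it in half."""
    return cwnd / 2 if packet_lost else cwnd + 1


def model_based_rate(measured_bw_bps, min_rtt_s, measured_rtt_s, gain=1.0):
    """Rate-based reaction: pace at the estimated bottleneck bandwidth,
    backing off when delay rises well above baseline (a queue is building)."""
    if measured_rtt_s > 1.5 * min_rtt_s:
        gain = 0.75
    return measured_bw_bps * gain


print(loss_based_update(cwnd=40, packet_lost=True))              # 20.0
print(model_based_rate(25_000_000, 0.020, 0.035) / 1e6, "Mbps")  # 18.75 Mbps
```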
Equally important: transparency that allows researchers to look under the hood to make sure the end result is an internet that works equally well for everybody.
This article originally appeared on VICE US.