Is the Net Too Neutral?
At the end of February, the Federal Communications Commission (FCC) held a public hearing at Harvard University, investigating claims that the cable giant Comcast had been stifling traffic sent over its network using the popular peer-to-peer file-sharing protocol BitTorrent. Comcast argued that it acted only during periods of severe network congestion, slowing bandwidth-hogging traffic sent by computers that probably didn’t have anyone sitting at them, anyway. But critics countered that Comcast had violated the Internet’s prevailing principle of “Net neutrality,” the idea that network operators should treat all the data packets that travel over their networks the same way.
So far, the FCC has been reluctant to adopt hard-and-fast rules mandating Net neutrality; at the same time, it has shown itself willing to punish clear violations of the principle. But however it rules in this case, some Internet experts feel that Net neutrality is an idea that may have outlived its usefulness.
Mung Chiang, an assistant professor of electrical engineering at Princeton University and a member of last year’s TR35, says that in the name of Net neutrality, network operators and content distributors maintain a mutual ignorance that makes the Internet less efficient. Measures that one group takes to speed data transfers, he explains, may unintentionally impede measures taken by the other. In a peer-to-peer network, “the properties based on which peers are selected are influenced to a large degree by how the network does its traffic management,” Chiang says. But the peer selection process “will have impact in turn on the traffic management.” The result, he says, can be a feedback loop in which one counterproductive procedure spawns another.
Programs using BitTorrent, for instance, download files from a number of different peers at once. But if a particular peer isn’t sending data quickly enough, Chiang says, the others might drop it in favor of one that’s more reliable. Activity patterns among BitTorrent users can thus change very quickly. Network operators, too, try to maximize efficiency; if they notice a bandwidth bottleneck, they route around it. But according to Chiang, they operate on a much slower timescale. A bottleneck caused by BitTorrent file transfers may have moved elsewhere by the time the network operator responds to it. Traffic could end up being rerouted around a vanished bottleneck and down a newly congested pipe.
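The churn Chiang describes can be sketched in a few lines of Python. This is a hypothetical simplification, not the actual BitTorrent choking algorithm: each round the client drops its slowest active peer and replaces it with the fastest unused candidate, so the set of connections the network sees shifts faster than an operator's routing decisions can track.

```python
def rerank_peers(active, candidates, rates, rounds):
    """Hypothetical peer-churn sketch (not the real BitTorrent protocol).

    active:     list of currently connected peer ids
    candidates: other known peers that could be swapped in
    rates:      dict mapping peer id -> measured download rate (KB/s)
    rounds:     how many swap rounds to simulate

    Each round, drop the slowest active peer and add the fastest unused
    candidate. Returns the (sorted) active set after each round, showing
    how quickly the traffic pattern can shift.
    """
    active = list(active)
    pool = [c for c in candidates if c not in active]
    snapshots = []
    for _ in range(rounds):
        if not pool:
            break
        slowest = min(active, key=lambda p: rates[p])
        fastest = max(pool, key=lambda p: rates[p])
        active.remove(slowest)
        pool.remove(fastest)
        active.append(fastest)
        snapshots.append(sorted(active))
    return snapshots


# Example: after two rounds, two of the three original peers are gone.
rates = {"a": 10, "b": 50, "c": 80, "d": 120, "e": 200}
print(rerank_peers(["a", "b", "c"], ["d", "e"], rates, 2))
```

An operator that reroutes around a hotspot observed in round one may find by round two that the heavy flows have already moved to entirely different peers.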
A little information about the data they’re ferrying, Chiang argues, could help network operators manage congestion better. He points out, for example, that the BitTorrent transfers that tend to consume the most bandwidth are video files. But not all frames of video are created equal. Some contain information that will stay fairly constant throughout a scene. Other frames, however, describe minor modifications that occur over time, and these can occasionally be dropped without disrupting the viewing experience. Chiang and his colleagues have created some videos comparing the results of congestion management techniques that selectively drop some frames of video.
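The frame-shedding idea can be illustrated with a small sketch. This is an assumption-laden toy, not Chiang's actual technique: frames are tagged `"key"` (a full reference image, never dropped) or `"delta"` (an incremental update, droppable), and under congestion the most recent delta frames are shed until the stream fits the available capacity.

```python
def shed_frames(frames, capacity_kb):
    """Toy congestion-management sketch (hypothetical, not Chiang's method).

    frames:      list of (kind, size_kb) tuples in display order, where
                 kind is "key" for a full reference frame or "delta"
                 for an incremental update
    capacity_kb: bandwidth budget for the whole sequence

    Keeps every key frame; walks backwards dropping delta frames until
    the sequence fits the budget. Returns the surviving frames.
    """
    kept = list(frames)
    total = sum(size for _, size in kept)
    for i in range(len(kept) - 1, -1, -1):
        if total <= capacity_kb:
            break
        kind, size = kept[i]
        if kind == "delta":
            kept.pop(i)
            total -= size
    return kept


# Example: a 75 KB sequence squeezed into a 65 KB budget loses only
# two incremental frames; both full reference frames survive.
frames = [("key", 30), ("delta", 5), ("delta", 5), ("key", 30), ("delta", 5)]
print(shed_frames(frames, 65))
```

Because only incremental frames are shed, the viewer still receives every full reference image and the scene remains watchable.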
Treating data packets differently–prioritizing some over others–is a violation of the most austere version of Net neutrality. But the idea finds support in what may at first seem an unlikely place. Eric Klinker is chief technology officer at BitTorrent, the company founded by Bram Cohen, inventor of the BitTorrent protocol and another alumnus of the TR35. Klinker testified on behalf of his company at the FCC hearings at Harvard, but he agrees with Chiang that on occasion, impeding BitTorrent video transfers can be harmless. Someone watching a movie, for instance, places a higher priority on the next 10 minutes’ worth of data than on the last 10 minutes’. Packets containing the movie’s end credits could thus be tagged to indicate their low priority, and the network operator would know that they could be delayed during periods of congestion.
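Klinker's tagging scheme might look something like the following sketch. It is an assumption on our part, not BitTorrent's implementation: the application marks each packet `"high"` or `"low"` priority, and when the link is congested the network forwards high-priority packets first, delaying (not discarding) the low-priority ones.

```python
def drain(packets, capacity):
    """Hypothetical priority-queue sketch of application-tagged packets.

    packets:  list of (payload, priority) tuples, priority is "high"
              (e.g. the next ten minutes of a movie) or "low"
              (e.g. the end credits), tagged by the application
    capacity: number of packets the link can forward this interval

    Forwards high-priority packets first; anything over capacity is
    delayed until the next interval, never dropped.
    """
    high = [p for p in packets if p[1] == "high"]
    low = [p for p in packets if p[1] == "low"]
    ordered = high + low
    return ordered[:capacity], ordered[capacity:]


# Example: with room for only three packets, the one low-priority
# packet that doesn't fit is merely held back for the next interval.
queue = [("p1", "low"), ("p2", "high"), ("p3", "high"), ("p4", "low")]
print(drain(queue, 3))
```

The guarantee Klinker demands maps directly onto the second return value: delayed packets must actually be sent once capacity frees up, which is exactly the promise he doubts operators would keep.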
“I don’t like the idea that, if it’s video traffic, Comcast might target it differently than they might target a software download,” Klinker says. “But this mechanism could be implemented by any application.” In order for such a system to work, however, “we absolutely have to have a guarantee from the network that if there is capacity, they will allow this traffic to use as much of it as is available,” Klinker says. “And I think there’s a degree of mistrust at the moment where I’m not sure we would believe the operators.”
Chiang believes that the mistrust between network operators and content distributors has forced the industry into a false choice. Net neutrality seems to be the only alternative to anticompetitive collusion, in which network operators give preferential treatment to their own content or that of their partners. Chiang, however, thinks that there’s a middle ground, and that one of the obstacles to reaching it has been the inability to accurately quantify the costs and benefits of different types of information sharing between network operators and content distributors. Later this month, at the Institute of Electrical and Electronics Engineers’ annual Conference on Information Sciences and Systems, Chiang will outline a mathematical framework for performing just such a cost-benefit analysis. “There is a notion of capacity for a pipe” such as Comcast’s network, Chiang says, “and there is also the notion of capacity for content distribution,” through peer-to-peer networks and other, similar channels. But, Chiang adds, “there hasn’t been a notion of capacity for this joint interaction.”
Chiang says that he and his colleagues have already applied their model to the “special case” of peer-to-peer video streaming. “In this special case, we have recently obtained the exact answer–what capacity is, and how to construct a peer selection algorithm to reach arbitrarily close to that value.” The researchers have also developed a second model that depicts the economic interactions between all the parties involved in Internet content distribution–not just network operators and the developers of peer-to-peer programs, but also content creators (like movie studios), network equipment vendors, end users, and the like.
Chiang acknowledges that his work is just the first step in a process that will be a long time unfolding. “Finding the network capacity,” he says, “will take many years of hard work by computer scientists and mathematicians.” But with the explosion in the popularity of bandwidth-hogging Internet video, the prevalence of peer-to-peer networks, and the increasing frequency with which people buy Internet and television service from the same vendors, any clarification of the complex dynamics of Internet content distribution is welcome.