The Next Internet
A leading advocate of radical change in the Internet says research solutions will straddle the twin concepts of replace and revamp.
As the National Science Foundation (NSF) gears up to fund research into new Internet architectures that would provide a more secure and innovation-friendly network (see the three-part series "The Internet Is Broken"), it will get a running start from existing efforts such as PlanetLab, a collaboration of 1,000 researchers at 300 institutions around the world. PlanetLab is developing a software "overlay network" that runs on routers and lets network operators do everything from managing traffic more efficiently to detecting worms and viruses more quickly.
Larry Peterson is a computer scientist at Princeton University and director of PlanetLab. In an exchange with Technology Review, he explained the rationale behind the NSF project, called the Global Environment for Networking Investigations, or GENI.
Technology Review: The problems with the Internet are fairly well understood. What are the basic views of what needs to be done at a technical level?
Larry Peterson: There is universal agreement that creating the "Future Internet," which meets the demands of the 21st century, is both a national priority and rife with research challenges and opportunities. But there are two general schools of thought as to how to pursue this goal.
One view is that we may be at an inflection point in the societal utility of the Internet, with eroding trust, reduced innovation, and slowing rates of uptake. This view focuses on assumptions built into today's 30-year-old architecture that limit its ability to cope with emerging threats and opportunities, and argues that it is time for a "clean slate" reconceptualization of the Internet architecture.
The other view takes today’s Internet as a given, and argues that future innovation will come in the form of new services and applications running on top of the Internet. Over time, these innovations will likely have a transformational effect on the Internet, but [this argument goes] it is simply not practical to think in terms of replacing all of today’s Internet infrastructure.
TR: To the casual reader, “clean slate” sounds like you are arguing for replacing all of today’s Internet infrastructure. Are you?
LP: No. I interpret Future Internet very broadly, to include innovations at any level of the architecture. Research is equally likely to result in alternative protocols and architectures running inside the network, or in new applications and services as overlays on top of today’s Internet. Collectively, all these layers will form the Future Internet.
While researchers should employ clean-slate thinking that is not constrained by today’s Internet, it does not imply that the outcome will necessarily be an entirely new Internet. In other words, clean slate is a “process,” not a “result.”
It is likely that different researchers will choose to leverage different aspects of today's Internet, while exploring alternatives to other elements. There will likely be opportunities at the boundary between these two perspectives, that is, in exploring how today's architecture is best evolved over time to better support emerging overlay services.
TR: Either way, this is a massive undertaking on a very entrenched infrastructure. How hard will it be, socially or politically, to get this off the ground in the United States?
LP: I think we will be able to tell a compelling story about the need to reconceptualize the Internet as a matter of national importance. The question is whether the computer science research community can (at least partially) set aside its traditional competitive mode of operation to reach consensus on how the best ideas can be synthesized into a coherent and comprehensive network architecture.
TR: Given the international scrap over just the issue of domain naming [see “Net Compromise in Tunis”], how can you get this implemented outside the United States?
LP: We need to have international participation from the outset. There are already GENI-like activities beginning elsewhere, such as in the European Union, Japan, and China. We just need to make sure the United States is one of the countries in the game.
TR: What key new functions or features do you think are needed?
LP: Of course security will be important, but I expect the way in which users identify resources – Web URLs, domain names, host addresses – will be central. This is because the way users identify resources influences everything from enabling mobility, to delaying the selection of the best resource for a given client, to adding flexibility to how routes are selected, to controlling what resources are private and what resources are publicly accessible.
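Peterson's point about "delaying the selection of the best resource" is often called late binding: a name resolves not to one fixed address but to whichever candidate best serves the client at request time. The sketch below is purely illustrative and not GENI or PlanetLab code; the name, addresses, and latency figures are all made up for the example.

```python
# Hypothetical sketch of late binding: a name maps to a set of candidate
# replicas, and each resolution picks the best one at that moment.
# Here "best" simply means lowest measured latency.

replicas = {
    "video.example": [
        {"addr": "10.0.0.1", "latency_ms": 80},
        {"addr": "10.0.0.2", "latency_ms": 12},
        {"addr": "10.0.0.3", "latency_ms": 45},
    ],
}

def resolve(name):
    """Return the address of the currently best replica for a name."""
    candidates = replicas.get(name, [])
    if not candidates:
        raise KeyError(name)
    return min(candidates, key=lambda r: r["latency_ms"])["addr"]

print(resolve("video.example"))  # picks the lowest-latency replica, 10.0.0.2
```

Because the choice happens per request, the same scheme can transparently support mobility (a moving host just updates its entry) and flexible route selection.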
TR: Which parts of this have already been amply demonstrated through efforts like PlanetLab?
LP: Many of the services running on PlanetLab make the Internet behave in a more robust or flexible way. Content distribution networks redirect Web requests to nearby cached copies, both improving the response time a user sees and making the system more robust when there is significant demand for certain content. New addressing schemes add a level of indirection to point-to-point communications, thereby providing a means to multicast data to multiple recipients or implement firewall-like protection. Other services are able to detect anomalous network behavior (e.g., worms, route failures), and still others give users alternative paths through the Internet. All of these services are able to run as overlays on top of today's Internet.
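The indirection idea Peterson describes can be sketched in a few lines. The toy rendezvous node below is an assumption-laden illustration, not PlanetLab code: senders address a logical identifier rather than a host, and the overlay forwards to whoever has registered for that identifier, which yields multicast almost for free and a natural point to impose firewall-like blocking.

```python
# Illustrative sketch (not PlanetLab code): an overlay node that adds a
# level of indirection between senders and receivers.

class RendezvousNode:
    def __init__(self):
        self.subscribers = {}   # logical identifier -> list of callbacks
        self.blocked = set()    # identifiers an operator has chosen to drop

    def subscribe(self, ident, callback):
        """Register a receiver for a logical identifier."""
        self.subscribers.setdefault(ident, []).append(callback)

    def block(self, ident):
        """Firewall-like protection: silently drop traffic for this identifier."""
        self.blocked.add(ident)

    def send(self, ident, payload):
        """Forward payload to every subscriber; return the delivery count.

        The sender never learns receiver addresses, so one send reaches
        many receivers (multicast) through the same indirection point.
        """
        if ident in self.blocked:
            return 0
        callbacks = self.subscribers.get(ident, [])
        for cb in callbacks:
            cb(payload)
        return len(callbacks)
```

For example, two subscribers to the identifier "news" each receive a single `send("news", ...)`, and after `block("news")` the same send delivers to no one.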
TR: Which parts are farther back in the pipeline and when will they be ready?
LP: While there is a lot of research on wireless networks, much of that work is focused on isolated wireless sub-networks. The missing piece seems to be how we create a seamless global network that includes both wired and wireless components, thereby supporting mobility on a world-wide scale. Likewise, understanding how to exploit multiple independent sensor networks on a global basis – to do things like tracking product distribution or creating traffic reports – still needs attention. This “global perspective” is still a ways out, but the prospect of GENI is causing the wired and wireless communities to pay more attention to the broader architectural issues.
TR: Assuming that a good new architecture can be crafted and demonstrated, what’s a possible deployment scenario? Would the federal government be the first adopter?
LP: Government-funded use by large research projects – "big science" – is one scenario; but realizing widespread adoption will require that the research community demonstrate value to a much broader user base. Doing so potentially leads to "service-oriented" ISPs that provide some of these value-added capabilities. Perhaps these new ISPs exist side by side with today's Internet, or perhaps they become the "lens" through which ordinary users interact with the Internet. We can only speculate about how deployment will actually play out, but our goal is simply to lower the barrier to entry for innovators to deploy, and for users to adopt, new capabilities.
TR: As we understand it, today’s basic Internet protocol, called TCP/IP, began on a certain date. Would we need to do that here – launch a new architecture on a specific date?
LP: I don’t think this scenario is at all likely. The challenge for the research community is to find ways to support incremental adoption of whatever new ideas we develop. This requires a way for users to opt-in on a per-user, per-application basis. In the end, user demand will drive deployment.