In his office within the gleaming-stainless-steel and orange-brick jumble of MIT’s Stata Center, Internet elder statesman and onetime chief protocol architect David D. Clark prints out an old PowerPoint talk. Dated July 1992, it ranges over technical issues like domain naming and scalability. But in one slide, Clark points to the Internet’s dark side: its lack of built-in security.
In other slides, he observes that the worst disasters are sometimes caused not by sudden events but by slow, incremental processes – and that humans are good at ignoring problems. “Things get worse slowly. People adjust,” Clark noted in his presentation. “The problem is assigning the correct degree of fear to distant elephants.”
Today, Clark believes the elephants are upon us. Yes, the Internet has wrought wonders: e-commerce has flourished, and e-mail has become a ubiquitous means of communication. Almost one billion people now use the Internet, and critical industries like banking increasingly rely on it.
At the same time, the Internet’s shortcomings have led to deteriorating security and a diminishing ability to accommodate new technologies. “We are at an inflection point, a revolution point,” Clark now argues. And he delivers a strikingly pessimistic assessment of where the Internet will end up without dramatic intervention. “We might just be at the point where the utility of the Internet stalls – and perhaps turns downward.”
Indeed, for the average user, the Internet these days all too often resembles New York’s Times Square in the 1980s. It was exciting and vibrant, but you made sure to keep your head down, lest you be offered drugs, robbed, or harangued by the insane. Times Square has been cleaned up, but the Internet keeps getting worse, both at the user’s level and – in the view of Clark and others – deep within its architecture.
Over the years, as Internet applications proliferated – wireless devices, peer-to-peer file-sharing, telephony – companies and network engineers came up with ingenious and expedient patches, plugs, and workarounds. The result is that the originally simple communications technology has become a complex and convoluted affair. For all of the Internet’s wonders, it is also increasingly difficult to manage and grows more fragile with each passing day.
That’s why Clark argues that it’s time to rethink the Internet’s basic architecture, to potentially start over with a fresh design – and equally important, with a plausible strategy for proving the design’s viability, so that it stands a chance of implementation. “It’s not as if there is some killer technology at the protocol or network level that we somehow failed to include,” says Clark. “We need to take all the technologies we already know and fit them together so that we get a different overall system. This is not about building a technology innovation that changes the world but about architecture – pulling the pieces together in a different way to achieve high-level objectives.”
Just such an approach is now gaining momentum, spurred on by the National Science Foundation. NSF managers are working to forge a five-to-seven-year plan – estimated to cost $200 million to $300 million in research funding – to develop clean-slate architectures that provide security, accommodate new technologies, and are easier to manage.
They also hope to develop an infrastructure that can be used to prove that the new system is really better than the current one. “If we succeed in what we are trying to do, this is bigger than anything we, as a research community, have done in computer science so far,” says Guru Parulkar, an NSF program manager involved with the effort. “In terms of its mission and vision, it is a very big deal. But now we are just at the beginning. It has the potential to change the game. It could take it to the next level in realizing what the Internet could be that has not been possible because of the challenges and problems.”