17 April 2007

Is it time to scrap the internet?

Although the internet has taken nearly four decades to get this far, some university researchers, with the United States federal government’s blessing, want to scrap it all and start over.

The idea may seem unthinkable, even absurd, but many believe a “clean slate” approach is the only way to truly address security, mobility and other challenges that have cropped up since UCLA professor Leonard Kleinrock helped supervise the first exchange of meaningless test data between two machines on September 2 1969.

The internet “works well in many situations, but was designed for completely different assumptions”, said Dipankar Raychaudhuri, a Rutgers University professor overseeing three clean-slate projects. “It’s sort of a miracle that it continues to work well today.”

With slow connections, weak processors and costly storage no longer the constraints they once were, researchers say the time has come to rethink the internet’s underlying architecture, a move that could mean replacing networking equipment and rewriting software on computers to better channel future traffic over the existing pipes.

Even Vinton Cerf, one of the internet’s founding fathers as co-developer of the key communications techniques, said the exercise is “generally healthy” because the current technology “does not satisfy all needs”.

Balancing interests

One challenge in any reconstruction, though, will be balancing the interests of various constituencies. The first time around, researchers were able to toil away quietly in their labs. Industry is playing a bigger role this time, and law enforcement is bound to make its needs for wiretapping known.

There’s no evidence they are meddling yet, but once any research looks promising, “a number of people [will] want to be in the drawing room”, said Jonathan Zittrain, a law professor affiliated with Oxford and Harvard universities. “They’ll be wearing coats and ties and spilling out of the venue.”

The National Science Foundation (NSF) wants to build an experimental research network known as the Global Environment for Network Innovations (Geni), and is funding several projects at universities and elsewhere through Future Internet Network Design (Find).

Rutgers, Stanford, Princeton, Carnegie Mellon and the Massachusetts Institute of Technology are among the universities pursuing individual projects. Other government agencies, including the Defence Department, have also been exploring the concept.

The European Union has also backed research on such initiatives, through a programme known as Future Internet Research and Experimentation (Fire). Government officials and researchers met last month in Zurich to discuss early findings and goals.

A new network could run in parallel with the current internet and eventually replace it, or perhaps aspects of the research could go into a major overhaul of the existing architecture.

These clean-slate efforts are still in their early stages, though, and are not expected to bear fruit for another 10 or 15 years — assuming the United States Congress comes through with funding.

Guru Parulkar, who will become executive director of Stanford’s initiative after heading the NSF’s clean-slate programmes, estimated that Geni alone could cost $350-million, while government, university and industry spending on the individual projects could collectively reach $300-million.

Spending so far has been in the tens of millions of dollars. And it could take billions of dollars to replace all the software and hardware deep in the legacy systems.

Mission critical

Clean-slate advocates say the cosy world of researchers in the 1970s and 1980s doesn’t necessarily mesh with the realities and needs of the commercial internet. “The network is now mission critical for too many people, when in the [early days] it was just experimental,” Zittrain said.

The internet’s early architects built the system on the principle of trust. Researchers largely knew one another, so they kept the shared network open and flexible — qualities that proved key to its rapid growth.

But spammers and hackers arrived as the network expanded and could roam freely because the internet doesn’t have built-in mechanisms for knowing with certainty who sent what.
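
The gap is visible right at the application layer. In the short Python sketch below (all addresses are invented for illustration), an email’s “From” line is pure self-assertion; classic SMTP relays the message without ever checking it:

    from email.message import EmailMessage

    # A minimal sketch of the trust problem: an email's From header is
    # whatever the sender types in. The addresses are fictional examples.
    msg = EmailMessage()
    msg["From"] = "president@example.gov"  # self-asserted; nothing verifies it
    msg["To"] = "anyone@example.org"
    msg["Subject"] = "Urgent request"
    msg.set_content("Classic SMTP relays this without authenticating the sender.")

    print(msg)  # the forged header looks exactly like a genuine one

Retrofits such as SPF and DKIM now try to plug that hole, but they are precisely the kind of bolt-on fix the clean-slate camp wants to design out from the start.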

The network’s designers also assumed that computers were in fixed locations and always connected. That is no longer the case with the proliferation of laptops, personal digital assistants and other mobile devices, all hopping from one wireless access point to another, losing their signals here and there.

Engineers tacked on improvements to support mobility and improve security, but researchers say all that adds complexity, reduces performance and, in the case of security, amounts at most to bandages in a high-stakes game of cat and mouse.

Workarounds for mobile devices “can work quite well if a small fraction of the traffic is of that type”, but could overwhelm computer processors and create security holes when 90% or more of the traffic is mobile, said Nick McKeown, co-director of Stanford’s clean-slate programme.
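
One of the main workarounds, Mobile IP, shows where the cost comes from: packets destined for a roaming device are tunnelled inside an extra IP header. A rough back-of-envelope sketch, assuming the 20-byte outer header of IP-in-IP encapsulation (RFC 2003) and illustrative packet sizes:

    # Back-of-envelope cost of tunnelling mobile traffic. Assumes IP-in-IP
    # encapsulation (RFC 2003), whose outer IPv4 header adds 20 bytes per
    # packet; the packet sizes below are illustrative, not measurements.
    OUTER_HEADER_BYTES = 20

    for packet_bytes in (64, 576, 1500):  # tiny, classic default, Ethernet MTU
        overhead = OUTER_HEADER_BYTES / packet_bytes * 100
        print(f"{packet_bytes:>5}-byte packet: +{overhead:.1f}% bandwidth overhead")

Small packets, typical of voice and gaming, pay the steepest price, and every tunnelled packet also means extra work for routers and home agents; tolerable for a sliver of the traffic, punishing when most of it is mobile.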

The internet will continue to face new challenges as applications require guaranteed transmissions — not the “best effort” approach that works better for email and other tasks with less time sensitivity.
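
What “best effort” means in practice fits in a few lines of socket code. In this Python sketch the destination is a reserved documentation address, so the datagram almost certainly vanishes en route, yet the sender is never told:

    import socket

    # "Best effort" in miniature: the network accepts a packet and promises
    # nothing. 192.0.2.1 sits in TEST-NET-1, a reserved documentation range,
    # so this datagram is almost certainly dropped along the way.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"anyone there?", ("192.0.2.1", 9))
    print("sendto returned: no error, no delivery guarantee, no notification")

Email shrugs off that model because a retry seconds later goes unnoticed; live video or remote surgery cannot wait, which is why researchers want delivery and timing guarantees built into the architecture rather than bolted on top.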

Transitioning to a next-generation internet could be akin to changing the engines on a moving airplane. Routers and other networking devices will likely need replacing; personal computers could be in store for software upgrades.

Headaches could arise because it will not be possible simply to shut down the entire network for maintenance, with companies, groups and individuals depending on it every day. And just think of the costs: potentially billions of dollars.

Difficult transition

Advocates of a clean-slate internet — a restructuring of the underlying architecture to better handle security, mobility and other emerging needs — agree that any transition will be difficult.

Consider that the groundwork for the IPv6 system for expanding the pool of internet addresses was largely completed nearly a decade ago, yet the vast majority of software and hardware today still use the older, more crowded IPv4 technology. The clean-slate initiatives are far more ambitious than that.
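
The scale of that stalled transition is easy to quantify. A quick sketch using Python’s standard ipaddress module compares the two pools:

    import ipaddress

    # The address pools behind the IPv4-to-IPv6 transition: 32-bit versus
    # 128-bit addressing.
    v4 = ipaddress.ip_network("0.0.0.0/0")  # all of IPv4
    v6 = ipaddress.ip_network("::/0")       # all of IPv6

    print(f"IPv4 addresses: {v4.num_addresses:,}")    # 4,294,967,296
    print(f"IPv6 addresses: {v6.num_addresses:.2e}")  # about 3.4e+38

About 4.3-billion addresses on one side and roughly 3.4 x 10^38 on the other, and still the smaller pool dominates; a clean-slate design would face the same inertia with far more moving parts.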

But researchers are not deterred. “The premise of the clean-slate design is, let’s start by saying, ‘How should it be done?’ independent of ‘Can we retrofit it?’” said Andrea Goldsmith, an electrical engineering professor at Stanford. “Once we know what the right thing to do is, then we can say, ‘Is there an evolutionary path?’”

One transition scenario is to run a parallel network for applications that truly need the improved functions. People would migrate to the new system over time, the way some are now abandoning the traditional telephone system for internet-based phones, even as the two networks run side by side.

“There’s no such thing as a flag day,” said Larry Peterson, chairperson of computer science at Princeton, referring to a co-ordinated, all-at-once cutover. “What happens is that certain services start to take off and attract users, and industry players start to take notice and adapt.”

That is not unlike the approach Nasa has in mind for extending the internet into outer space. Nasa has started to deploy the interplanetary internet so its spacecraft will have a common way of communicating with one another and with mission control.

But because of issues unique to outer space — such as a planet temporarily blocking a spacecraft signal, or the 15 to 45 minutes it takes a message to reach Mars and back — Nasa can’t simply slap on the communications protocols designed for the Earthbound internet. So project researchers have come up with an alternative communications protocol for space, and the two networks hook up through a gateway.
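
The delay itself is straightforward physics. A quick sketch, using approximate closest and farthest Earth-Mars distances and counting only light-travel time (no processing or relay hops), shows the range:

    # Round-trip light delay to Mars. The distances are approximate orbital
    # figures; only light-travel time is counted, not processing or relays.
    C_KM_PER_S = 299_792.458  # speed of light

    for label, distance_km in (("closest", 54.6e6), ("farthest", 401e6)):
        minutes = 2 * distance_km / C_KM_PER_S / 60
        print(f"Mars at {label} approach: about {minutes:.0f} minutes round trip")

With acknowledgements that far away, chatty Earth-style handshakes simply stall, which is why the space protocols store data and forward it whenever a link is available, meeting the terrestrial internet at the gateway.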

To reduce costs, businesses might buy networking devices that work with both networks — and they would do so only when they were due to upgrade their systems anyhow.

Some believe the current internet will never go away, and the fruits of the research could go into improving — rather than scrapping — the existing architecture.

“You can’t overhaul an international network very easily and expect everyone to jump on it,” said Leonard Kleinrock, a UCLA professor who was one of the driving forces in creating the original internet. “The legacy systems are there. You’re not going to get away from it.” — Sapa-AP