Last week, Google announced its plans to build an experimental fiber network that would offer gigabit-per-second broadband speeds to up to 500,000 U.S. homes. Among other goals, the company said it wanted to “test new ways to build fiber networks, and to help inform and support deployments elsewhere.”
Google hasn’t released many details yet, but experts believe that the key to successful very-high-speed broadband doesn’t lie in fiber alone. To really speed up the Internet, Google will have to operate at many levels of its infrastructure.
Gigabit-per-second speeds are much faster than, for example, the speed currently offered by high-speed services such as Verizon FiOS. However, Google’s network won’t be the first to reach such speeds. There are several such deployments internationally, including in Hong Kong, the Netherlands, and Australia. Internet2, a nonprofit advanced networking consortium in the United States, has been experimenting with very-high-speed Internet for more than a decade, routinely offering 10-gigabit connections to university researchers.
Existing applications for very-high-speed Internet include the transfer of very large files, streaming high-definition (and possibly 3-D) video, video conferencing, and gaming. Some experts speculate that accessing large data files and applications through the cloud may also require better broadband.
“Just big pipes alone to an end user does not necessarily guarantee that you can deliver high-end applications,” says Gary Bachula, vice president of external relations for Internet2. There are many factors beyond raw bandwidth, Bachula says. For example, an improperly configured router or a university firewall can affect performance and end up acting as a network bottleneck.
“You need to have open networks, you need to publish your performance data, you need to have people troubleshoot your network remotely,” says Bachula. In recent years, Internet2 has been researching tools and technologies that can help find and resolve the performance issues that occur on high-speed connections “in a systematic and seamless way.” Ideally, he says, consumers as well as network managers would be able to use these tools to diagnose the network.
“If we’re really going to realize the vision of some of these high-end applications, it does have to go beyond basic raw bandwidth,” he adds.
It’s also not enough to build a fast hardware infrastructure, says Steven Low, a professor of computer science and electrical engineering at Caltech, and cofounder of the network optimization technology company FastSoft, based in Pasadena, CA. Low believes the protocols that move traffic through the network will also need to be updated to make effective use of very-high-speed capabilities.
For example, the transmission control protocol (TCP), the 20-year-old algorithm that governs most of the traffic flow over the Internet, doesn’t work well at gigabit-per-second speeds. Standard TCP reacts to a lost packet by sharply cutting its sending rate and then ramping back up only gradually, a recovery strategy that leaves much of the available bandwidth unused on very fast connections.
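A rough calculation shows the scale of the problem. The figures below are illustrative assumptions (a 1 Gbps link with a 100 ms round-trip time and 1,500-byte packets), not numbers from the article, and the sketch models only classic Reno-style TCP behavior:

```python
# Back-of-the-envelope sketch of why standard (Reno-style) TCP
# underuses a gigabit link. All parameters are assumed for
# illustration, not taken from the article.

def window_packets(bandwidth_bps, rtt_s, packet_bytes=1500):
    """Packets that must be in flight to fill the link
    (the bandwidth-delay product, expressed in packets)."""
    bdp_bytes = bandwidth_bps * rtt_s / 8
    return bdp_bytes / packet_bytes

def recovery_time_s(bandwidth_bps, rtt_s, packet_bytes=1500):
    """After a single loss, Reno-style TCP halves its window and
    regains only one packet per round trip, so refilling the pipe
    takes roughly W/2 round-trip times."""
    w = window_packets(bandwidth_bps, rtt_s, packet_bytes)
    return (w / 2) * rtt_s

gbit, rtt = 1e9, 0.1  # assumed: 1 Gbps link, 100 ms round-trip time
print(round(window_packets(gbit, rtt)))   # ~8333 packets in flight
print(round(recovery_time_s(gbit, rtt)))  # ~417 s to refill the pipe
```

Under these assumptions, a single dropped packet can leave the connection running below capacity for several minutes, which is why researchers have proposed alternative congestion-control algorithms for high-speed links.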
Low says that similar problems exist in many protocols, and that there are often problems with how protocols coordinate with each other that can further undermine network performance. High-speed broadband to users’ desktops might also be an opportunity to create new systems. “What new applications will become possible that are not now that users actually want to use, and what application protocols are needed to support them?” he says.
Rudolf van der Berg, a telecommunications consultant who was involved in running one of the earliest broadband networks in the world, says that while other companies and organizations have found ways to install gigabit connections, physically laying fiber still accounts for 70 to 80 percent of a project’s cost. Google could make a big contribution by finding more cost-efficient methods, he says.
He also notes that Google’s intention to share the network among multiple providers could influence how the network is structured technically. Networks that run one fiber to a group of homes and then share the bandwidth among them are harder to run according to the open-access model, van der Berg says.
Google hasn’t worked out most of the details of its plans for the experimental network yet, according to a Google spokesperson, but the company has engineers interested in various kinds of experiments with the deployment. Google expects that some of its teams will be interested in finding better ways to deploy fiber, others will want to experiment with the network’s capabilities, and so on.
The company plans to offer its own Internet service to customers in the community or communities it selects for the test bed, and it also expects to partner with other companies that will offer services using its network. Google is currently soliciting proposals from interested communities. The company expects to choose locations by the end of the year.
Internet2’s Bachula says he believes that Google’s initiative will encourage organizations such as the FCC to set concrete goals for broadband access throughout the U.S. By proposing a gigabit per second, Bachula says, Google has opened the way for conversation about how fast connections should be for tomorrow’s Internet.