
The Wait-for-Google-to-Do-It Strategy

America’s communications infrastructure is finally getting some crucial upgrades because one company is forcing competition when regulators won’t.

It’s too often said that some event “changed everything” in technology. But when it comes to the history of broadband in the United States, Google Fiber really did. Before February 2010, when Google asked cities to apply to be first in line for the fiber-optic lines it would install to deliver Internet service to homes at a gigabit per second, the prospects for upgrading Americans’ wired broadband connections looked dismal. The Federal Communications Commission was on the verge of releasing its first National Broadband Plan, which stressed the importance of affordable, abundant bandwidth and the need to spread it by “overbuilding”—stringing fiber to houses and businesses even if they already had service over cable and phone lines with relatively low capacity. Yet at the time, as Blair Levin, executive director of the broadband plan, told me, “for the first time since 1994, there was no national provider with plans to overbuild the current network.”

This was not because of technological hurdles. Instead, it was a simple matter of incentives. Building much faster networks was an expensive task, one that would require the kind of hefty capital expenditures that Wall Street typically frowns upon. (Verizon’s spending on its FiOS TV and high-speed Internet service, for instance, came in the face of deep skepticism from investors, which eventually led the company to curtail its expansion of FiOS nationally.) And since Internet service in most cities was supplied by either a near monopoly or a cozy duopoly in which the two players—typically a cable company and a major telecom provider—barely competed against each other, there was little competitive pressure to improve. As long as all the players kept the status quo intact, it seemed, Internet providers could look forward to years of making sizable profits without having to put much money into their networks. The Internet as we know it was only 15 years old, but ISPs were already shifting into harvesting mode: maximizing revenue from their infrastructure rather than upgrading it. Forget gigabit Internet. The National Broadband Plan set a goal of getting 100 million homes affordable access to download speeds of just one-tenth of a gigabit, or 100 megabits, per second. (Only 15 percent of American homes have connections above 25 megabits now.)

State and local governments had done little to disrupt the status quo or push ISPs to invest in upgrades. And governments also showed little interest in subsidizing, let alone fully paying for, a better infrastructure themselves. (There was money allocated to broadband investment in the 2009 stimulus bill, but it went mainly to wire underserved areas rather than lay fiber.) On the municipal level, most cities still had building regulations and permit requirements that, inadvertently or not, tended to discourage the laying of new line, particularly by new entrants. And in many cases, even if cities were interested in building or operating their own high-speed networks, state laws barred them from doing so. The result of all these factors was that the United States, slowly but certainly, began falling well behind countries like Sweden, South Korea, and Japan when it came to affordable, abundant bandwidth.


Five years later, things look very different. The United States is still behind Sweden and South Korea. But fiber-to-the-home service is now a reality in cities across the country. Google Fiber, which first rolled out in Kansas City in the fall of 2012, is now operating in Austin, Texas, and Provo, Utah, and Google says it will expand next to Atlanta, Salt Lake City, Nashville, and Charlotte and Raleigh-Durham, North Carolina, with another five major metro areas potentially on the horizon. The biggest impact, though, has arguably been the response from big broadband providers. In the wake of Google Fiber’s debut, AT&T announced that it would begin offering one-gigabit connections at prices that would previously have seemed impossible, and the company says it might expand that service into a hundred cities. CenturyLink and Cox now have gigabit service in a few cities, and Suddenlink promises an offering in the near future. (Whether such promises will be kept is, of course, a different question, but the mere fact that they’ve been made is striking.) And even in areas where gigabit connections may be a long time coming, cable companies have dramatically improved speeds for their customers, often at no added cost. Time Warner Cable—one of whose executives declared, at a public conference, that it wasn’t offering gigabit service because consumers didn’t want it—offers connections today that are five times the speed of what was its fastest connection a couple of years ago.

Google Fiber has also inspired action on the municipal level. Gig.U, of which Blair Levin is now executive director, is working on bringing gigabit connections to more than two dozen college towns (where the demand for ultra-high-speed connections is obvious). A consortium of cities in Connecticut is talking with the Australian investment bank Macquarie about a public-private partnership to build a fiber network that the cities would eventually own (an approach similar to the one Stockholm used to build its fiber network). Seeing how Chattanooga, Tennessee, went ahead and built its own network, wiring every home with fiber, cities everywhere are looking to streamline their permit processes in order to make laying these new networks as simple (and affordable) as possible. “When you talked to mayors a few years ago, they would tell you about all the other problems they had that mattered much more than bandwidth,” Levin says. “When you talk to them today, they recognize that this is something they really need, and that it isn’t about streaming TV but about making sure businesses and schools and health-care facilities are going to have what they need in the future.”

None of this means that we’ve reached a true tipping point when it comes to fiber. The share of the country’s homes connected to fiber lines was still only about 3 percent at the end of 2013. But compared with where the U.S. was just a few years ago, progress has been dramatic. Had Google not chosen to do what it did, we’d probably still be stuck with the lack of investment and slow downloads that were our lot in 2010. As Levin puts it, “I would like to believe that all this happened because we made such a brilliant case for the benefits of abundant bandwidth in the National Broadband Plan. But that’s not the case. Without Google, this would not have happened.”

That raises the obvious question, of course, of just why Google did this, given that investing in physical networks is a long way from its core business. Google Fiber was introduced as “an experiment,” but as it has expanded, the company has said that it views the project as a real business and is managing it that way. And obviously, even if the direct return on the investment in Google Fiber ends up being small (as seems likely, given that Google charges about the same for gigabit connections as cable companies charge for much slower ones), the company will reap ancillary benefits from making the Internet more valuable and driving more traffic online.

In the end, though, the reason Google has invested in fiber is less important than the practical outcome of that investment. In effect, what the company is doing—both in building these networks and in pushing national providers to upgrade—is providing a public good whose spillover benefits are likely to be immense, and one that neither the government nor the private sector was doing much to deliver. This is somewhat similar to what Google did, on a smaller scale, back in 2008, when the FCC was auctioning off sections of the airwaves to wireless providers. The FCC had announced that if bids for a certain slice of the spectrum exceeded $4.6 billion, it would attach an open-access requirement that existing wireless providers didn’t want to have to follow. So Google placed a bid that was above the FCC’s price. It did so not in the expectation of winning (though it was prepared to spend the money if it did) but, rather, in order to ensure that regardless of who won—in this case, Verizon—the open-access requirement would go into effect. One might speculate that a similar dynamic is at work in Project Fi, Google’s new wireless-service offering, which challenges most wireless providers’ traditional pricing strategies (as well as their dependence on privately owned networks).

What Google’s doing, in these cases, is using its deep pockets in the interest of broader social ends, with seemingly little concern for short-term returns. This strategy has historical precedent. In the early years of the American republic, there was little appetite for government spending on public works, like roads and canals. But the country needed better roads to facilitate the growth of trade and commerce. So the states turned to private companies, which built turnpikes that they then operated as toll roads. In the late 18th and early 19th centuries, hundreds of these companies invested millions of dollars in laying thousands of miles of road, in effect providing the basic infrastructure for travel in the United States.

What’s interesting about these companies is that while they were, in theory, for-profit, and while they had shareholders, in most cases there was no expectation that they would actually turn a profit in operating the roads—tolls were kept low enough to encourage traffic and commerce. Instead, the shareholders—who were typically local merchants and manufacturers—saw their investments in turnpikes as a way to collectively provide a public good that, not incidentally, would also deliver benefits to them as business owners and consumers. They knew, of course, that other businesses would benefit from these roads even if they didn’t invest in them (the nature of a public good being that everyone can use it). But that didn’t mean the investment wasn’t worth making. It’s hard not to see a similar logic underlying much of what Google does.

When it comes to the current state of innovation and the economy, the implications of Google Fiber are complicated. On the one hand, it is a testament to the power of competition. Google’s willingness to invest the money in a new network threatened cable and telecommunications companies’ dominance, and took customers away from them. That shifted the economic calculus. It’s no coincidence that the cities and regions where cable companies first announced they were building fiber, and offering high-speed connections at affordable prices, have been the places where Google Fiber either is or is going.

At the same time, though, it’s depressing that ensuring competitive broadband markets required the intervention of an outsider like Google. Indeed, the system as it was five years ago was designed to keep us stuck in the broadband dark ages. The government didn’t really do anything to change that, either by acting to make markets more competitive or by investing on its own. We just got bailed out by Google.

The unnerving thing is that so much of the present and future of broadband has come down to the whims of a single company, and a company that, in many ways, doesn’t look or act much like most American firms. If Google didn’t have such a dominant position in search and online advertising, giving it the resources to make big investments without any requirement of immediate return, Google Fiber wouldn’t have happened. And if Google’s leadership weren’t willing to make big long-term investments in projects outside the core business, or if the company didn’t have a dual-share structure that preserved its founders’ power and somewhat insulated its executives from Wall Street pressure, gigabit connections would more than likely be a fantasy in the United States today. As Levin puts it, “We got fortunate that a company with a real long-term view came into this market.” It might be good to design technology policy so that next time around, we don’t need to get so lucky.

James Surowiecki writes “The Financial Page” for the New Yorker. His last article for MIT Technology Review was about Uber’s dynamic pricing algorithm.
