Data store: Facebook’s data center in Prineville, Oregon, is one of several that will help the company cope with its always growing user base.
Facebook is the gateway to the Internet for a growing number of people. They message rather than e-mail; discover news and music through friends, rather than through conventional news or search sites; and use their Facebook ID to access outside websites and applications.
As the keeper of so many people’s social graph, Facebook is in an incredibly powerful position—one reason its IPO this week is expected to be the largest ever for an Internet company.
But potential investors should note the flip side of Facebook’s explosive growth: its bid to become, as one analyst put it, a core piece of the Internet’s infrastructure. Facebook’s own technology infrastructure is expensive to build and operate, and it must scale rapidly.
Infrastructure is Facebook’s biggest cost, and to support growing traffic and network complexity, it will have to spend even more. What’s less clear is whether Facebook’s revenues will likewise increase—especially if additional traffic comes from less lucrative visitors, such as people accessing the site from their phones or from outside North America and Europe.
To date, Facebook has been up to the infrastructure challenge. In less than eight years it has grown to host 526 million daily users, 300 million daily photo uploads, and nine million applications.
Two metrics highlight Facebook’s success in this respect.
First, Facebook spent $860 million, or about $1 per monthly active user, to deliver and distribute its products last year. The bulk of that money went to data center equipment, staff, and operating costs. That is up from roughly 80 cents per user the year before and 60 cents the year before that. For the moment, however, Facebook’s revenue, currently at $4.30 per user, is growing at an even faster clip. That’s a good sign for any potential investor.
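The per-user arithmetic above can be sketched in a few lines. This is a back-of-envelope check, not Facebook's own accounting; the monthly-active-user count is an assumption (roughly 845 million at the end of 2011, which is what the article's "$860 million, or about $1 per user" figure implies), while the cost and revenue figures come from the article.

```python
# Back-of-envelope unit economics from the article's figures.
delivery_cost = 860_000_000        # annual cost to deliver and distribute products (article)
monthly_active_users = 845_000_000 # assumed MAU figure, not stated in the article
revenue_per_user = 4.30            # annual revenue per user (article)

cost_per_user = delivery_cost / monthly_active_users
margin_per_user = revenue_per_user - cost_per_user

print(f"cost/user: ${cost_per_user:.2f}")    # roughly the $1 the article cites
print(f"margin/user: ${margin_per_user:.2f}")
```

As long as revenue per user grows faster than delivery cost per user, that margin widens even as total infrastructure spending climbs.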
Second, Facebook is not only the Web’s biggest social media site; it is also consistently the fastest. In 2010, Facebook’s response time averaged one second in the U.S., but by mid-2011 it had improved to 0.73 seconds, according to AlertSite. By comparison, LinkedIn, the next fastest, took nearly double the time to load. Twitter’s site was a full two seconds slower.
Facebook has come a long way since it was first hosted in Mark Zuckerberg’s dorm room and expanded as he rented additional servers for $80 a month. By late 2009, Facebook disclosed it was using about 30,000 servers, and since then, the number has more than doubled.
As it has grown, the company’s engineers have had to innovate to keep costs down and process a growing volume of data. For example, Facebook designed minimalist custom servers that are cheaper for it to build and run than off-the-shelf ones. It also built a program to optimize the performance of its code, cutting the computing demand on its Web servers by 50 percent. It has open-sourced many of its software innovations and also created the Open Compute Project to widely share its new server designs, with the hope that others could contribute useful innovations.
Today, Facebook is building its own data centers in Oregon, North Carolina, and Sweden. Last year it spent nearly a third of its revenues, $1.1 billion, on capital expenditures for networking equipment and infrastructure. It plans to spend as much as $1.8 billion on such costs this year.
These infrastructure investments are a good sign, says KC Mares, a data-center energy expert and the founder of MegaWatt Consulting; owning and operating rather than leasing data-center space will help Facebook save money in the long term. Other growing tech companies such as Google have pursued this same strategy.
But as Facebook’s IPO filing makes clear, there is also a risk to investing in a global infrastructure to serve all users, regardless of their short-term profitability. It is a balancing act.
“If you add too much, it’s a big cost that eats into your revenues. If you don’t add fast enough, it’s an opportunity cost of customers you can’t serve,” says John Pflueger, a board member of the Green Grid, an IT industry group.
Misjudging infrastructure investment can have major consequences. Just look at Friendster, a social network founded before Facebook and MySpace. Friendster had more than 100 million users, but it quickly fell behind as Facebook came to dominate the landscape.
Jim Scheinman, head of business development at Friendster until 2005, says Friendster made product decisions that required too much computing power. For example, it tried to calculate up to six-degree connections between all users. As a result, the site slowed to a crawl. Today, big Web companies often calculate exactly how much revenue they lose when a page is slow to load, even down to tenths of a second.
Facebook, of course, is long past its early days and has more than a critical mass on its platform: almost half of the world’s population of Internet users. But to stay relevant as it battles companies like Google, it’ll have to stay on the cutting edge, and it will need the computing power to support that.
The question, says Scheinman, is less about costs and capital and more about engineering challenges: “When they have a billion people, and as people use the product more, does that create scaling issues they haven’t yet seen before?”