It was 1999, and Google founders Larry Page and Sergey Brin, fresh from incorporating their oddly named company, needed servers—lots of them. So they went shopping for PC motherboards, disk drives, and power supplies. Not long before, though, they'd been cash-strapped grad students, so to save money they kludged servers together themselves, wiring four motherboards to a single power supply and mounting them on cookie trays, which they lined with cork to avoid shorting anything out. Then they crammed these ugly yet functional servers into racks with network cables dangling everywhere.
It goes without saying that Google's technical infrastructure has improved since those slapdash early days. But Google is loath to reveal much about its back-end operations. In interviews with IEEE Spectrum, the company's engineers would often preface their purposely vague answers with,
"We don't want to talk about specifics" or "We can't talk about it a lot." Google has even attempted to keep secret the locations of many of its three dozen or so data centers, which include 20 scattered across the United States. Of course, that is absurdly hard to do with multimillion-dollar warehouselike facilities that must be approved by local officials, checked by government inspectors, and constructed in plain sight. So considerable information about Google's data infrastructure can now be found by just, well, googling Google.
Facebook, too, quickly catapulted from a student project to a dominant player on the Web. And Facebook's engineers have also had to pedal hard to keep up with the site's speedy rise in popularity. Indeed, these two companies have in many respects led strangely parallel lives—each of them opened its first data center in its seventh year of operation, for example. But Google and Facebook differ in fundamental ways as well, particularly in how they've produced the software that creates all the things we've come to expect from them, and also in how open they are about their technical operations.
Of course, those operations have had to grow in size and complexity to match the exponential rise in demand. Now, on any given day, Google's search engine fields more than a billion queries, and more than a quarter billion people visit the Facebook site. These companies have both had to mount massive engineering efforts to handle all that traffic, and the results of those labors have been impressive indeed.
Google's data center in The Dalles, Ore., completed in 2006, is one of the first that the company built rather than leased. At the time, Google was so hush-hush about this project that it required town officials to sign confidentiality agreements that precluded their even mentioning the facility to the press. Although Google is open enough now about having a data center located on this particular bend of the Columbia River, to this day Google Earth displays only overhead views of the site taken before construction commenced.
Google's if-we-tell-you-we'll-have-to-kill-you attitude toward its data centers isn't ironclad, however. For example, in 2009, Google hosted an energy-efficient data-center summit, where it revealed much about its operations. Days later, a narrated video tour of one of its early data centers, which the company refers to publicly only as "Data Center A," appeared on YouTube, which Google owns. This facility's more than 45 000 servers are mounted in 45 giant shipping containers, giving the interior of the cavernous building a strangely temporary look—as if Google wanted to be able to pack up and move all these servers to another location at a moment's notice.
This modular approach is not unique to Google, but it's not standard practice either. Google has also departed from data-center tradition in the way it handles power outages. Backup generators kick in when the grid fails, but they take several seconds to spin up, so something must keep each server running for the first 10 seconds or more after the lights flicker off; in a conventional data center, that job falls to a large, centralized uninterruptible power supply (UPS).
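To get a feel for the scale of that bridging problem, here is a rough back-of-envelope sketch. The server count is the figure for "Data Center A" mentioned above; the per-server power draw and the exact bridge time are assumed, illustrative values, not numbers Google has published.

```python
# Rough back-of-envelope: the load a UPS must carry to bridge the gap
# between a grid failure and the backup generators coming online.
# The 45 000-server count is the "Data Center A" figure cited above;
# the 250 W average draw per server is an assumed, illustrative value.

servers = 45_000           # servers in the container facility
watts_per_server = 250     # assumed average draw per server, in watts
bridge_seconds = 10        # assumed time until generators pick up the load

total_watts = servers * watts_per_server
energy_joules = total_watts * bridge_seconds
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6 million joules

print(f"Load to bridge : {total_watts / 1e6:.2f} MW")
print(f"Energy for {bridge_seconds} s: {energy_kwh:.1f} kWh")
```

Under these assumptions the energy involved is modest, a few tens of kilowatt-hours, but the UPS must momentarily shoulder the facility's entire multi-megawatt load, which is why the conventional approach calls for a roomful of batteries.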