I am excerpting on this blog roughly 10% of my next book, The New Technology Elite, due out in February (and available for pre-order on Amazon – see badge on left). Chapters 6 through 17 cover 12 attributes of what I call the elite, and each includes a case study. Here is the excerpt from the Chapter 8 case study, Facebook’s Prineville Data Center; the chapter focuses on Efficiency. Note: the text is still going through the publisher’s edits and is subject to change.
“Our server chassis is beautiful,” says Amir Michael, Manager, System Engineering at Facebook.
That’s beautiful in a minimalist kind of way. “It’s vanity-free—no plastic bezels, paint, or even mounting screws. It can be assembled by hand in less than eight minutes and thirty seconds.”
Michael is describing the design principles behind Facebook’s data center, which opened in April 2011 in Prineville, OR (previous data centers were leased).
****************
Everything in the center’s design was driven from a cost and efficiency perspective, and most of the components were designed from the ground up to Facebook’s specifications.
The chassis, for example, is a 1.5U form factor (2.63 inches tall), compared with the standard 1U (1.75 inches tall) chassis. The extra height allows for larger heat sinks and 60 mm fans instead of 40 mm fans; larger fans use less energy to move the same amount of air. The fans consume just 2 to 4 percent of the server’s energy, compared with 10 to 20 percent in typical servers. The heat sinks are spread across the back of the motherboard so that none of them receives air preheated by another heat sink, reducing the work required of the fans.
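To make those percentages concrete, here is a minimal back-of-the-envelope sketch in Python. The 2 to 4 percent and 10 to 20 percent fan-energy shares come from the text above; the 300 W per-server draw is a purely hypothetical assumption used for illustration.

```python
# Back-of-the-envelope look at the fan-energy shares quoted above.
# The percentage ranges come from the text; the 300 W server draw is a
# hypothetical assumption used only for illustration.

SERVER_POWER_W = 300.0  # assumed total power draw of one server (hypothetical)

def fan_power_range_w(total_w, low_share, high_share):
    """Watts spent on fans for a given share of total server power."""
    return total_w * low_share, total_w * high_share

fb_low, fb_high = fan_power_range_w(SERVER_POWER_W, 0.02, 0.04)    # 60 mm fans
typ_low, typ_high = fan_power_range_w(SERVER_POWER_W, 0.10, 0.20)  # 40 mm fans

print(f"Facebook chassis fans: {fb_low:.0f}-{fb_high:.0f} W per server")
print(f"Typical 1U server fans: {typ_low:.0f}-{typ_high:.0f} W per server")
print(f"Savings: roughly {typ_low - fb_low:.0f}-{typ_high - fb_high:.0f} W per server")
```

Even under these assumed numbers, the difference is tens of watts per server, which adds up quickly across tens of thousands of servers.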
The racks are “triplet” enclosures with three rack columns, each column holding 30 of the 1.5U Facebook servers. Each enclosure is also equipped with two rack-top switches to support high network port density.7
Prineville enjoys dry, cool desert weather at its altitude of 3,200 feet. In what is likely a good application of feng shui, the data center is oriented to take advantage of prevailing winds that feed outside air into the building. The center thus gets free, natural cooling, and in winter the heat from the servers can be used to warm office space. On warmer days, when natural cooling is not enough, evaporative cooling takes over: outside air flows over foam pads moistened by water sprays. There are no chillers on-site, a fixture of most other data centers, which saves significantly both on capital and on the ongoing energy needed to run them.
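For readers who want a concrete picture of that airflow logic, here is a minimal sketch of how such a cooling-mode decision might look. The temperature and humidity thresholds are hypothetical assumptions for illustration, not Facebook’s actual setpoints.

```python
# Hypothetical sketch of the cooling-mode selection described above.
# Thresholds are illustrative assumptions, not Facebook's actual setpoints.

def choose_cooling_mode(outside_temp_c: float, outside_humidity_pct: float) -> str:
    """Pick a cooling strategy for outside air entering the facility."""
    if outside_temp_c <= 18:
        # Cool, dry desert air: push it straight over the servers
        # ("free" air-side cooling), mixing in return air as needed.
        return "direct outside-air cooling"
    elif outside_humidity_pct <= 60:
        # Warmer days: pass the air over water-moistened foam pads so
        # evaporation lowers its temperature before it reaches the servers.
        return "evaporative cooling"
    else:
        # There are no chillers on-site, so the remaining lever is to
        # increase airflow rather than mechanically refrigerate the air.
        return "increase airflow"

print(choose_cooling_mode(12.0, 30.0))  # -> direct outside-air cooling
print(choose_cooling_mode(28.0, 25.0))  # -> evaporative cooling
```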
*****************
But Hamilton’s next statement is what really makes the center stand out: “What made this trip (to the Prineville data center) really unusual is that I’m actually able to talk about what I saw.”
He continues:
In fact, more than allowing me to talk about it, Facebook has decided to release most of the technical details surrounding these designs publicly. In the past, I’ve seen some super interesting but top secret facilities and I’ve seen some public but not particularly advanced data centers. To my knowledge, this is the first time an industry-leading design has been documented in detail and released publicly.9
Facebook calls it the Open Compute Project. They elaborate: “Facebook and our development partners have invested tens of millions of dollars over the past two years to build upon industry specifications to create the most efficient computing infrastructure possible. These advancements are good for Facebook, but we think they could benefit all companies. The Open Compute Project is a user-led forum to share our designs and collaborate with anyone interested in highly efficient server and data center designs. We think it’s time to demystify the biggest capital expense of an online business—the infrastructure.”10
************************
This is a remarkable move on the part of Facebook. As we will discuss in Chapter 13, Apple’s iCloud data center was not even visible on Google Earth until a few days before the iCloud public announcement. A giant 500,000-square-foot facility was kept “hidden.” Google, Microsoft, Amazon, and others have also traditionally been secretive about their operations.
Frank Frankovsky, Facebook’s director of hardware design, was quoted as saying: “Facebook is successful because of the great social product, not [because] we can build low-cost infrastructure. There’s no reason we shouldn’t help others out with this.”
******************
One quibble environmentalists had with the Facebook center is that it is fed by the utility Pacific Power, which generates almost 60 percent of its electricity by burning coal. Greenpeace ran a campaign urging Facebook to “unfriend” coal. As we discuss in Chapter 17, Google, through its energy subsidiary, has negotiated several long-term wind-power agreements. It sells that energy on the open market at a loss but strips out the Renewable Energy Credits and applies them, as carbon credits, against the conventional power it also uses to run its data centers. Toward the end of 2011, Greenpeace won a moral victory when Facebook promised to give preference to clean and renewable energy in picking future data center sites. Facebook has also recruited Bill Weihl, formerly Google’s “Energy Czar.”
Barry Schnitt, Facebook’s director of policy communications, provides an alternative perspective on clean energy: “As other environmental experts have established, the watts you never use are the cleanest, and so our focus is on efficiency. We’ve invested thousands of people-hours and tens of millions of dollars into efficiency technology and, when it is completed, our Oregon facility may be the most efficient data center in the world.”
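One common way to make a “most efficient data center” claim concrete is power usage effectiveness (PUE): the ratio of total facility power to the power actually delivered to the IT equipment, where 1.0 would mean zero overhead for cooling and power conversion. Here is a minimal sketch of the calculation; the wattages below are illustrative assumptions, not reported Prineville figures.

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to servers, none to cooling or
# power-conversion overhead. The numbers below are illustrative assumptions.

def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Compute PUE from IT load plus facility overhead (cooling, power losses)."""
    return (it_power_kw + overhead_kw) / it_power_kw

# A conventional data center: overhead approaching the IT load itself.
print(f"Typical facility:   PUE = {pue(1000, 900):.2f}")  # ~1.90

# A Prineville-style facility: free cooling and simplified power
# distribution keep overhead to a small fraction of the IT load.
print(f"Efficient facility: PUE = {pue(1000, 80):.2f}")   # ~1.08
```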
Photo credit: Facebook (450W power supply)
Great excerpt, Vinnie. I'm interested now to read the parts of this chapter you didn't excerpt!
Posted by: Frank Scavo | January 07, 2012 at 10:52 AM