“The heat transfer was not up to the quantities of a normal frying pan,” Trubador reported in the English computer magazine The Register, “and it was a tedious task waiting for the egg to cook, but eleven minutes later it was lovely.” This little story not only illustrates the wonderful eccentricity of English computer enthusiasts; it also highlights just how concentrated the energy use in modern computers has become.
Although computers and Internet-related equipment don’t and probably never will swallow the massive amounts of energy estimated by Peter Huber and Mark Mills in their 1999 Forbes fantasy “Dig More Coal: The PCs Are Coming,”1 scientists on the leading edge of computer technology do worry about the intensity of the energy they use. Many computer experts expect that by the year 2010, a single computer chip might contain more than a billion transistors and produce up to 1,000 watts of heat—enough to cook a pot roast, not just an egg. In fact, some chips get so hot they actually threaten to melt themselves. Individual computers are one thing; now imagine the concentrated heat being generated by millions of chips all thrumming contentedly away in a big data center, one of those immaculately clean, perfectly chilled white rooms seen in sci-fi movies. The obvious problem with computers and computing is that they have traditionally focused on speed and power (calculations per second) and are, in the words of server pioneer Chris Hipp, a lot like muscle cars in the 1960s—power at any price; cost, reliability, and energy consumption be damned!
RMI Gets Into the Data Center Business
In February 2003, Rocky Mountain Institute staffers dug into their encyclopedic Rolodexes and invited a broad cross-section of smart engineers, computer experts, and real estate professionals to talk openly and candidly about data-center design. The event, dubbed Low Power Data Centers: An RMI Charrette, was held 2–5 February in San Jose, in the heart of Silicon Valley. About ninety experts took part in a rich discussion whose ultimate aim was to chart a new course for data centers. Certainly, if a single computer chip producing 1,000 watts of heat is stacked among thousands of other chips doing the same, and many of them hold vital national-security, military, academic, telecommunications, and financial information, someone should be mulling over the perplexing issues of heat and power in data centers. As the charrette unfolded, attendees broke into four main groups and about six or seven subgroups to examine everything from the main electricity supply to options for removing heat from individual chips, and from system architecture to software and compilers. Perhaps the most important thing to come out of the three-plus days of discussion was an awareness of the unintended disconnections that permeate data centers—the event became something of a happy confession, with everyone airing the troubles of their own small realm. Hundreds of ideas were exchanged; rather than list them all here, I’ll touch on a few of the highlights.
First of all, it turns out that most chips don’t need to gobble as much electricity as they currently do. In fact, many chipmakers are finding that efficiency matters as much to customers as computational muscle, and they are starting to design their CPUs to consume less. One group of engineers at the charrette explored this idea further and created—at least on paper—the Hyperserver. The Hyperserver’s disk drives and fans were set apart from the CPU, its operating system was installed on chips, and its power supplies were efficient and “right-sized”; the entire assembly ran on dynamically allocated resources—or, in lay parlance, it powered up only the components it needed at any moment.
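That last idea is simple enough to sketch in a few lines of code. The toy model below is purely illustrative, not the charrette group’s actual Hyperserver design; the component names and wattages are invented for the example.

```python
# A toy sketch of "dynamic allocation": components stay powered down until a
# workload actually needs them. Component names and wattages are made up.
class Component:
    def __init__(self, name, watts):
        self.name, self.watts, self.on = name, watts, False

class Hyperserver:
    def __init__(self, components):
        self.components = {c.name: c for c in components}

    def run(self, workload_needs):
        """Power up only what the current workload needs; power down the rest."""
        for c in self.components.values():
            c.on = c.name in workload_needs
        return sum(c.watts for c in self.components.values() if c.on)

server = Hyperserver([Component("cpu", 40), Component("disk_shelf", 25),
                      Component("fan_bank", 10), Component("nic", 5)])
print(server.run({"cpu", "nic"}), "watts drawn for a compute-only task")
```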
While that group was looking at moving the components of servers around, another group of engineers and architects looked at the way servers are stacked and stuffed into tiny, hot places, and wondered what an optimal design for housing servers might look like. Hipp, who has developed blade servers and installed them in all sorts of lilliputian rabbit warrens, gave a presentation showing thick braids of power cords strung between overloaded server racks, in turn stacked among jumbles of “useless” sheet metal, all cooled by a guy wheeling a small electric fan around. Clearly, most server technicians are afraid of removing old equipment—justifiably, they fear something will break—so they prefer to pile new equipment on top of the old, which leads to hotter and hotter spaces. Cooling fins, fans, and exhaust systems are often poorly located, so this “cooling group” created a variety of naturally ventilated designs for rooms that might house server racks. They also suggested mounting servers’ power supplies on top—an incredibly simple idea, but one not regularly practiced. In another room, a subgroup of the cooling group sketched out several server racks designed with liquid cooling elements—after all, despite the sector’s fear of mixing water and electricity, water is roughly a 3,500-fold better heat carrier than air per unit volume.
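That 3,500-fold figure follows straight from the volumetric heat capacities of the two fluids. The quick check below uses standard room-temperature property values of my own choosing, not numbers presented at the charrette:

```python
# Back-of-envelope check of the "3,500-fold" claim: compare the volumetric
# heat capacity (density x specific heat) of water and air at room conditions.
water_density = 998.0       # kg/m^3
water_specific_heat = 4186  # J/(kg*K)
air_density = 1.2           # kg/m^3 (at roughly 20 C, sea level)
air_specific_heat = 1005    # J/(kg*K)

water_volumetric = water_density * water_specific_heat  # J/(m^3*K)
air_volumetric = air_density * air_specific_heat        # J/(m^3*K)

print(f"water carries ~{water_volumetric / air_volumetric:,.0f}x "
      "more heat per unit volume than air")
# -> roughly 3,500x, matching the figure cited at the charrette
```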
Meanwhile, a “power supply” group looked at the power coming into the building and redesigned the supply system for efficiency and reliability. One of the not-surprising-but-rarely-addressed aspects of data centers is that intermittent energy savings aren’t as valuable as continuous ones. This idea—akin to RMI’s “tunneling through the cost barrier” thesis—could permeate all aspects of data center design. Even though computers and their cooling systems might power down when not in use, they often have other systems, such as uninterruptible power supplies (UPSs), that remain active. In fact, according to information presented at the charrette, power continuously saved is worth roughly $10 per watt over a three-year period, several times as much as power saved intermittently. Yet power supplies aren’t designed to be anywhere near as efficient as that value would justify. (An additional ironic twist: many computers’ energy requirements are bumped up so the machine can run more complex applications, with very fancy graphics, faster—usually unnecessarily. As Lawrence Berkeley National Laboratory’s Jon Koomey asked at one point, “Who here uses more functions in Microsoft Word now than you did ten years ago?” Not many hands went up.)
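To see why a continuously saved watt is worth dollars rather than cents, consider a rough accounting. The electricity price, cooling overhead, and avoided infrastructure capital below are illustrative assumptions of mine, not the charrette’s own worksheet, but they land in the neighborhood of the $10-per-watt figure:

```python
# Illustrative sketch of the value of one watt saved continuously for three
# years. All input figures are assumptions and will vary from site to site.
hours = 24 * 365 * 3            # three years of continuous operation
electricity_price = 0.10        # $/kWh (assumed)
overhead_per_server_watt = 1.0  # extra watts of cooling/conversion per server watt (assumed)
infrastructure_capital = 4.0    # $/W of UPS, generator, and cooling capacity avoided (assumed)

energy_cost = (1 + overhead_per_server_watt) * hours / 1000 * electricity_price
total_value = energy_cost + infrastructure_capital
print(f"~${total_value:.2f} per watt saved continuously over three years")
# -> about $9 with these assumptions, on the order of the $10/W cited
```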
A fourth group looked at “future trends,” which covered a broad range of concerns—the meatiest of which was the misalignment of incentives. For example, the immature real estate model for data centers calls for charging tenants by the square foot rather than by the energy consumed, even though—all else being equal—the juice costs much more than the space. As one attendee noted, “Cramming ‘free’ watts into ‘costly’ square-footage creates uncoolably concentrated heat, hence vastly more cost and unreliability.”
The ideal situation, group members noted, is when incentives are aligned and all involved are working together. Like other attendees, they also called for benchmarking, education, and energy ratings on chips, servers, and racks.
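The cost disparity behind that misalignment is easy to illustrate with rough numbers. The rents, power prices, and rack densities in the sketch below are my own assumptions, not figures from the charrette:

```python
# Illustrative comparison: for a densely packed rack, the electricity bill
# dwarfs the rent on the floor space it occupies, which is why charging
# tenants by the square foot misaligns incentives. All inputs are assumed.
rack_power_kw = 10.0        # IT load of one loaded rack (assumed)
rack_footprint_sqft = 20.0  # rack plus its share of aisle space (assumed)
rent = 30.0                 # $/sqft/year (assumed)
electricity_price = 0.10    # $/kWh (assumed)
cooling_overhead = 1.0      # extra kWh of cooling per kWh of IT load (assumed)

space_cost = rack_footprint_sqft * rent
power_cost = rack_power_kw * 8760 * (1 + cooling_overhead) * electricity_price
print(f"space: ${space_cost:,.0f}/yr   power: ${power_cost:,.0f}/yr   "
      f"ratio: {power_cost / space_cost:.0f}x")
```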
As several RMI staffers compiled the elements of the discussion, a report about the charrette quickly began to organize itself around recommendations—over fifty all told. Some recommendations merely urged the search for a solution to a problem. Other recommendations were so specific they included engineering schematics. All of them seemed best applied in unison.
The report, soon to be complete and downloadable at www.rmi.org/sitepages/pid626.php, is comprehensive but should be seen as a starting point: a document meant to infuse creative thinking into all high-tech sector activities and to point toward the smarter use of fewer watts.
Possibly the most encouraging part of the report is one graph. At the end of the final day’s discussion, Malcolm Lewis, of Constructive Technologies Group, drew the graph to outline the results of the charrette. It depicts an eighty-nine percent potential energy reduction for the whole data center—from CPU to everything else—at lower capital cost and with improved reliability.
The graph is quite remarkable because it could have far-reaching consequences: by using more efficient CPUs and designs (“advanced concepts”), it might be possible to drop the required computing power draw to one-sixth of the amount needed under the business-as-usual (“Current Practice”) approach. Once that happens, the rest of the gadgetry can shrink its requirements in similar fashion (in fact, the heating, ventilation, and air conditioning systems can be cut to less than one-twentieth of their current energy requirements). This bodes well for anyone interested in storing information safely, reliably, and efficiently—and brings to mind thoughts of servers in cars, desks, and backpacks.
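The compounding effect is easy to see with a toy calculation. The baseline split between servers, cooling, and power delivery below is an assumption of mine for illustration, not Lewis’s actual numbers, but it shows how shrinking the IT load lets everything downstream shrink with it:

```python
# Illustrative arithmetic showing how savings compound: cut the IT load and
# the cooling and power-delivery loads fall with it. The baseline split and
# reduction factors are assumptions, not data from the charrette graph.
baseline = {"servers": 100.0, "hvac": 70.0, "power_delivery": 30.0}  # relative units

efficient = {
    "servers": baseline["servers"] / 6,                 # "advanced concepts": ~1/6 the draw
    "hvac": baseline["hvac"] / 20,                       # far less heat, removed more efficiently
    "power_delivery": baseline["power_delivery"] / 6,    # right-sized supplies and UPSs (assumed)
}

before, after = sum(baseline.values()), sum(efficient.values())
print(f"whole-facility energy reduction: {(1 - after / before):.0%}")
# -> about 87% with these assumptions, in the neighborhood of the 89 percent
#    reduction depicted in Lewis's charrette graph
```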
How quickly will the Data Center of the Future be realized? The bursting of the late-1990s dot-com bubble and the economic lull of the past several years have given everyone who works with data centers, computers, and high-tech real estate a chance to do data centers right.
As Wu-chun Feng, Ph.D., co-creator of the Green Destiny computer at Los Alamos National Laboratory and charrette keynote speaker, observed, “Bigger and faster machines are not good enough anymore. Size, power consumption, reliability, and ease of maintenance will be the issues of this decade.”
1 RMI’s quest for a Low Power Data Center and the argument Huber and Mills were making are not antithetical. Mills and Huber claimed that the Internet would consume half of the electric grid’s available power within a decade of May 1999, but technical reviews by Lawrence Berkeley National Laboratory, the Center for Energy and Climate Solutions, RMI, and others found they’d overstated computer-related electricity use by at least eightfold (see “Debunking an Urban Legend,” RMI Solutions, Spring 2003). RMI’s concern with data centers is not the amount of energy they use—which overall is rather small—but its concentration.