Through constant innovation, computing continues to transform the world in phenomenal ways, delivering astounding gains in productivity and connectivity. In concert with the Internet, computing has created new business and communications models – everything from e-commerce to wireless broadband – that enable us to work, shop and communicate more efficiently, and often save energy in the process.
How big is this change? A study by the Information Technology and Innovation Foundation (Digital Prosperity: Understanding the Economic Benefits of the Information Technology Revolution, March 2007) credits information and communication technology (ICT) with two-thirds of the productivity gains in the U.S. economy from 1997 to 2002. These gains help slow growth in energy consumption through greater operational and material efficiencies that allow more to be done with fewer miles traveled. In fact, the study noted that during this period each unit of gross national product was produced using less energy than before.
Worldwide, ICT has played a major role in energy efficiency and sustainable economic growth, as well as job creation. ICT bridges distances, improving logistics and reducing the need for travel. It allows people to work in more flexible ways, increasing efficiency and encouraging innovation. It enables dematerialization, the shift from products to services that can save energy by replacing the need for something physical like a telephone answering machine with a computerized message service that forwards messages to your cell phone or other devices.
Equally important is the role ICT is playing as a tool in helping governments, industries, scientists, and engineers collect data on global warming, study it, and work together on solutions to reduce humanity’s carbon footprint and dependence on fossil fuels. The tool on nearly every desk is a computer connected to the Internet and the massive data storage and number-crunching capabilities of data centers.
What about the power that computers use?
As much as ICT is transforming our world and helping to make it more energy efficient, there is a growing concern about the power computers pull from the wall socket. And rightly so. According to Gartner, the ICT industry accounts for approximately 2 percent of global carbon dioxide emissions. The U.S. Environmental Protection Agency (EPA) reports that data centers consume about 1.5 percent of the nation’s electricity annually. This will continue to grow. According to IDC forecasts, U.S. data centers alone will house 50 percent more servers in 2009 than in 2007.
What this means is that the very technology enabling us to increase our productivity is a significant environmental concern in its own right. That is why so many organizations – from governmental bodies like the European Union and the EPA to the World Wildlife Fund and industry groups like The Green Grid and the Climate Savers Computing Initiative – are seeking and promoting ways to make computing more energy efficient.
The good news is a lot can be done. What’s more, many of the solutions are easy fixes or simple technologies to adopt.
Making data centers more energy efficient
Worldwide, data center operators are experiencing power shortages and are becoming increasingly motivated to improve energy efficiency. According to Gartner, 50 percent of data centers could face insufficient power and cooling capacity by 2008.
Part of the problem is a lag in moving to the latest microprocessor technology. While the latest microprocessors continue to deliver more performance per watt, many data centers are operating with servers using older technology. In March 2007, for instance, Intel introduced two energy-efficient 50-watt server processors, the Quad-Core Intel® Xeon® processor L5320 and L5310. These products provide similar performance to Intel's existing 80- and 120-watt Quad-Core server products, representing a 35- to nearly 60-percent decrease in power. Setting a new standard in energy efficiency, these processors represent a nearly 10-fold improvement in power consumption per core in just one-and-a-half years (compared to 64-bit Intel® Xeon® processors at 110W per core). Even more energy-efficient processors are on the way.
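The per-core arithmetic behind those claims can be sketched as follows. The wattages are the ones quoted above; the calculation itself is only an illustrative back-of-the-envelope check, and the exact baseline Intel used for its comparison is an assumption:

```python
# Rough per-core power comparison using the figures quoted above.
# Assumed baseline: 64-bit Intel Xeon at ~110 W per core.
old_w_per_core = 110.0

# Quad-Core Xeon L5320/L5310: 50 W shared across 4 cores.
new_w_per_core = 50.0 / 4  # 12.5 W per core

improvement = old_w_per_core / new_w_per_core
print(f"{improvement:.1f}x improvement in power per core")  # ~8.8x, i.e. "nearly 10-fold"

# Decrease versus the existing 80 W and 120 W quad-core parts:
for tdp_w in (80, 120):
    print(f"vs {tdp_w} W part: {(1 - 50 / tdp_w) * 100:.0f}% lower power")
```

The arithmetic gives reductions of roughly 38 and 58 percent, consistent with the 35-to-nearly-60-percent range quoted.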
Microprocessors are only part of the story, though. Servers lose approximately one-third of their power as heat through power conversion, voltage regulation and other inefficiencies. Additional energy is lost when data centers don’t make full use of a server’s power management modes. What can be done to fix these issues?
In September 2007, a proof-of-concept demonstration conducted by Intel in conjunction with the Lawrence Berkeley National Laboratory (operated by the University of California for the U.S. Department of Energy) showed how simple configuration changes can deliver significant energy savings at the rack level in today’s data centers.
The result was up to 18 percent measured total power savings over a modern AC power rack. Based on power costs in the United States where the test took place, that could translate into energy savings of more than USD 40,000 a year per 30-server rack. When you think of a data center with hundreds of such racks, the energy savings could be substantial.
Other areas easy to address include power supply units. Typical power supply units are only 75 percent efficient, yet 90 percent efficient units are available that pay for their additional cost in less than a year. When you consider that each server typically has two power supply units, switching to 90 percent efficient units can make a significant difference in energy consumption.
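A simple payback sketch illustrates why the switch can pay for itself so quickly. The 75 and 90 percent efficiencies come from the text; the server load, price premium, and electricity price below are purely illustrative assumptions, not figures from this article:

```python
# Hypothetical payback calculation for swapping a 75%-efficient power
# supply for a 90%-efficient one. Load, price premium, and electricity
# price are assumed values for illustration only.
dc_load_w = 300.0          # assumed DC load the power supply delivers
price_premium_usd = 30.0   # assumed extra cost of the efficient unit
usd_per_kwh = 0.10         # assumed electricity price
hours_per_year = 8760

wall_w_75 = dc_load_w / 0.75   # 400 W drawn from the wall at 75% efficiency
wall_w_90 = dc_load_w / 0.90   # ~333 W drawn at 90% efficiency

saved_kwh_per_year = (wall_w_75 - wall_w_90) * hours_per_year / 1000
payback_years = price_premium_usd / (saved_kwh_per_year * usd_per_kwh)
print(f"saves {saved_kwh_per_year:.0f} kWh/yr, payback in {payback_years:.2f} years")
```

Under these assumptions the unit pays for itself in about half a year, in line with the less-than-a-year claim – and a server with two such supplies saves twice as much.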
Cooling, power management states, and server consolidation
It’s important to realize that every watt of power directly consumed by silicon in a typical data center requires approximately one additional watt for power conversion and another watt for cooling. This means many data centers could achieve significant energy reductions simply by using that initial watt more efficiently. Ways to do that include increasing the use of power management states and making each watt do more through server consolidation (eliminating under-utilized servers by consolidating their workloads onto fewer, better-utilized machines).
Power management technologies are extremely effective at reducing energy usage during periods of light load. Unfortunately, they are often left inactive in the mistaken belief that they might hurt performance. Enhanced Intel SpeedStep Technology, for example, dynamically matches processor speed and voltage to the load so effectively that it can reduce server power consumption and cooling costs by up to 25 percent with little effect on performance. That could be a huge gain right there.
Want more? Consider that many data centers dedicate an application to each server and end up running more servers than necessary for the work being done. Server virtualization (partitioning a physical server computer into multiple virtual servers so that each has the ability to run as its own dedicated machine) enables one server to run multiple operating systems and applications. This reduces the number of physical servers required to accomplish the same work and minimizes power consumption, cooling, equipment costs, and the required floor space. It can also create room for growth where there wasn’t any before.
Cooling can also be made more efficient. Cooled doors, chimneys, raised floors, and other data center design improvements are simple structural solutions for increasing air flow effectiveness and reducing cooling costs. Much information is available from Intel and other sources on ways to design data centers so cooling is directed at the components putting out the heat and not at the facility at large.
Facility-level 380-volt DC distribution
While AC and 48-volt DC distribution are currently used in industry, facility-level 380-volt DC distribution is a promising solution for improving power conversion efficiency. Intel, along with several other industry partners, contributed to a small-scale demonstration of 380-volt DC facility-level distribution coordinated by Lawrence Berkeley National Laboratory. Seven percent input power savings were achieved compared to distribution systems using AC architecture with best-in-class components.
Compared with a typical data center operating at 50 percent power efficiency and drawing 10 megawatts (MW) for its computing infrastructure load, a 380-volt DC data center would draw only 6.67 MW. Assuming a cost of USD 0.10 per kilowatt-hour (kWh), the 380-volt DC architecture could save approximately USD 2.8 million in energy costs per year.
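The savings estimate can be checked with straightforward arithmetic. The draw figures and electricity price come from the text; continuous year-round operation is assumed, and the small gap versus the quoted figure presumably reflects rounding in the 6.67 MW number:

```python
# Checking the annual savings estimate above.
ac_draw_mw = 10.0          # typical AC-distribution data center draw (from text)
dc_draw_mw = 6.67          # 380-volt DC data center draw (from text)
usd_per_kwh = 0.10         # electricity price assumed in the text
hours_per_year = 8760      # assumes continuous operation

saved_kwh = (ac_draw_mw - dc_draw_mw) * 1000 * hours_per_year
annual_savings_usd = saved_kwh * usd_per_kwh
print(f"~USD {annual_savings_usd / 1e6:.1f} million per year")
```

The calculation lands close to the approximately USD 2.8 million quoted.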
Desktop and mobile computers
Naturally, desktop computers are also a major concern because of the enormous number in operation around the world. Massive numbers of computers use massive amounts of energy, and a good portion of that energy is wasted. Of every 100 watts a desktop draws from the wall socket, roughly 50 are lost as heat before the computer does any useful work. We then waste even more energy cooling the workplace to counter that heat.
Much of this energy waste is preventable. For example, most desktop PCs include power management capabilities, but 90 percent of people do not use them.
In June 2007, Intel and Google, working together with other leading computing companies and the World Wildlife Fund, created the Climate Savers Computing Initiative. The organization’s goal is to reduce computer power consumption in offices and homes by 50 percent by 2010. This could save USD 5.5 billion in energy costs and reduce global carbon dioxide emissions from computing platforms by 54 million tons per year. That’s the equivalent of removing 11 million automobiles from the road or eliminating 10 to 20 coal plants.
Climate Savers plans to accomplish these reductions through two major pushes. One is to increase the energy efficiency of every computer made through more energy-efficient components, such as energy-efficient processors and power supplies. Intel is playing an important role here by being first in the industry to shift to 45-nanometer (nm) processor feature sizes. Smaller features use less energy and pack more performance for greater energy efficiency and a healthier planet.
The other major push is increasing the use of power management settings. Climate Savers estimates that 60 percent of its goal is achievable simply through encouraging greater use of the power management capabilities already embedded in today’s computers. For this, the organization is working with the industry to encourage computer manufacturers to establish the most power-efficient power management settings for their systems and educate consumers on how to take advantage of these settings. (More can be learned by visiting www.climatesaverscomputing.org.)
Mobile Internet Devices (MIDs) and ultra-mobile PCs (UMPCs) are also about to receive a radical reduction in power requirements based on Intel 45nm silicon process technology and advanced power management. A new Intel® ultra-mobile processor technology (codenamed Menlow), to be introduced in the first half of 2008, will deliver up to 10 times lower power than the first UMPCs on the market. Such devices, coupled with Internet connectivity, enable enhanced productivity and work collaboration almost anywhere, reducing the need to commute.
Slowing global warming one computer at a time
Obviously, a lot needs to be done in many areas to slow global warming. But as both a solution and an energy consumer, computing merits serious attention. Using computing strategically to develop solutions that reduce our global footprint, while at the same time reducing how much electricity it takes to get a certain level of performance, is a win-win situation for us all.
We can all help in big and little ways, from enabling the power management settings already built into our computers to choosing energy-efficient processors and power supplies.