Amidst rising computing demand and energy costs, data center managers face the difficult challenge of balancing efficiency and availability. Cooling-specific tactics can reduce energy consumption by 50 percent or more without compromising availability.
Over the past few years, energy consumption has become a hot topic in the data center industry. According to survey results from the Data Center Users’ Group—an organization of data center managers and decision-makers—power usage of data centers (average kW use per rack) jumped 23 percent from 2006 to 2009 and respondents predict per-rack averages of 10 kW by 2012. The Uptime Institute reported data center energy use doubled between 2000 and 2006 and predicts it will double again by 2012. Rising energy costs, coupled with a move toward environmental responsibility, have pushed many companies to look at energy efficiency as a way to cut data center operation costs.
More recently, however, the trend of maximizing efficiency as a cost-cutting tactic has backfired, with several high-profile data center outages in the past year proving that availability cannot be sacrificed in the process. Availability was the No. 1 concern reported by respondents in the fall 2009 Data Center Users’ Group survey, after falling behind energy efficiency and heat density in previous years. Modern data centers have evolved alongside new technologies, and in the process the business world has become increasingly dependent on the IT infrastructure that supports its applications. With the progression of technology and unprecedented business demands, a new challenge has emerged: maintaining availability while improving efficiency in an environment where computing demand is growing and IT budgets are shrinking.
Reductions in energy consumption at the IT equipment level have the greatest impact on overall consumption because they cascade across all supporting systems. Emerson Network Power has developed a widely accepted roadmap for optimizing data center energy efficiency. This approach, called Energy Logic, looks at how IT equipment and supporting infrastructure—such as power and cooling—can deliver a 50 percent or greater reduction in data center energy consumption without compromising performance or availability.
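The cascade effect can be made concrete with a little arithmetic. The sketch below uses assumed stage figures (91 percent efficient UPS, 95 percent efficient power distribution, 0.6 W of cooling power per watt of heat removed); these are illustrative values, not Energy Logic's published numbers.

```python
def facility_watts_per_it_watt(ups_eff, pdu_eff, cooling_w_per_w):
    """Facility watts consumed for each watt delivered to IT equipment.

    Each IT watt passes through the UPS and power distribution (dividing
    by their efficiencies) and is ultimately rejected as heat that the
    cooling plant must remove (adding cooling_w_per_w watts per watt).
    """
    delivered = 1.0 / (ups_eff * pdu_eff)   # upstream electrical losses
    cooling = delivered * cooling_w_per_w   # heat-removal overhead
    return delivered + cooling

# Illustrative (assumed) stage figures, not measured data:
multiplier = facility_watts_per_it_watt(ups_eff=0.91, pdu_eff=0.95,
                                        cooling_w_per_w=0.6)
print(f"Each IT watt saved avoids about {multiplier:.2f} W at the meter")
```

Under these assumptions, every watt trimmed at the server avoids roughly 1.85 W at the utility meter, which is why IT-level reductions have the greatest leverage.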
Significant cooling-specific advancements now provide energy efficiency without sacrificing uptime. Here are a few of the best practices:
Variable capacity cooling

Data center cooling systems are sized to handle peak loads on the maximum design day, conditions that rarely occur. Consequently, efficiency at full load often is a poor indicator of actual operating efficiency. Newer technologies, such as digital scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads. Digital scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without cycling compressors on and off.
Typically, CRAC fans run at a constant speed and deliver a constant volume of airflow. Converting these fans to variable frequency drives allows fan speed and power draw to be reduced as load decreases. Fan power is proportional to the cube of fan speed, so a 20 percent reduction in speed yields an almost 50 percent reduction in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs, with a payback of less than one year. In a chilled water-based air conditioning system, for instance, variable frequency drives provide an incremental saving of 4 percent in total data center power consumption.
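The cube-law savings are easy to verify. A minimal sketch of the fan affinity relationship:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity laws: airflow scales with rpm, power with rpm cubed."""
    return speed_fraction ** 3

# A 20 percent speed reduction cuts fan power by almost half:
savings = 1 - fan_power_fraction(0.80)          # 1 - 0.8^3 = 1 - 0.512
print(f"Power savings at 80% speed: {savings:.1%}")  # → 48.8%
```

This is why even modest airflow reductions at partial load translate into large fan energy savings.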
High-density cooling

High-density cooling brings cooling closer to the source of heat through high-efficiency cooling units located near the rack, complementing the base room air conditioning. These systems can reduce cooling power consumption by as much as 65 percent compared to traditional room-only designs. Originally designed to address hot spots or zones within the data center, high-density cooling systems have become a basic building block of the energy-efficient data center, meeting the needs of today’s 10, 20 and 30 kW racks while offering the ability to support the fanless server technologies of the future. The majority of these cooling systems use high-efficiency pumped R134a refrigerant, which flashes to gas on contact with air, so an unlikely leak cannot damage IT equipment and trigger an outage.
Intelligent aisle containment
The efficient and established practice of hot-aisle/cold-aisle alignment sets the stage for another step: containment. Aisle containment prevents the mixing of hot and cold air to improve cooling efficiency. While both hot-aisle and cold-aisle containment systems are available, cold-aisle containment presents some clear advantages. It can be used with or without conventional raised-floor cooling, can be retrofitted easily into existing raised-floor data centers, and works in tandem with the raised floor as well as with high-density cooling systems to produce efficient cooling. By integrating cold-aisle containment with the cooling system and leveraging intelligent controls to closely monitor the contained environment, systems can adjust temperature and airflow independently to match server requirements, resulting in optimal performance and energy efficiency.
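The control logic behind such intelligent systems can be sketched as a simple feedback loop. The following is purely illustrative, with assumed gain and fan-speed limits, not the behavior of any specific product:

```python
def adjust_airflow(supply_temp_c, setpoint_c, fan_fraction,
                   gain=0.05, min_fan=0.3, max_fan=1.0):
    """One step of a simple proportional control loop for a contained
    cold aisle: raise fan speed when supply air runs warmer than the
    setpoint, lower it when the aisle is overcooled. All tuning values
    here (gain, limits) are illustrative assumptions.
    """
    error = supply_temp_c - setpoint_c
    new_fan = fan_fraction + gain * error
    return min(max(new_fan, min_fan), max_fan)   # clamp to safe range

# Aisle running 2 C warm: fan speed steps up from 60 percent.
print(adjust_airflow(supply_temp_c=24.0, setpoint_c=22.0, fan_fraction=0.6))
```

Because the aisle is contained, the controller can trust its supply-air reading and trim airflow close to what the servers actually need rather than overcooling the whole room.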
Economizers

Economizers, which use outside air to reduce the work required of the cooling system, can be an effective approach to lowering energy consumption if properly applied. Two base methods exist, air-side and water-side, each with its own pros and cons. An analysis should first verify which system is most beneficial for the target geography and operating conditions; the risks of each must then be weighed against business needs, operational SLAs and staff skills.
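A first-pass geography screening simply counts how many hours a year the outdoor conditions permit free cooling. The sketch below uses made-up sample temperatures and assumed thresholds; a real study would use local hourly weather data and the site's actual supply-air limits.

```python
def free_cooling_hours(hourly_temps_c, full_free_below_c=15.0,
                       partial_free_below_c=21.0):
    """Count hours of full and partial (mixed-mode) free cooling for an
    air-side economizer. Both thresholds are illustrative assumptions.
    """
    full = sum(1 for t in hourly_temps_c if t < full_free_below_c)
    partial = sum(1 for t in hourly_temps_c
                  if full_free_below_c <= t < partial_free_below_c)
    return full, partial

temps = [8, 12, 14, 16, 19, 22, 25, 18, 13, 9]   # made-up sample hours
full, partial = free_cooling_hours(temps)
print(f"{full} h full free cooling, {partial} h partial")
```

A high count of free-cooling hours argues for an economizer in that climate; the remaining risk analysis (humidity, contamination, failure modes) is then weighed against the operational SLAs noted above.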
Cooling at the server
The next stage in the evolution of data center cooling is to bring cooling even closer to the heat source within the server. Row-based supplemental cooling systems started this trend, and the migration is now toward passive systems that rely on the fans within the server, not the cooling system, to move the air. In the future, server fans will be eliminated entirely and heat will be removed directly at the electronics. This method cools servers by immediately removing the heat they generate rather than pushing hot air back into the data center via server fans. Because the fans are removed from the equation, this approach offers significant savings over traditional cooling systems; in fact, the energy used to cool the servers is less than the energy the server fans alone would have required.
Maintaining both efficiency and availability in the data center will remain a key industry issue as long as technology and user demand continue to advance. Evaluating IT infrastructure support systems, such as cooling, is one way to reduce energy consumption without sacrificing system availability. Although not yet mandated by industry regulation, new cooling approaches such as variable capacity and high-density cooling, aisle containment, economizers and, coming soon, cooling at the server are being implemented by many data center managers to ensure optimal results moving forward.