Improving data center efficiencies can be costly, so learning from the experience of the supercomputing sector could prove invaluable.
Gartner predicts that by next year, half the world’s data centers will have insufficient infrastructure to meet the power and cooling requirements of the latest high-density equipment. That shift will confront managers of mainstream data centers with issues that managers of high-end scientific and technical supercomputing centers have been dealing with for decades: namely, how to properly site infrastructure support equipment, optimize cooling for high server rack densities, balance data center efficiency against business needs, and track all the tiny details that can make or break an implementation.
John West is a Senior Fellow in the Department of Defense’s High Performance Computing Modernization Program, and the Executive Director of the supercomputing center at the US Army Engineer Research and Development Center in Vicksburg, Mississippi. As such, he knows more than most about managing a high-end data center, and concedes that designing a new facility – or retrofitting an old one – is a complex business. Nevertheless, he believes there are a number of steps that can be taken to mitigate the impact.
“I think there’s a lot of low-hanging fruit,” he says. “My gut feeling is that we can probably realize between 20 and 50 percent in efficiency improvements, and if it’s not all low-hanging fruit, it’s probably fairly easy to reach. I think we can get at those kinds of savings, but I think we shouldn’t expect that we can just pull the switch and start making those savings tomorrow. It’s going to take time, and in a lot of cases we’re going to have to rip out the old stuff and put in new infrastructure. My advice is this: as you’re building and upgrading in the future – doing projects you were going to do anyway – go ahead and do those with an eye on reducing your overall energy footprint.”
The good news is that companies with a fixed acquisition budget can now buy much more equipment for their money from one year to the next. “Over the last five years, the amount of equipment that you can buy and plan into a data center for a relatively low amount of money has just gone up and up,” he says. “At the same time, we’ve seen a big increase in the degree to which businesses are connected both to one another and to their users and customers, which has created additional demands for even more infrastructure.”
West believes that these two aspects play into one another. “Often, when energy costs start to become significant in the enterprise, the cost for the data center’s energy usage is transferred into the IT budget. It’s not commonplace, but a lot of places are talking about it and some have already tried it. So that then becomes a pretty strong motivator to go out after that low-hanging fruit and show that you’re moving in the right direction.”
The approach each company takes is likely to vary with its circumstances. “For example, if you just spent $100 million to deploy a totally new architecture 18 months ago, it doesn’t make sense to rip that out now,” says West. “It’ll also depend on the specific goals of the business. Do you have 24/7 availability requirements or can you be a little bit looser on availability? What are your customers’ needs? What can your business afford?” Finally, he says, it’ll depend on what each company is actually capable of doing, as well as who they can get as partners to help them. “All of those things are going to have an impact on what’s effective in a particular organization.”
Indeed, it seems the most important thing an organization thinking about improving data center efficiency can do is just that: think about what improvements it can make. “Make it a top-of-mind item when you’re going into acquisition of a new server, when you’re talking about taking on a new IT mission area, when you’re thinking about expanding your data center,” says West. “Ask these environmental, energy-conscious questions upfront and make them part of your planning process from the beginning, and then see where they take you in the context of your individual situation.”
West suggests server virtualization as one way to improve efficiencies for most organizations – although he says it has limitations as far as his own facility is concerned. “I work in a very high-end scientific computing data center, so in many ways it’s different from a typical enterprise,” he explains. “One of the reasons that virtualization has taken off so well in commercial enterprises is that you’re looking at 5-10 percent utilization rates on a typical enterprise server, which makes that server a good candidate for virtualization. You can put a bunch of different servers on one server to improve utilization rates – and now you’re getting up to where we run in a high-end scientific data center, which is typically in the 80-90 percent range, all the time. So, as far as we’re concerned, virtualization doesn’t make sense, although it can make sense in places where your per unit utilization is a little bit lower.”
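To put rough numbers on that contrast, the short Python sketch below works through the consolidation arithmetic West is describing. The utilization figures and the 80 percent target are illustrative assumptions rather than measurements from his facility, and a real sizing exercise would also have to account for memory, I/O and peak-load headroom.

```python
# Back-of-envelope consolidation estimate: how many lightly used servers
# could, in principle, share one physical host. All figures are illustrative.

def consolidation_ratio(avg_utilization: float, target_utilization: float) -> int:
    """Rough number of servers that fit on one host, by CPU utilization alone.
    Ignores memory, I/O and headroom for peaks, which a real plan must include."""
    return int(target_utilization // avg_utilization)

if __name__ == "__main__":
    enterprise = consolidation_ratio(avg_utilization=0.07, target_utilization=0.80)
    print(f"~{enterprise} typical enterprise servers (5-10% busy) per host at an 80% target")

    # A supercomputing node already running at 80-90% leaves nothing to consolidate.
    hpc = consolidation_ratio(avg_utilization=0.85, target_utilization=0.80)
    print(f"{hpc} HPC nodes per host, so virtualization buys nothing here")
```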
Instead, his team has looked to other areas to get the required efficiency improvements into its own data center environment, with advanced closely coupled cooling technology as a key element of that strategy. “As we’re adopting these kinds of technologies into our program, they’re enabling us to get much higher densities and much more equipment into an existing space than we thought was possible without a retrofit,” he enthuses. “Technologies such as water-cooled doors can reduce the thermal footprint of a rack to something that’s almost neutral, instead of having racks just pump heat into the machine room. So, this whole idea of closely coupled cooling has been really important for us in our program – particularly as we’re a federal program, and major facility modifications quite often require an act of Congress to get approval. Naturally, we want to get as much as we can out of our existing facilities.”
Something else that’s really paid off for West and his team is a greater focus on the importance of floor tiles. “Several years ago we started using fluid dynamics modeling applications to model what was under the floor in terms of the cable bundles, pipes and other typical obstructions – as well as where the servers were located, to establish where the heat sources are – and used the program to help us figure out where we needed to put the perforated tiles to get the best benefit (i.e. the most cold air into the system). That’s not something you think about when you’re installing a supercomputer – you just put a row of perforated tiles around it – but it turns out that’s not always the best solution.”
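The full computational fluid dynamics modeling West describes is well beyond a short example, but the first-order heat-balance arithmetic such a study starts from can be sketched. In the hypothetical Python below, the 20°F temperature rise and the roughly 400 CFM delivered per perforated tile are assumptions; actual delivery depends on underfloor pressure, tile open area and exactly the obstructions a CFD model accounts for.

```python
import math

# First-order heat-balance estimate of perforated tiles per rack.
# This is the arithmetic a CFD study refines; it ignores under-floor
# obstructions, leakage and recirculation, which is why a full
# fluid dynamics model gives better placement answers.

def required_airflow_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at a delta_t_f (deg F) rise.
    From Q[BTU/hr] = 1.08 * CFM * dT and 1 W = 3.412 BTU/hr."""
    return rack_kw * 1000 * 3.412 / (1.08 * delta_t_f)

def tiles_needed(rack_kw: float, cfm_per_tile: float = 400.0) -> int:
    """Perforated tiles per rack, assuming ~400 CFM delivered per tile (an assumption)."""
    return math.ceil(required_airflow_cfm(rack_kw) / cfm_per_tile)

if __name__ == "__main__":
    for kw in (4, 10, 25):
        print(f"{kw} kW rack: ~{required_airflow_cfm(kw):.0f} CFM, {tiles_needed(kw)} tile(s)")
```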
The final thing that’s helped West realize significant efficiency improvements is investing in monitoring equipment to measure power usage, temperature and humidity. “This sounds like a no-brainer, but it’s something that can be difficult to find the money for, and it’s really helped us in terms of knowing just what’s going on in our data center,” he says. “Not only can you see the impact of your changes at that point and start evaluating whether it’s worth making those changes again, but you can also immediately identify your problem areas and deal with them before you find out about them from a thermal interrupt. These things have been important for us as we try to cram more and more into what’s essentially a fixed data center.”
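As a hypothetical illustration of what that kind of monitoring enables, the Python sketch below checks rack-level readings against simple thresholds so problem areas surface before a thermal interrupt does. The sensor fields, limits and sample values are placeholders; an actual deployment would pull readings from the facility’s own metering and building-management interfaces.

```python
from dataclasses import dataclass

# Illustrative threshold check on environmental readings. The fields,
# limits and sample data are placeholders, not a real facility's values.

@dataclass
class RackReading:
    rack_id: str
    inlet_temp_c: float   # server inlet air temperature
    power_kw: float       # measured at the rack PDU
    humidity_pct: float   # relative humidity at the rack

LIMITS = {"inlet_temp_c": 27.0, "power_kw": 12.0, "humidity_pct": (20.0, 80.0)}

def check(reading: RackReading) -> list[str]:
    """Return a list of warnings for a single rack reading."""
    warnings = []
    if reading.inlet_temp_c > LIMITS["inlet_temp_c"]:
        warnings.append(f"{reading.rack_id}: inlet {reading.inlet_temp_c:.1f} C over limit")
    if reading.power_kw > LIMITS["power_kw"]:
        warnings.append(f"{reading.rack_id}: drawing {reading.power_kw:.1f} kW over budget")
    low, high = LIMITS["humidity_pct"]
    if not low <= reading.humidity_pct <= high:
        warnings.append(f"{reading.rack_id}: humidity {reading.humidity_pct:.0f}% out of range")
    return warnings

if __name__ == "__main__":
    samples = [
        RackReading("A03", inlet_temp_c=24.5, power_kw=9.8, humidity_pct=45),
        RackReading("B11", inlet_temp_c=29.2, power_kw=12.6, humidity_pct=18),
    ]
    for sample in samples:
        for warning in check(sample):
            print(warning)
```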
John West offers his top tips for a more efficient data center
1. Do you really need your own data center?
Unless you’re in an industry where having an in-house data center translates directly into revenue, you might be better off running your applications in someone else’s facility. This solution isn’t right for everyone, but as utility costs rise and growing demand squeezes the support infrastructure even tighter, it’s worth considering.
2. Weigh the costs/benefits of green design
Clearly, you need to design your infrastructure to be as efficient as possible, taking steps like specifying high-efficiency power supplies in servers. But the degree to which you can green your power distribution infrastructure will depend upon the value of continuous availability to your organization and the costs of expanding capacity.
3. Design for closely coupled cooling
This approach allows for targeted cooling and control of hot spots, and can result in shorter air paths that require less fan power to move cold air around the room. Closely coupled cooling can support rack densities up to four times those achievable with a typical room-oriented cooling solution.
4. It’s the little stuff that matters
Minimizing under-floor obstructions can help eliminate data center hot spots and prevent air handlers from working against one another. Another step is to commission a fluid dynamics study that uses a computer model to simulate the flow of air around your data center, identifying the causes of cooling problems and their solutions.
5. Move support equipment outside
Properly siting your computer infrastructure support systems will improve efficiency and make it easier to expand capacity in the future. One of the most important steps you can take is to move as much of your power and cooling equipment as possible out of the data center itself, ideally locating the bulk of it outside the building.
6. Monitor for power management
An infrastructure monitoring system for the power and cooling systems needs to be part of any upgrade you are planning. Actively managing and monitoring your energy usage will help you plan for the future and assess the effectiveness of steps you take to improve your data center's efficiency.