Blade servers pack a tremendous amount of punch into increasingly smaller form factors, but can they solve the problem of effectively and efficiently managing the huge data volumes currently swamping companies large and small?
Fred Stack is VP of Marketing for Liebert Precision Cooling, Emerson Network Power. Stack has held a number of executive positions with Emerson, and is responsible for new product development roadmaps that reflect evolving market demands.
As CEO and President of Blade Network Technologies, Vikram Mehta brings over 18 years of global experience in the technology industry, including leadership and executive positions at Nortel and HP.
Arie Brish is a senior high-tech executive and entrepreneur with over 25 years’ experience leading technology organizations. He is currently the CEO of Tehuti Networks, a fabless semiconductor company in the networking space.
Carrie Higbie is Siemon’s Global Network Applications Manager, supporting end-users and active electronics manufacturers, and has been involved in computing and networking for over 25 years in executive and consultant roles.
Steve Campbell is VP Marketing and Solutions of the Server Systems Group at Hitachi Americas, Ltd. Before joining Hitachi in 2006, Steve served as VP of Marketing at Sun and had extensive experience in mission critical and supercomputing.
BM. 10 Gigabit Ethernet is seen as the next wave in networking – it lets networks move data faster, and supports applications that use a lot of bandwidth and require very low latency. So how can companies employ the latest technologies to boost the performance of their networks and take full advantage of the bandwidth that 10GbE offers?
SC. 10 Gigabit Ethernet has proven its value in high-performance computing applications and in the upper echelons of the data center, but it is still very new technology. That means the relevant standards have not yet solidified and the cost is still much higher than it will be in the months and years ahead. With that said, I think there is an immediate opportunity for forward-looking companies to begin using 10 Gigabit Ethernet and blade servers in combination to improve the overall efficiency of their high-end data centers. By migrating the application workloads that do not require 10 Gigabit Ethernet performance to lower-cost blade servers equipped with Gigabit Ethernet switches, IT departments can aggregate their most bandwidth-hungry applications onto a small number of extremely high-performance systems. This could increase the utilization rates of the relatively expensive high-end servers, while still accommodating the performance needs of most users via the blade servers. At the same time, IT organizations can begin to use 10 Gigabit Ethernet on blade servers. Hitachi now offers support for 10 Gigabit Ethernet in its BladeSymphony 1000 product. This provides high performance, high availability and reliability to BladeSymphony. The Hitachi PCI-X 10 Gigabit Ethernet Adapter is designed for space-constrained data center environments.
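Campbell’s tiering idea can be reduced to a simple rule: workloads whose sustained bandwidth exceeds what a Gigabit Ethernet link can carry belong on the high-end 10GbE systems, and everything else is a candidate for lower-cost GbE blades. The sketch below is purely illustrative; the workload names and bandwidth figures are invented assumptions, not measurements.

```python
# Illustrative sketch of the workload-tiering rule described above.
# Workloads needing more sustained bandwidth than one GbE link can carry
# go to the 10GbE high-end tier; the rest go to GbE blade servers.

GBE_CAPACITY_MBPS = 1_000  # nominal capacity of a Gigabit Ethernet link

def tier_workloads(workloads):
    """Split {name: sustained_mbps} into (ten_gbe_tier, gbe_blade_tier)."""
    high, low = [], []
    for name, mbps in sorted(workloads.items(), key=lambda kv: -kv[1]):
        (high if mbps > GBE_CAPACITY_MBPS else low).append(name)
    return high, low

# Hypothetical application profiles:
apps = {"backup": 4_200, "oltp-db": 1_800, "web": 240, "mail": 90}
ten_gbe, gbe = tier_workloads(apps)
print(ten_gbe)  # ['backup', 'oltp-db']
print(gbe)      # ['web', 'mail']
```

In practice the threshold would be set below the nominal link rate to leave headroom, but the partitioning logic is the same.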
VM. 10 Gigabit Ethernet is set to challenge slower alternatives for dominance in LAN and SAN markets. A multitude of factors, from vastly improved economics to explosive demands on network bandwidth, is aligning to fuel exponential growth. Furthermore, the convergence of blade servers with a 10GbE I/O backplane could be a disruptive competitor to traditional network equipment and server vendors. However, until now, the prohibitively high price per port of 10G Ethernet has been a barrier to broad adoption. To overcome this barrier, Blade Network Technologies recently introduced its 10G Ethernet switch priced at less than $500 per port – a price that was formerly predicted to arrive in 2009. The economics of standards-based Ethernet products, coupled with the unprecedented capacity of 10GbE technology, now enables companies to deploy converged voice, video and storage networks and enable new high-bandwidth, low-latency applications.
AB. One way to migrate to 10GbE technology is to upgrade the servers in the data center by installing 10GbE add-on network interface cards (NICs) into the existing servers. The other is to replace the servers with ones that already include a 10GbE connection. In some cases, companies may need to upgrade their cabling infrastructure as well.
CH. For many enterprises, this will certainly be a full end-to-end decision. Category 5e was written out of the 10Gb/s standard, while legacy Category 6 installations have limited 10Gb/s support – and that is only after costly mitigation. Most enterprises will require an upgrade to Category 6A (either shielded or unshielded) or Category 7/Class F (currently the only published standard that supports 10GBASE-T over 100 meters – the 6A standards should publish towards the end of the year or first part of next year). On fiber, the same would apply based on the distances supported and the grade of fiber installed. For those that have the better infrastructures, the main consideration would be the cost of the fiber components over the copper components along with the ongoing maintenance.
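Higbie’s cabling constraints can be captured as a small reach table. The figures below follow her comments plus the commonly cited 55-meter limited-support figure for legacy Category 6 (from TIA TSB-155 guidance); treat all numbers as assumptions to verify against the current standards for any real deployment.

```python
# Illustrative 10GBASE-T reach lookup per cabling category, based on the
# figures discussed above. Numbers are assumptions, not normative values.

MAX_10GBASE_T_REACH_M = {
    "cat5e": 0,      # written out of the 10GBASE-T standard
    "cat6": 55,      # legacy installs, and only after costly mitigation
    "cat6a": 100,    # shielded or unshielded
    "class-f": 100,  # Category 7 / Class F
}

def supports_10gbase_t(category, channel_length_m):
    """Return True if the cabling category supports 10GBASE-T at this length."""
    return channel_length_m <= MAX_10GBASE_T_REACH_M.get(category.lower(), 0)

print(supports_10gbase_t("cat6a", 90))  # True
print(supports_10gbase_t("cat6", 90))   # False
print(supports_10gbase_t("cat5e", 10))  # False
```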
BM. Blade server innovations – particularly virtualization technologies – are heating up the market, and successful consolidation strategies are gaining traction. But what key issues must be addressed in order to make a seamless transition to blade servers, virtualization, and technologies that encompass both?
AB. 10GbE is a must-have technology for virtualization. It provides sufficient network bandwidth to facilitate seamless virtualization.
FS. Blade servers are ‘heating up’ the market both figuratively and literally. Because they compress so much power in such a small space, even the most efficient blade servers can add significantly to a facility’s heat load. They can create hotspots or zones within the data center that cannot be effectively cooled by traditional approaches. The installation of a supplemental cooling infrastructure – essentially an overhead piping system that can deliver refrigerant to cooling modules mounted above or alongside racks – should be considered as part of any blade server consolidation project. Looking at issues relative to longer-term management, advanced monitoring and control technologies can be employed to control server power consumption.
CH. Many companies today are faced with the question of remediating what they have, consolidating into the same space or moving to a new space entirely. This can be a big problem for companies that have been in their space through several iterations of servers, switches, cabling and storage devices. It is important in this decision to have everything (including processes) well documented. One common pitfall is to miss an application here or there that interfaces with another. The documentation should be updated with each patch, revision and equipment change. Any problem and its resolution should be included. Also, with the proper infrastructure already installed as a roadmap (much like a blueprint for a house), the transition is much easier than trying to solve issues after the fact.
VM. Server virtualization is ideal for utilizing servers that are running below capacity, and consolidating them onto one physical server. The result is fewer physical servers to power, cool and manage. Implementing server virtualization on a blade platform yields additional benefits. Some years ago, there were reasons why virtualization and blades were not an ideal match. Today, however, both technologies have advanced at such a rapid pace that from an IT manager’s perspective, they now go hand-in-glove. Blade servers equipped with embedded Ethernet switches are an ideal platform for virtualization. The blade server switch allows enterprises to bring network intelligence back into a virtual server deployment. Implementing virtualization in conjunction with blade servers simplifies the consolidation project and provides a strong foundation for next-generation data centers. Today, many users are implementing virtualization software in concert with blade servers.
SC. First of all, it’s important to point out that blade servers and virtualization are not mutually exclusive. Many people are under the mistaken impression that you can either consolidate with blade servers, or you can consolidate through virtualization. However, the combination of virtualization technology and blade servers can amplify the benefits of consolidation. When you virtualize on a blade server platform, you raise your server efficiency rates and take advantage of enormous compute density – while still reaping all the traditional rewards of virtualization. You can allocate processing power on demand; you can change the allocation quickly to respond to user demand; you can improve the average utilization rates of your servers; and you can avoid purchases of new systems because you’re getting more use out of the servers you already have. However, traditional software-based virtualization technologies have a couple of drawbacks. First, they’re typically third-party solutions, so you have to evaluate multiple products to find the best one; then you need to configure it for your environment, learn how to install and use it, and troubleshoot problems that come up. And, as many customers have learned the hard way, virtualization software can slow down application performance.
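The consolidation arithmetic both panelists describe is essentially a bin-packing problem: sum the utilization of many underloaded servers and pack them onto as few physical hosts as possible. A minimal first-fit-decreasing sketch, with an assumed 0.8 utilization ceiling to leave headroom for virtualization overhead, looks like this (all figures are illustrative):

```python
# Minimal first-fit-decreasing sketch of server consolidation: pack
# per-workload utilizations (fractions of one host's capacity) onto as
# few hosts as possible, never filling a host past `ceiling`.

def consolidate(utilizations, ceiling=0.8):
    """Return the per-host summed utilization after packing."""
    hosts = []  # each entry is the total utilization placed on one host
    for u in sorted(utilizations, reverse=True):
        for i, used in enumerate(hosts):
            if used + u <= ceiling:
                hosts[i] += u  # fits on an existing host
                break
        else:
            hosts.append(u)  # open a new host
    return hosts

# Ten lightly loaded servers collapse onto two hosts:
loads = [0.15, 0.10, 0.20, 0.05, 0.12, 0.08, 0.18, 0.07, 0.25, 0.10]
print(len(consolidate(loads)))  # 2
```

Real placement tools also weigh memory, I/O and affinity constraints, but the CPU-utilization view already shows why consolidation ratios of 5:1 or better are common for lightly loaded servers.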
BM. Blade server usage is growing and customers are increasingly purchasing integrated blade server switching solutions that maximize network intelligence, availability and density. Why is the right networking switching infrastructure of critical importance?
SC. This is a critical issue and there is no single ‘right’ or ‘wrong’ answer as no two environments are identical. One thing for certain is that a mistake in networking infrastructure can be expensive and time-consuming in terms of poor performance, poor quality and needless complexity. Blade servers were specifically designed to address these very issues, which is why their usage is growing so rapidly. Blade servers have evolved in the past four or five years and current blade servers represent the third generation, with features that are mandatory for mission-critical, high-throughput computing. At the same time, blade servers are more cost-effective and energy efficient than rack-and-stack servers, accelerating their adoption even further. Much of this rapid adoption is driven by the acceptance of the blade form factor as an alternative for corporate datacenters that are running out of space, power and cooling, and as a way out of the tangled mess of cables and connections feeding an already tangled mess of network switching.
VM. Concerns about server power consumption, cooling and total cost of ownership can now be alleviated with blade servers equipped with embedded Ethernet switches. However, data center managers have tended to overlook the cost of their networks when considering blade server costs. One of the hidden benefits of blades is network aggregation – using blade servers to reduce expensive aggregation and core network port use. Today, virtualization of servers, consolidation of datacenter resources and Ethernet as a common fabric each make a powerful case for enterprises and mid-sized businesses to adopt blade-centric solutions, and the critical decision about matching IT requirements with blade servers and the right network infrastructure is fast becoming a ‘no brainer’.
AB. Network bandwidth defines the amount of data per given time that the network is capable of supporting. In real life, the network is structured as a cluster of multiple smaller networks. The switch controls the flow of data from one sub-network to another. You could have very fast highways, but you also need efficient interchanges to allow a free flow of traffic without congestion at the interchanges.
CH. If for no other reason than uptime and troubleshooting, the right equipment is critical. Poorly documented networks can increase downtime significantly. Second, in order to use technology to a company’s advantage, any infrastructure should support not only what is installed today, but also what may be installed tomorrow. While that requires something of a crystal ball, the standards provide good guidelines. Realistically, you aren’t going to change out your cabling or your cabinets throughout the 10-15 years they will be in use. What will change are the active components. Being able to support multiple iterations saves tremendous time and money over the long run. The more you have to revisit an infrastructure, the more risk you have of harming what is already there.
BM. As blade servers bring increasingly powerful and dense computing abilities to the data center, they are taking on a more important role within the enterprise and require more effective and comprehensive management to ensure system availability and reliability, and quickly respond to changing business requirements. What design, planning and management techniques/tools are required for effective and comprehensive blade server administration?
AB. One important aspect is thermal management. One must carefully assess where hotspots will form in the data center, and assure enough spot-cooling density to accommodate the number of blades planned for that spot. To minimize the cooling challenge, a careful selection of (low power) blades and accessories is required. Also needed is a 10GbE local area network to allow for seamless virtualization. Last but not least is the virtualization software itself. Make sure that the virtualization software is supported by the platforms you have in your data center.
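The spot-cooling assessment Brish recommends is back-of-envelope arithmetic: nearly all power a blade rack draws is dissipated as heat, so the per-rack heat load must fit within the cooling capacity serving that zone. The blade counts, per-blade wattage and cooling capacity below are illustrative assumptions only.

```python
# Back-of-envelope rack heat-load check. Assumes essentially all drawn
# power becomes heat, plus a 10% overhead for fans and power-supply loss.
# All figures are illustrative, not vendor specifications.

def rack_heat_kw(blades, watts_per_blade, overhead=1.10):
    """Heat load in kW for one rack of blades."""
    return blades * watts_per_blade * overhead / 1000.0

def fits_cooling(blades, watts_per_blade, cooling_capacity_kw):
    """True if the rack's heat load fits the zone's cooling capacity."""
    return rack_heat_kw(blades, watts_per_blade) <= cooling_capacity_kw

# 64 blades at 350 W each against a 20 kW supplemental cooling module:
print(round(rack_heat_kw(64, 350), 1))  # 24.6
print(fits_cooling(64, 350, 20))        # False
print(fits_cooling(48, 350, 20))        # True
```

Running the numbers before deployment, rather than after the first thermal alarm, is exactly the planning discipline the panelists are urging.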
SC. The more distributed the blade servers are – both within the data center and geographically – the more important centralized management becomes to effective administration. You’ve got to be able to manage the extended system, across multiple chassis and racks, from a central point. Look for a management suite that allows the various system components to be managed through a unified dashboard. In the event of any system malfunction, you need to be able to locate the faulty part at a glance. You should also have the ability to manage the logical system configuration of each element by using the service name. There should no longer be any need for administrators to concern themselves with the management of physical resources. In addition, the management suite should provide the ability to set up and configure servers – including capabilities such as N+M cold standby, which is a more cost-effective availability feature than traditional hot standby solutions. And the management suite should be able to monitor server resources, integrate with SNMP-based enterprise management software, send e-mail alerts and notifications, and manage server assets.
FS. It’s not just blade servers that are driving the need for greater availability and faster response to change. This is one of the most significant issues facing IT today: businesses are now so dependent on information technology that any downtime disrupts operations, and yet technology is changing so fast that there is a continual need to introduce new technology or reconfigure existing systems. The power and cooling systems that support IT can play a significant role in helping an organization accomplish this. Look for power and cooling technologies that are easy to scale and reconfigure without compromising overall reliability, and create a plan for growth that ensures you can add power and cooling capacity as needed without impacting operations.
CH. I don’t necessarily think that blade servers require anything more than other environments require. All need proper cooling, connectivity, power, etc. The trick is in planning to assure that all of your equipment works within the parameters listed. Advanced speeds, as well as a structured cabling system that facilitates moves, adds and changes, mean that this part won’t have to be revisited. Proper documentation and planning for any other system will assure minimal downtime. In short, the best plan is a plan, not a knee-jerk reaction to a business need.
VM. There’s a misconception that blades generate more heat than conventional servers; however, when you compare a single blade chassis to a rack of conventional servers, the blade chassis actually requires less power and less cooling than the rack of servers. Both IBM and HP have significantly reduced the heat generated by blade servers. New software is emerging to monitor thermal output. Management tools that have been available for mobile technology are now becoming available for blades and the data center.
New applications and services that support strategic business functions depend on complex network services and protocols. Technology professionals must manage network configuration and reliability across increasingly large and complex blade server deployments. Lost productivity and missed opportunities due to network downtime are costly to an enterprise. To this end, blade network management tools based on the Simple Network Management Protocol (SNMP) standard enable a centralized point of administration for blade switch modules across multiple chassis and racks, streamline time-consuming tasks such as bulk software/image upgrades to load and activate software updates, and simplify the management of complex configurations, such as moving or assigning ports to a large block of filters.
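The value of such bulk operations is easy to see in code: one loop generates a configuration action for every port across many chassis, instead of an administrator touching each port by hand. The command syntax and names below are invented for illustration and do not correspond to any particular switch CLI.

```python
# Hedged sketch of the 'bulk operation' idea: generate one configuration
# action per port across many chassis. The command syntax, chassis names
# and filter identifier are all hypothetical.

def bulk_assign(chassis_ids, ports_per_switch, filter_id):
    """Yield (chassis, port, command) tuples assigning every port to a filter."""
    for chassis in chassis_ids:
        for port in range(1, ports_per_switch + 1):
            yield chassis, port, f"set port {port} filter {filter_id}"

commands = list(bulk_assign(["chassis-1", "chassis-2"], 14, "acl-42"))
print(len(commands))  # 28
print(commands[0])    # ('chassis-1', 1, 'set port 1 filter acl-42')
```

A real management suite would push these actions over SNMP or the switch’s management interface; the point is that the administrative effort stays constant as the deployment grows.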
Greening the data center
“Power consumption is one of the single biggest concerns and re-occurring expenses in the data center,” says Siemon’s Carrie Higbie. “One study stated that greenhouse emissions from IT in general are the same as that from the aviation industry. So whether the issue is being environmentally friendly as a corporation or just from a cost savings standpoint, the issue is at the forefront of most data center managers’ minds. With virtualization now native on many blades and the amount of processing power in each chassis, there are considerable savings. Utilization is nearly linear from 0-50 percent, while from 50-75 percent there is almost no incremental power consumption. Newer energy efficient power supplies make blades even more attractive.”
Making good environmental sense
“Optimizing server efficiency makes sense on several fronts,” says Emerson’s Fred Stack. “Organizations that ignore efficiency issues will find IT energy costs rising steeply as computing capacity and cost per kilowatt of electricity rise simultaneously. In addition, more efficient server technology can enable more efficient space utilization. This is important because a recent study of data center managers found that 96 percent of facilities will reach capacity by 2011 – despite the fact that many of these facilities have been built in the last 10 years. So increased server efficiency, working in concert with supplemental cooling technologies, can eliminate heat-related issues that are limiting the growth of some facilities today. This will allow organizations to continue to add the technology they need to support business growth while delaying the need to invest in new facilities. Finally, improving efficiency makes good environmental sense. Many companies are looking to reduce their carbon footprint, and improved data center efficiency can support this effort.”
Solving power problems
“Virtually every data center manager has a power problem today,” says Hitachi’s Steve Campbell. “The cost is just one aspect of it; the heat dissipation problem is another. An even greater concern is the fact that the demand for power is growing faster than the ability to supply it reliably. That means the problem will only get worse if no action is taken – and it seems that every action you can take is expensive. The one action you can take that doesn’t require a huge capital outlay is to ensure that each system is as efficient as possible – both in terms of power consumption and in terms of operating efficiency. Look at the compute power per watt, not just total compute density. Look at the utilization rates and see what you can do to reduce total power consumption through consolidation. Blade servers can help you on both counts; they are not only far more energy efficient than rack servers, they’re also an excellent platform for server consolidation.”
“Solving the energy crisis in the data center requires an examination of end-to-end efficiency opportunities that comprise the entire datacenter infrastructure – servers, storage and networking,” asserts Vikram Mehta of Blade Network Technologies. “Blade servers provide an opportunity to consolidate network and storage infrastructure. Blades help reduce the size of datacenters, eliminate hurdles and generate less heat, so less cooling is required. As equipment space is reduced, cooling efficiency is improved. As cooling efficiency improves, less energy is consumed. As less energy is consumed, fewer toxins are released into the environment. Phenomenal cost savings can also be realized. In general, blade server switches consume between 25W and 65W compared with external switches that typically consume 300W or more. When you embed the Ethernet switch in your blade server system, then your IT operations become even more ‘green’.”
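The arithmetic behind Mehta’s switch-power figures is straightforward: compare the annual energy of a 25-65W embedded switch against a 300W external one. The $0.10/kWh electricity rate below is an illustrative assumption, not a figure from the discussion.

```python
# Annual energy and cost savings of an embedded blade switch (25-65 W,
# per the figures quoted above) versus a 300 W external switch.
# The $0.10/kWh rate is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_kwh(watts):
    """Energy drawn in one year of continuous operation, in kWh."""
    return watts * HOURS_PER_YEAR / 1000.0

def annual_savings_usd(embedded_w, external_w=300, usd_per_kwh=0.10):
    """Yearly electricity savings from the embedded switch, in dollars."""
    return (annual_kwh(external_w) - annual_kwh(embedded_w)) * usd_per_kwh

# Worst case (65 W) and best case (25 W) embedded switches:
print(round(annual_savings_usd(65)))  # 206
print(round(annual_savings_usd(25)))  # 241
```

Per switch the dollar figure is modest, but it excludes the matching reduction in cooling load and multiplies across every switch module in the data center.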
“The energy efficiency of blades is often misleading,” believes Tehuti’s Arie Brish. “A single blade may be more energy efficient than its non-blade predecessor. However, the high deployment density of blades causes hotspots in the data center that are very difficult to cool, and therefore some existing data centers limit their deployment of blades. It is critical to take power consumption and cooling into consideration before deploying blades into the data center. Choosing the lowest-power hardware is more critical with blades than ever before.”