Power Efficiency of HP Rack Mount vs. Blade Servers, 3.5% difference in performance/watt

Burton Group has a blog entry discussing the comparison of HP rack and blade servers.

Power Efficiency – Rack Mount vs. Blade Servers

HP has just published SPECpower_ssj2008 results for a c7000 blade system, and the results make interesting reading when compared to existing rack mount server results for the same benchmark. Before I say anything else, kudos to HP for publishing a power benchmark on blades; now if we can only get IBM, Dell, Cisco, etc. to follow suit.

The blade system used sixteen identically configured blades to achieve the following result:

ssj_ops @ 100%: 7,210,418
avg. watts @ 100%: 2,783
avg. watts @ idle: 802
ssj_ops/watt: 1,877

Each of the 16 blades used a pair of Intel Xeon 5520 processors and 8GB of memory. The result is interesting because HP just happens to have published results for an identically configured ProLiant DL380 G6 rack mount server. I’ve multiplied the rack mount result by 16 so we can directly compare power efficiency:

ssj_ops @ 100%: 7,037,296
avg. watts @ 100%: 2,720
avg. watts @ idle: 1,060.8
ssj_ops/watt: 1,813

From these results we can draw some useful conclusions (a quick check of the arithmetic follows the list):

  • There is a difference of roughly 3.5% in performance/watt, with blades holding the advantage.
  • Idle power consumption for the blade solution is approximately 25% lower.
  • Peak power consumption for the rack mount solution is approximately 2% lower.
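
These percentages fall straight out of the two result tables above. Here is a minimal sketch of the arithmetic, with the values copied from the tables:

```python
# Quick check of the percentages above, using the SPECpower figures from the
# two tables (16-blade c7000 vs. 16x DL380 G6 rack mount).
blade = {"ops_per_watt": 1_877, "watts_idle": 802,     "watts_100": 2_783}
rack  = {"ops_per_watt": 1_813, "watts_idle": 1_060.8, "watts_100": 2_720}

perf_per_watt_gap = (blade["ops_per_watt"] - rack["ops_per_watt"]) / rack["ops_per_watt"]
idle_gap = (rack["watts_idle"] - blade["watts_idle"]) / rack["watts_idle"]
peak_gap = (blade["watts_100"] - rack["watts_100"]) / blade["watts_100"]

print(f"blade performance/watt advantage: {perf_per_watt_gap:.1%}")  # ~3.5%
print(f"blade idle power lower by:        {idle_gap:.1%}")           # ~24%
print(f"rack mount peak power lower by:   {peak_gap:.1%}")           # ~2.3%
```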

The writer states that the 3.5% difference in performance per watt is not significant, but that conclusion assumes the blade enclosure is fully populated with 16 blades. How many blade enclosures do you actually see fully populated?
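
One way to see why the population assumption matters: the enclosure's shared infrastructure (fans, power supplies, management modules) draws power regardless of how many blades are installed, so performance per watt falls as the enclosure empties. The sketch below uses assumed per-blade and chassis figures, chosen only so that the 16-blade totals land near the published numbers; they are not measured values.

```python
# Illustrative model only: how enclosure population affects performance/watt.
# CHASSIS_OVERHEAD_W, PER_BLADE_LOAD_W, and PER_BLADE_OPS are assumptions
# picked so that 16 blades roughly reproduce the published totals above.
CHASSIS_OVERHEAD_W = 400      # assumed fixed draw: fans, PSUs, management
PER_BLADE_LOAD_W   = 150      # assumed per-blade draw at 100% load
PER_BLADE_OPS      = 450_000  # assumed ssj_ops per blade at 100% load

for blades in (4, 8, 12, 16):
    watts = CHASSIS_OVERHEAD_W + blades * PER_BLADE_LOAD_W
    ops = blades * PER_BLADE_OPS
    print(f"{blades:2d} blades: {ops / watts:,.0f} ssj_ops/watt at 100% load")
```

With this kind of fixed overhead, a half-populated enclosure gives up a noticeable share of its performance-per-watt advantage.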

Maybe the rack mount servers are more energy efficient in most circumstances.

As the author says, it would be great to get the same data from IBM, Dell, and Cisco. But then people wouldn’t be buying the blade enclosures.

Read more

Educating Procurement to buy Energy Efficient Servers, Testing x86 Low Power CPU based Servers

I just posted on the idea that the biggest opportunity to green the data center is procurement. One day later, AnandTech has a post on testing energy-efficient servers. If you want this article all on one page, go to the print version, as the article runs 12 pages.

Testing the latest x86 rack servers and low power server CPUs

Date: July 22nd, 2009
Topic: IT Computing
Manufacturer: Various
Author: Johan De Gelas

The x86 rack server space is very crowded, but it is still possible to rise above the crowd. Quite a few data centers have many "gaping holes" in the racks as they have exceeded the power or cooling capacity of the data center and it is no longer possible to add servers. One way to distinguish your server from the masses is to create a very low power server. The x86 rack server market is also very cost sensitive, so any innovation that seriously cuts the costs of buying and managing the server will draw some attention. This low power, cost sensitive part of the market does not get nearly the attention it deserves compared to the high performance servers, but it is a huge market. According to AMD, sales of their low power (HE and EE) Opterons account for up to 25% of their total server CPU sales, while the performance oriented SE parts only amount to 5% or less. Granted, AMD's presence in the performance oriented market is not that strong right now, but it is a fact that low power servers are getting more popular by the day.

The low power market is very diverse. The people in the "cloudy" data centers are - with good reason - completely power obsessed as increasing the size of a data center is a very costly affair, to be avoided at all costs. These people tend to almost automatically buy servers with low power CPUs. Then there is the large group of people, probably working in the Small and Medium Enterprise businesses (SMEs) who know they have many applications where performance is not the first priority. These people want to fill their hired rack space without paying a premium to the hosting provider for extra current. It used to be rather simple: give heavy applications the (high performance) server they need and go for the simplest, smallest, cheapest, and lowest power server for applications that peak at 15% CPU like fileservers and domain controllers. Virtualization made the server choices a lot more interesting: more performance per server does not necessarily go to waste; it can result in having to buy fewer servers, so prepare to face some interesting choices.

This article is quite long, which is another reason why procurement professionals would not read and digest it; few of them have the engineering skills to follow the article anyway. The writers did not even intend the article for the BDM (business decision maker) responsible for server purchasing, thinking the server admin and CIO are the audience.

Does that mean this article is only for server administrators and CIOs? Well, we feel that the hardware enthusiasts will find some interesting info too. We will test seven different CPUs, so this article will complement our six-core Opteron "Istanbul" and quad-core Xeon "Nehalem" reviews. How do lower end Intel "Nehalem" Xeons compare with the high end quad-core Opterons? What's the difference between a lower clocked six-core and a highly clocked quad-core? How much processing power do you have to trade when moving from a 95W TDP Xeon to a 60W TDP chip? What happens when moving from a 75W ACP (105W TDP) six-core Opteron to a 40W ACP (55W TDP) quad-core Opteron? These questions are not the ultimate goal of this article, but it should shed some light on these topics for the interested.

How many procurement people would understand this?

The Supermicro Twin2

This is the most innovative server of this review. Supermicro places four servers in a 2U chassis and feeds them with two redundant 1200W PSUs. The engineers at Supermicro have thus been able to combine very high density with redundancy - no easy feat. Older Twin servers were only attractive to the HPC world, where computing density and affordable prices were the primary criteria. Thanks to the PSU redundancy, the Twin2 should provide better serviceability and appeal to people looking for a web, database, or virtualization server.

Most versions of this chassis support hot swappable server nodes, which makes the Twin2 a sort of mini-blade. Sure, you don't have the integrated networking and KVM of a blade, but on the flip side this thing does not come with TCO-increasing yearly software licenses and the obligatory expensive support contracts.

By powering four nodes with a 1+1 PSU, Supermicro is able to offer redundancy and at the same time make sure that the PSU always runs at a decent load, thus providing better efficiency. According to Supermicro, the 1200W power supplies can reach up to 93% efficiency. This is confirmed by the fact that the power supply is certified by the Electric Power Research Institute as an "80+ Gold" PSU with 92.4% power efficiency at 50% load and 91.2% at 20% load. With four nodes powered, it is very likely that the PSU will normally run between these percentages. Power consumption is further reduced by using only four giant 80mm fans. Unfortunately, and this is a real oversight by Supermicro, the fans are not easy to unplug and replace. We want hot swappable fans.
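
A bit of arithmetic makes the shared-PSU argument concrete. Assuming roughly 250W per dual-socket node under load (an assumption for illustration, not a Supermicro figure), four nodes keep each of the two supplies near the load band where the quoted efficiencies apply:

```python
# Illustrative arithmetic: load fraction on each of the two redundant 1200W
# supplies as the chassis is populated. NODE_DRAW_W is an assumed figure.
PSU_CAPACITY_W = 1200
NODE_DRAW_W = 250              # assumed draw per dual-socket node under load

for nodes in (1, 2, 4):
    per_psu_w = nodes * NODE_DRAW_W / 2   # 1+1 redundancy shares the load
    print(f"{nodes} node(s): each PSU at {per_psu_w / PSU_CAPACITY_W:.0%} of capacity")
# Four populated nodes put each supply around 40% load, inside the 20-50%
# band where the Gold-rated efficiency figures quoted above were measured.
```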

Supermicro managed to squeeze two CPUs and 12 DIMM slots on the tiny boards, which means that you can outfit each node with 48GB of relatively cheap 4GB DIMMs. Another board has a Mellanox Infiniband controller and connector onboard, and both QDR and DDR Infiniband are available. To top it off, Supermicro has chosen the Matrox G200W as a 2D card, which is good for those who still access their servers directly via KVM. Supermicro did make a few compromises: you cannot use Xeons with a TDP higher than 95W (who needs those 130W monsters anyway?), 8GB DIMMs seem to be supported only on a few SKUs right now, and there is only one low profile PCI-e x16 expansion slot.

The Twin2 chassis can be outfitted with boards that support Intel "Nehalem Xeons" as well as AMD "Istanbul Opterons". The "Istanbul version" came out while we were typing this and was thus not included in this review.

The power measurements used the following gear.

Power was measured at the wall by two devices: the Extech 38081…

…and the Ingrasys iPoman II 1201. A big thanks to Roel De Frene of Triple S for letting us test this unit.

The Extech device allows us to track power consumption every second, the iPoman only every minute. With the Supermicro Twin2, we wanted to measure four servers at once, and the iPoman device was handy for measuring the power consumption of several server nodes at once. We double-checked power consumption readings with the Extech 38081.
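
As an aside on how readings at different cadences compare, here is a minimal sketch (with made-up sample values, not measured data) of turning logged wall-power samples into energy figures, whether they arrive once a second or once a minute:

```python
# Sketch only: integrate logged power samples (watts) into energy (Wh).
def energy_wh(samples_w, interval_s):
    """Sum power samples taken every interval_s seconds into watt-hours."""
    return sum(samples_w) * interval_s / 3600

per_second = [512, 515, 530, 528, 520] * 12   # one minute of 1 Hz readings
per_minute = [521]                            # same minute as one averaged reading

print(f"1s sampling:  {energy_wh(per_second, 1):.2f} Wh")
print(f"60s sampling: {energy_wh(per_minute, 60):.2f} Wh")
```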

Read more

Biggest Opportunity to Green the Data Center - Procurement

I just had a pleasant and thought-provoking conversation with IBM’s VP of Deep Computing, David Turek, regarding IBM’s press release on energy-efficient supercomputers.

"Modern supercomputers can no longer focus only on raw performance," said David Turek, vice president, deep computing, IBM. "To be commercially viable these systems most also be energy efficient. IBM has a rich history of innovation that has significantly increased energy efficiency of our systems at all levels of the system that are designed to simultaneously reduce data center costs and energy use."

Dave and I discussed many things. One of the areas is the role of procurement in understanding the impacts and issues of energy cost.

Procurement is the acquisition of goods and/or services at the best possible total cost of ownership, in the right quality and quantity, at the right time, in the right place and from the right source for the direct benefit or use of corporations, individuals, or even governments, generally via a contract.

IBM’s Blue Gene group has a paper on TCO to help procurement groups see how energy costs affect TCO.

Now ask yourself: how many procurement people know how energy costs affect the TCO of data center services?

Unfortunately, training the whole procurement staff to learn the impact of energy is an impossible task. The answer is to add data center impact modeling software/tools to the procurement process that calculate TCO including energy costs.
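
As a rough illustration of what such a tool would compute, here is a minimal sketch of folding energy into server TCO. All of the figures below (purchase price, power draw, PUE, electricity rate, service life) are assumptions for illustration, not numbers from the IBM paper:

```python
# Minimal TCO-with-energy sketch; every figure below is an assumed example.
purchase_price = 6_000      # USD per server
avg_power_w = 350           # average draw at the server, watts
pue = 1.8                   # facility overhead multiplier
rate_per_kwh = 0.10         # USD per kWh
years = 4                   # service life

energy_kwh = avg_power_w / 1000 * pue * years * 365 * 24
energy_cost = energy_kwh * rate_per_kwh
tco = purchase_price + energy_cost

print(f"Energy over {years} years: {energy_kwh:,.0f} kWh = ${energy_cost:,.0f}")
print(f"Energy is {energy_cost / tco:.0%} of the ${tco:,.0f} purchase-plus-energy TCO")
```

Even with these modest assumptions, energy is a quarter or more of the server's lifetime cost, which is exactly the point a procurement tool needs to surface.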

Think about the role of procurement as you green your data center. I bet none of you have, but hopefully now you will.

Read more

IBM claims 90% of Top 20 Energy Efficient Supercomputers

IBM will release a press release on its energy-efficient supercomputers on Monday, July 13. I don’t have the link yet, but the text is below.

Ironically, I just sat down with a person at ARM to discuss energy efficiency, and he mentioned that when he asked about energy use at a supercomputing conference in 2005, people thought he was asking a stupid question.

It’s great to see the question “how much energy does your supercomputer use?” is now a normal part of a purchase decision.

Note this quote below.

"Modern supercomputers can no longer focus only on raw performance," said David Turek, vice president, deep computing, IBM. "To be commercially viable these systems most also be energy efficient. IBM has a rich history of innovation that has significantly increased energy efficiency of our systems at all levels of the system that are designed to simultaneously reduce data center costs and energy use."

Report Finds IBM Supercomputers Most Energy Efficient in the World
IBM Dominates Green500; 90 percent of Top20 Energy Efficient Supercomputers Made by IBM, Staggering 57 Percent of Top100 from IBM

ARMONK, N.Y., July 13, 2009. . . A new list announced today found that IBM (NYSE: IBM) supercomputers already deemed the most powerful in the world are also the most energy efficient according to the findings of the latest Supercomputing 'Green500 List' announced by Green500.org.

Energy efficiency--including performance per watt for the most computationally demanding workloads--is a core design principle in developing IBM systems.  IBM offers the broadest range of generally applicable supercomputers represented on the Green500 List including Blue Gene, Power servers, iDataPlex, BladeCenter and hybrid clusters.

The list shows that 18 of the Top20 most energy efficient supercomputers in the world are built on IBM high performance computing technology. The list includes supercomputers from across the globe being used for a variety of applications such as astronomy, climate prediction and pharmaceutical research. IBM also holds 57 of the Top100 positions on this list.

The number one most energy efficient system in the world -- an IBM supercomputer based on an IBM BladeCenter QS22 at the Interdisciplinary Centre for Mathematical and Computational Modeling, University of Warsaw -- produces more than 536 Mflops (millions of floating point operations per second) per watt of energy.

The world's fastest supercomputer, the IBM supercomputer at Los Alamos National Laboratories, the machine that first broke through the petaflop barrier, is ranked the fourth most energy efficient supercomputer in the world capable of over 444 Mflops per watt of energy, while the second  fastest supercomputer in the world manufactured by Cray is ranked 90th on the Green500 List, producing only 152 Mflops per watt.

"Modern supercomputers can no longer focus only on raw performance," said David Turek, vice president, deep computing, IBM. "To be commercially viable these systems most also be energy efficient. IBM has a rich history of innovation that has significantly increased energy efficiency of our systems at all levels of the system that are designed to simultaneously reduce data center costs and energy use."

The Green500 list is published by Green500.org. It provides a ranking of the most energy-efficient supercomputers in the world and serves as a complementary view to the TOP500 list of worldwide supercomputers announced last month by Top500.org.

More information about the Green500 List is available at http://www.green500.org
More information about the TOP500 List is available at http://www.top500.org
More information about IBM and HPC Solutions: www.ibm.com/deepcomputing
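
For perspective on the Mflops/watt figures quoted in the press release, a quick ratio check (values copied from the text above):

```python
# Ratio check on the Mflops/watt figures quoted in the press release.
mflops_per_watt = {
    "IBM BladeCenter QS22 (Warsaw)": 536,
    "IBM system at Los Alamos":      444,
    "Cray (2nd fastest, #90 green)": 152,
}
baseline = mflops_per_watt["Cray (2nd fastest, #90 green)"]
for system, value in mflops_per_watt.items():
    print(f"{system}: {value} Mflops/W ({value / baseline:.1f}x the Cray figure)")
```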

Read more

Instant Virtualized Physical Infrastructure – Stratascale addresses market for hybrid physical and virtual servers.

I just interviewed Reed Smith, Director of Product Management for StrataScale, and discussed their IronScale product. Their announcement is here.

This is an interesting extension of RagingWire’s colocation services to host a virtualized infrastructure.

In many ways the StrataScale offering is a greener data center solution for companies with 50-500 employees who run their own servers on site or in a colocation facility. Why? Because the solution is designed with virtualization as an assumption. The good thing, versus services like Amazon Web Services, is that you can also choose to skip virtualization and run directly on the hardware. Also, the servers are not shared with other customers; the virtualized servers are all yours.

This was my first chat with Reed, but I am sure I’ll be talking to him again to discuss Stratascale’s solution.

Part of Stratascale’s value is its UI for system provisioning.

You can watch a demo here.

The physical server options are listed below.

Available in 3 levels of automated integrated bundled environments, IronScale is built on real, physical dual- and quad-core x86 servers.

Level 1 Server...2 cores, 4GB RAM, 70GB storage

Level 2 Server...4 cores, 8GB RAM, 70GB storage

Level 3 Server...8 cores, 16GB RAM, 70GB storage

Each server bundle features:

Dual-/quad-core Intel® Xeon® CPUs

70GB of local RAID 50 storage

Your choice of a Red Hat® Linux® or Windows® Server OS

1 Mbps dedicated bandwidth

2 networks and 8 external IP addresses per client

100 internal IP addresses per network

VPN for 1 site-to-site and 5 remote users per client

24x7 Monitoring and Management

KVM access

The data center facility, described below, is run by RagingWire.

Our Tier IV Data Center

Staff

Multi-disciplined, certified engineering staff and 24x7 support team of IT experts is rigorously trained, has extensive industry expertise, and is committed to clients, standards, and best practices.

World-Class Facility

Tier IV class "A+" 200,000+ square foot data center

Engineered for 99.999% availability

Power and cooling is scalable beyond 200 Watts per sq. ft.

Carrier neutral high-speed Internet, over 20 Gigabits of bandwidth

N+2 minimum system and component redundancy for concurrent maintenance and fault tolerance

On-site, 69kV power substation and well

Financial-grade physical security

Their PUE is not advertised, but given the highly virtualized environment the performance per watt should be high.
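
For readers not steeped in the metric, PUE is simply total facility power divided by IT equipment power. A minimal sketch with assumed example numbers (again, RagingWire does not advertise theirs):

```python
# PUE = total facility power / IT equipment power; assumed example values.
it_load_kw = 1_000        # assumed IT load
facility_kw = 1_600       # assumed total facility draw (cooling, losses, ...)
pue = facility_kw / it_load_kw
print(f"PUE = {pue:.2f}  (1.0 would mean zero facility overhead)")
```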

Environmental Responsibility

At RES, we have a healthy respect for our environment, which is why we have always built common sense and green practices into everything we do. But being environmentally responsible isn't just good for the community and the planet - it's good business. For our efforts, we have received numerous environmental awards, but we don't stop there. We constantly refine our processes to improve our power and resource efficiency, reduce and recycle wastes, and help us to operate more productively.

Efficiency and sustainability are key to everything we do.

Our power conservation efforts have saved more than 4,000,000 kWh of electricity.

We've increased our chilled water plant and cooling efficiency, conserving an additional 250,000 kWh of electricity per month.

Our two chemical-free water treatment systems have eliminated chemical use entirely and reduced our water discharged by 80%.

We recycle 100% of the cardboard, steel, copper, and aluminum we use for construction and other activities. In paper and cardboard recycling alone, it is equivalent to 36+ mature trees per year.

We recycle 100% of our lead acid batteries, over 700,000 pounds worth, and donate the proceeds to the United States Wolf Refuge.

We recycle more than 2,000 pounds of electronic waste per year, from both our own company and as a free service to our valued clients.

Our social and environmental responsibility is an essential part of contributing to and protecting the community we live in.

Read more