Comparing Sun’s PUE/Data Center Disclosure with Google’s

Sun’s PR team sent me some blog entries on their PUE efforts.

One is Sun discussing how they achieved a 1.28 PUE.

I'll show you mine...


So how efficient is your datacenter?

Last month I received some pretty cool news.  The Chill-Off we have been hosting for the Silicon Valley Leadership Group (SVLG) in their datacenter demonstration project was completed.  This was a head-to-head test against APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks and the IBM Rear Door Heat Exchanger. Sun was the host that provided the datacenter, plant, compute equipment, and support.  Lawrence Berkeley National Labs (LBNL) was the group conducting the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the chill-off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.
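For readers new to the metrics: PUE is total facility power divided by IT power, and DCiE is simply its reciprocal expressed as a percentage. Here is a minimal sketch of that arithmetic (the 640 kW and 500 kW readings are made-up figures chosen to land on Sun's 1.28, not actual Sun measurements):

```python
# Minimal PUE/DCiE sketch with illustrative (made-up) meter readings.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw: float, it_load_kw: float) -> float:
    """DCiE = IT equipment power / total facility power, as a percentage."""
    return 100.0 * it_load_kw / total_facility_kw

# Example: 640 kW at the utility meter, 500 kW delivered to IT equipment.
print(pue(640, 500))   # 1.28
print(dcie(640, 500))  # 78.125 (%)
```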

Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others who toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (created by Christian Belady of the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.

We achieved a PUE of 1.28!

Google says they eliminated data for facilities below 5MW.

Such a strong claim demands evidence, especially in light of recent criticism of companies "gaming the numbers." On this page we will explain our measurements in detail to ensure that they are realistic and accurate. It is worth noting that we only show data for facilities with an actual IT load above 5MW, to eliminate any inaccuracies that can occur when measuring small values. This section is aimed at data center experts, but we have tried to make it accessible to a general technical audience as well.

But given Google’s claim of measurement accuracy, it would seem they should be able to accurately measure facilities below 1MW, as Sun has.

Error Analysis

To ensure our PUE calculations are accurate, we performed an uncertainty analysis using the root sum of the squares (RSS) method.  Our uncertainty analysis shows that the overall uncertainty in the PUE calculations is less than 2% (99.7% confidence interval).  Our power meters are highly accurate (ANSI C12.20 0.2 compliant) so that measurement errors have a negligible impact on overall PUE uncertainty.  The contribution to the overall uncertainty for each term described above is outlined in the table below.
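The RSS method Google references is standard uncertainty propagation: for a ratio like PUE, the relative uncertainties of the two power measurements add in quadrature. Here is a hedged sketch of that calculation (the meter class and load figures below are illustrative assumptions, not Google's published data):

```python
import math

# Hedged sketch: propagate independent meter errors into PUE via root sum of squares.
def pue_with_uncertainty(total_kw, total_err_kw, it_kw, it_err_kw):
    """For PUE = total / IT, relative uncertainty = sqrt((e_t/t)^2 + (e_i/i)^2)."""
    pue = total_kw / it_kw
    rel = math.sqrt((total_err_kw / total_kw) ** 2 + (it_err_kw / it_kw) ** 2)
    return pue, pue * rel

# Illustrative: 0.2%-class meters, 6,400 kW facility load, 5,000 kW IT load.
pue, err = pue_with_uncertainty(6400, 6400 * 0.002, 5000, 5000 * 0.002)
print(f"PUE = {pue:.3f} +/- {err:.3f}")  # ~1.280 +/- 0.004 from meter error alone
```

A calculation like this is consistent with Google's point that the meters themselves contribute very little; the bulk of their quoted 2% uncertainty would have to come from other terms in their analysis.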

Here are more technical details on Sun’s PUE calculations.

Sun exposes more details than Google because they want your business.

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation’s physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at dean.nelson@sun.com and I'll tell you how.

What is Google’s motivation for its PUE disclosure?

Read more

James Hamilton Analyzes Google’s Data Center ideas – Water and PUE

James Hamilton analyzes Google’s Data Center publication.

A Small Window into Google's Data Centers

Google has long enjoyed a reputation for running efficient data centers. I suspect this reputation is largely deserved but, since it has been completely shrouded in secrecy, that’s largely been a guess built upon respect for the folks working on the infrastructure team rather than anything that’s been published. However, some of the shroud of secrecy was lifted last week and a few interesting tidbits were released in Google Commitment to Sustainable Computing.

On server design (Efficient Servers), the paper documents the use of high-efficiency power supplies and voltage regulators, and the removal of components not relevant in a service-targeted server design. A key point is the use of efficient, variable-speed fans. I’ve seen servers that spend as much as 60W driving the fans alone. Using high efficiency fans running at the minimum speed necessary based upon current heat load can bring big savings. An even better approach is employed by Rackable Systems in their ICE Cube Modular Data Center design (First Containerized Data Center Announcement) where they eliminate server fans entirely.
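The savings James describes follow from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed. A rough sketch under that cube-law assumption, reusing the 60 W server-fan figure from the paragraph above (the speed set-points are mine):

```python
# Fan affinity law sketch: fan power scales roughly with the cube of fan speed.
def fan_power_w(rated_power_w: float, speed_fraction: float) -> float:
    """Approximate fan power at a fraction of rated speed (cube law)."""
    return rated_power_w * speed_fraction ** 3

# 60 W comes from the server-fan example above; the speed points are illustrative.
for speed in (1.0, 0.8, 0.6, 0.5):
    print(f"{speed:.0%} speed -> {fan_power_w(60, speed):5.1f} W")
# 100% -> 60.0 W, 80% -> 30.7 W, 60% -> 13.0 W, 50% -> 7.5 W
```

That is why running fans at the minimum speed the current heat load allows pays off so disproportionately.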

There are parts of James’s analysis I particularly liked. #1: he discusses water.

It’s good to see water conservation brought up beside energy efficiency. It’s the next big problem for our industry and the consumption rates are prodigious. To achieve efficiency, most centers have cooling towers, which allow them to avoid the use of energy-intensive direct-expansion chillers except under unusually hot and humid conditions. This is great news from an energy efficiency perspective, but cooling towers consume water in two significant ways. The first is evaporative losses, which are hard to avoid in wet tower designs (other, less water-intensive designs exist). The second is caused by the first. As water evaporates from the closed system, the concentrations of dissolved solids and other contaminants present in the supply water left behind by evaporation continue to rise. These high concentrations are dumped from the system to protect it, and this dumping is referred to as blow-down water. Between make-up and blow-down water, a medium-sized, 10MW facility, built to current industry conventions, can go through ¼ to ½ million gallons of water a day.
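To put rough numbers behind make-up and blow-down: evaporation is set by the heat rejected and water's latent heat of vaporization, and blow-down scales with how many cycles of concentration the tower can run before dissolved solids must be purged. A back-of-the-envelope sketch (the heat load, PUE, and cycles-of-concentration values are my assumptions, not James's figures):

```python
# Back-of-the-envelope cooling tower water use; every input here is an assumption.
LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water
LITERS_PER_US_GALLON = 3.785
SECONDS_PER_DAY = 86400

def tower_makeup_gal_per_day(heat_rejected_mw: float, cycles_of_concentration: float) -> float:
    """Estimate make-up water (evaporation + blow-down) for a wet cooling tower."""
    evap_kg_per_s = heat_rejected_mw * 1e6 / LATENT_HEAT_J_PER_KG  # ~1 kg of water ~ 1 liter
    blowdown_kg_per_s = evap_kg_per_s / (cycles_of_concentration - 1)
    liters_per_day = (evap_kg_per_s + blowdown_kg_per_s) * SECONDS_PER_DAY
    return liters_per_day / LITERS_PER_US_GALLON

# Assume a 10 MW IT load at PUE ~2 rejects roughly 20 MW of heat, at 3 cycles of concentration.
print(f"{tower_makeup_gal_per_day(20, 3):,.0f} gallons/day")  # on the order of 300,000
```

That estimate lands squarely in the ¼ to ½ million gallons a day James quotes.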

The paper describes a plan to address this problem in the future by moving to recycled water sources. This is good to see but I argue the industry needs to reduce overall water consumption, whether the source is fresh or recycled. The combination of higher data center temperatures and aggressive use of air-side economization are both good steps in that direction and industry-wide we’re all working hard on new techniques and approaches to reduce water consumption.

Then #2: PUE.

The section on PUE is the most interesting in that they are documenting an at-scale facility running at a PUE of 1.13 during a quarter. Generally, you want full-year numbers since these numbers are very load- and weather-dependent. The best annual number quoted in the paper is 1.15, which is excellent. That means that for every watt delivered to servers, only 0.15W is lost in power distribution and cooling.

This number, with pure air-side cooling and good overall center design, is quite attainable. But, elsewhere in the document, they described the use of cooling towers. Attaining a PUE of 1.15 with a conventional water-based cooling system is considerably more difficult. On the power distribution side, conventional designs waste about 8% to 9% of the power delivered. A rough breakdown of where it goes: three transformers taking 115kV down to 13.2kV, down to 480V, and then down to 208V for delivery to the load. Good transformer designs run around 99.7% efficiency. The uninterruptible power supply can be as poor as 94%, and roughly 1% is lost in switching and conductors. That approach gets us to 8% lost in distribution. We can easily eliminate one layer of transformers and use a high-efficiency bypass UPS. Let’s use 97% efficiency for the UPS. Those two changes will get us to 4% to 5% lost in distribution. Let’s assume we can reliably hit 5% power distribution losses. That leaves us with 10% for all the losses to the mechanical systems. Powering the Computer Room Air Handlers, the water pumps, etc., at only 10% overhead would be both difficult and more impressive.
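James’s distribution arithmetic can be sanity-checked by chaining the conversion efficiencies he quotes. A small sketch of the conventional chain versus the improved one (efficiency values taken from the paragraph above; the stage grouping is my simplification):

```python
# Chained power-distribution efficiency, using the figures quoted above.
def fraction_lost(stage_efficiencies):
    """Fraction of input power lost across a chain of conversion stages."""
    delivered = 1.0
    for eff in stage_efficiencies:
        delivered *= eff
    return 1.0 - delivered

conventional = [0.997, 0.997, 0.997,  # three transformer stages
                0.94,                 # conventional UPS
                0.99]                 # switching and conductors
improved     = [0.997, 0.997,         # one transformer stage eliminated
                0.97,                 # high-efficiency bypass UPS
                0.99]

print(f"conventional chain: {fraction_lost(conventional):.1%} lost")  # ~7.8%
print(f"improved chain:     {fraction_lost(improved):.1%} lost")      # ~4.5%
# With ~5% lost in distribution, a PUE of 1.15 leaves only ~10% for all mechanical loads.
```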

The 1.15 PUE with pure air-side economization in the right climate looks quite reasonable, but powering a conventional, high-scale, air and water, multi-conversion cooling system at this efficiency looks considerably harder to me. Unfortunately, there is no data published in the paper on the approach and whether it was simply attained by relying on favorable weather conditions and air-side economization with the water loops idle.

And last, #3

The paper concludes that “if all data centers operated at the same efficiency as ours, the U.S. alone would save enough electricity to power every household within the city limits of Atlanta, Los Angeles, Chicago, and Washington, D.C.”. This is hard to independently verify without much more information than the paper offers. Most of the techniques employed are not discussed in the paper published last week. If the large service providers like Google, Microsoft, Yahoo, Baidu, Amazon and a handful of others don’t publish the details, the rest of the world’s data centers will never run as efficiently as described in the paper. Only high-scale datacenter operators can afford the R&D programs needed to drive increased efficiency and reduced water consumption. I’m arguing it’s up to all of us working in the industry to publish the details to allow smaller-scale deployments to operate at similar efficiency levels. If we don’t, it’ll continue to be the case that US data centers alone will be needlessly spending enough power to support every household in Atlanta, Los Angeles, Chicago, and Washington DC. Each day, every day.

Hopefully other press and bloggers will read James’s post, and provide other points beyond Google’s marketing efforts.

Read more

Google’s PUE Coverage

It has been interesting watching Google’s PUE and data center claims spread through blogs and news.

I haven’t blogged about Google’s post, even though I saw it covered on other blog sites.

Why? Because I knew Google was going to write a PUE post, and when they finally did, I was disappointed it didn’t provide more facts.  Nothing really new, other than Google claiming to be the most efficient.

As ComputerWorld mentions, the release of the PUE numbers coincided with another marketing event.

This announcement coincides with a speech about energy that CEO Eric Schmidt gave Wednesday evening in San Francisco, in which he proposed a $4.4 trillion clean-energy plan.

It is great that Google has helped to educate thousands of people on PUE and how much energy can be saved.

There are some details, but not enough to help with equipment selection.

I plan on writing more on this and contrasting what others have shared on PUE best practices.

Read more

The Underwater Data Center, a Response to the Risks of Google’s Floating Data Center: Submerge

There has been lots of news coverage of Google’s Floating Data Center patent.

I’ve learned to be a patent skeptic.  Didn’t Google get a patent for containers? Where is the Google container data center?  Can Google sue Microsoft for the idea of using containers in their Chicago Data Center?  The military has been delivering compute in containers for years.

Data Center Knowledge asked whether a data center can weather a hurricane.

Here is another idea Google can patent to survive the threat of a hurricane or tsunami.

Submerge the data center.

The data center is now safe from hurricanes and tsunamis.  Security is better.  Does Greenpeace own a submarine?

So, in the end, what did Google’s container patent achieve? A bunch of speculation, and in reality Google did not build a container data center.

Don’t hold your breath waiting for the Floating Google Data Center.

Read more

Google’s Floating Data Center’s Ability to Survive a Hurricane? Move It

DataCenterKnowledge points to ZDNet’s Larry Dignan in a technical and economic discussion of Google’s floating data center.


Google’s design for floating data centers, described in a patent application, addresses many of the cost issues that make operating a data center so expensive. In Google’s concept, the ocean would provide most of the energy to power and cool the servers and equipment, while also eliminating expenses for real estate and property taxes. As Larry Dignan points out, the design addresses many of the budget-busting features of the modern data center. “I’d call it brilliant engineering, but the financial engineering could be even more impressive,” Larry writes.

And Rich brings up a good issue of disaster survival.

Would a Google data barge be able to weather a hurricane? Or a tsunami? In the past two weeks hurricanes have struck both the Gulf and Atlantic coasts of the U.S., and Hurricane Ike is on track to threaten the western Gulf later this week. At first glance, the Google data barges seem even more exposed than the data center cargo ships proposed by IDC, which would be docked in harbors.

The simple answer is to move it. Microsoft could do the same thing with containers in a data center threatened by a tornado.  Based on conversations with Google folks who are smart and economically aware, my assumption is that they made the tradeoff design decisions and concluded that designing in survivability for a hurricane is not money well spent.

The filing also cites the mobile nature of the floating data center (and particularly the data center containers) as an advantage over traditional data centers. Google notes that “the mooring fields may be moved, such as when demand for computing or telecommunications power moves, when sea conditions change (e.g., seasonally) or when a time period for legal occupation of an area expires.”

Read more