Yes, PUE is the Next Battleground

Microsoft’s Steve Clayton posted an interesting question.

Is PUE the new battleground?

I’m reading a lot about data centers of late – so much so that I’m even spelling it the US way already (sigh). What is becoming increasingly clear to me though is that PUE may well be the new battleground between some of the industry heavyweights.

In very basic terms, PUE (Power Usage Effectiveness) is the ratio of power incoming to a data center to power used. The theoretical ideal is 1.0 of course which means you’re not wasting any energy. As Mike Manos points out this is all part of the “Industrialization of IT” that he and our GFS team works on. Google and Sun have both been waxing lyrical about PUE of late with some impressive numbers, particularly from Google who cite a PUE of 1.13 in one datacenter. Very impressive indeed.

Yes, PUE is the next battleground.

Why? How many servers you have is not a number to be proud of. As Google has found, being the largest data center operator brings a critical eye.


A better number for the market to relate to is PUE. It is the closest thing we have to an MPG rating. Ideally there would be a performance-per-watt metric, but given the range of work done in data centers, I can’t think of what the performance measure would be.

Last week I talked to Google’s Erik Teetzal about Google’s PUE calculations, and I’ve been thinking about a good post based on our conversation. What I think surprised Google is how much coverage they received regarding their PUE. The main benefit of Google’s press is that thousands more people have now heard the term PUE.

As Steve Clayton mentions, Microsoft’s Mike Manos says the Container Data Center has a PUE of 1.22. Google’s is 1.21. Microsoft’s number is higher, though, because they count their office space in the overhead to run the data center. I think Microsoft is willing to count the office space given they have a third of the reported number of employees Google has in their data centers.
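To make the arithmetic concrete, here is a minimal sketch of how PUE is computed and how counting office space in the facility total nudges the number up. The kilowatt figures are hypothetical, not Microsoft’s or Google’s actual loads.

    # PUE = total facility power / IT equipment power; the theoretical ideal is 1.0
    def pue(total_facility_kw, it_kw):
        return total_facility_kw / it_kw

    it_load_kw = 10_000        # hypothetical IT load (servers, storage, network)
    cooling_power_kw = 2_000   # hypothetical cooling, UPS and distribution losses
    office_kw = 100            # hypothetical attached office space load

    print(round(pue(it_load_kw + cooling_power_kw, it_load_kw), 2))              # 1.2
    print(round(pue(it_load_kw + cooling_power_kw + office_kw, it_load_kw), 2))  # 1.21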

Sun has achieved a PUE of 1.28 in their data centers, and I need to talk to Sun’s Dean Nelson to get more details.

Noticeably absent from the PUE discussion are IBM and HP.  How efficient are the data centers that IBM and HP build? What is their PUE?

Should there be an independent auditing of PUE to ensure accuracy?

A good side effect is that data center vendors are thinking about how to position their products to improve PUE.


Modular Data Center Cooling Plant – McQuay & TAS

Mission Critical had a post on modular cooling plants. The post is written by a McQuay employee, so of course it discusses their equipment.

water-cooled chiller

The modular central plant combines a water-cooled chiller with pumps, cooling tower and interconnected piping.

Since June 2007, a total of six Modular Central Plants have provided chilled water to 59 computer room air conditioning units in the 52,500 square foot data center. Each of the modules consists of a 500-ton centrifugal compressor water chiller pre-engineered and pre-assembled with pumps, piping, cooling tower, control panel and associated water treatment system.


Kirchner’s original goal of increased cooling capacity to meet Herakles Data’s projected growth was not only achieved, but also surpassed. “We provide N+1 business solutions for our customers, meaning we meet their needs plus provide redundancy,” he said. “Today, however, we have surpassed that goal because we typically run only two of the four original modular central plants. That results in 2N cooling capacity today available to our data center customers.”


In addition to fast-track construction and commissioning, the new central plants resulted in impressive energy savings compared to the old system. “Our old system used 3,600 kilowatt-hours (kWh) per ton a day; the new system uses 2,800 kWh per ton a day for a 22 percent reduction in energy,” Stancil said. That reduced energy usage earned Herakles Data a $50,000 rebate from the Sacramento Municipal Utility District.
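As a quick check on the quoted figures, the reduction works out as in this sketch (only the numbers from the article are used):

    old_kwh_per_ton_day = 3600   # old chiller system
    new_kwh_per_ton_day = 2800   # new modular central plants

    reduction = (old_kwh_per_ton_day - new_kwh_per_ton_day) / old_kwh_per_ton_day
    print(f"{reduction:.0%}")    # 22%, matching the reported energy savings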

Another alternative is TAS’s system.

Chilled Water Comfort Cooling Systems

TAS designs, manufactures, installs / commissions and services modular chilled water plants.  Standard product designs range from 200 to 8000 tons for individual units.  TAS' highly engineered chilled water plants are designed to facilitate ease of installation and maintenance.  TAS takes a holistic systems approach to standard product design.  We engineer our chilled water systems for optimized efficiency performance and since we are a single source provider from design to commissioning, we guarantee efficiency performance at system commissioning.

Product scope includes all components normally found in a chilled water plant.  Key components include chillers, chilled water pumps, condenser water pumps, MCCs, digital controls, full enclosure, cooling tower and cooling tower support structure.  TAS manufactures its chilled water systems in an ISO 9001:2000 quality process factory environment.  Given our ability to control all facets of product manufacture and mitigate project schedule risk, such as weather delays and skilled labor shortages, TAS is able to guarantee delivery schedule with LDs (liquidated damages) for non-performance.

Given the economic pressure to cut costs, every data center operator should evaluate a modular cooling system as an alternative.


Not a Bogus Green Data Center Initiative, says Intel

Intel’s Brently Davis, manager of Intel’s data center efficiency project, is quoted in Baseline Magazine.

This isn’t some bogus green data center initiative, executives say. This is an efficiency exercise that saves the green that matters most in these tough economic times.

“Our main concern is not about being green,” says Brently Davis, manager of Intel’s data center efficiency project. “It is more so about being efficient, figuring out how we can run our computing environment better.”

The problems Intel faces are the same as any other company’s.

Intel developed Davis’ group within its IT operations department just so it could tackle such issues. “You’ve always got to pull people in to help look horizontally,” he says. “A lot of times, the focus of IT groups is myopic: They like to concentrate on their own vertical. But you’ve got to have a horizontal overview, and I think that’s what we try to help them do.”

Their solution was to get finance people on board.

One of the first things that Davis did to make that happen was to bring in the financial folks to get a clear picture of spending beyond the overarching price tag—to see where the dollars were going. That’s the key step in any efficiency project, he says.

“You can’t do this if you don’t understand what you’re spending,” Davis says. “We brought in our finance team and said, ‘Pull all of this stuff together so we can figure out what we’re spending. Put these numbers together, get it validated and make sure it makes sense.’”

Intel addresses these issues with standardization.

Standardization Begets Better Utilization

With the business case laid out and the “but why-ers” on board, Intel is now putting all the logistical puzzles into place. The first step has been to work on standardizing the environments and practices to reduce redundancies and improve the way all the data centers work together. Because, as Davis says, when Intel surveyed the data center landscape in 2006, it had 150 centers and “there was no synergy to anything; we were all over the place.”

And executes with specific goals.

This was enabled through a number of strategies, including virtualization, grid computing and cloud computing. And it was coupled with efforts to do a better job refreshing servers—replacing them with fewer servers along the way.

Before the program started, Davis reported that by 2014, Intel was on track to move up from 90,000 servers to 225,000 servers. The goal, he says, is to keep that number at 100,000 in six years’ time and reduce the cost and power draw of each of these servers significantly.

“The only way we could do that was by getting off the old hardware,” he recalls. “We were just as guilty as everyone else. We were sitting on servers that were possibly seven or eight years old. We needed to start refreshing those servers to reduce the power consumption in the data center.”


Sun’s Software Labs Has a PUE of 1.28

I had blogged before about Sun’s Modular Data Center.

What I missed was the entry on page 15 of the document where Sun discusses how they consolidated 32 different server labs into one space and achieved a PUE of 1.28.

Efficiency in Sun’s Santa Clara Software Datacenter
In 2007 Sun completed the largest real-estate consolidation in its history. We closed our Newark, California campus as well as the majority of our Sunnyvale, California campus shedding 1.8 million ft2 (167,000 m2) from our real-estate portfolio. One example in this consolidation is the Software organization’s datacenter, where 32,000 ft2 (2,973 m2) of space distributed across 32 different rooms was consolidated into one 12,769 ft2 (1,186 m2) datacenter. In the end, 405 racks were configured in this space, using an average of 31.5 ft2 (2.9 m2) per rack. The initial power budget was 2 MW (out of 9 MW overall), with the ability to expand to 4 MW in the future. This design supports today's current average of 5 kW with the ability to grow to 9 kW average per rack. Keep in mind that even though the averages are 5 kW and 9 kW, racks ranging from 1 kW to 30 kW can be deployed anywhere in this datacenter.


We measured the power usage effectiveness of the Santa Clara Software organization’s datacenter, and if any single number testifies to the value of our modular design approach, it is the PUE of 1.28 that we were able to achieve. By using our modular approach, which includes using a high-efficiency variable primary-loop chiller plant, close-coupled cooling, efficient transformers, and high-efficiency UPS, an astonishing 78 percent of incoming power goes to the datacenter’s IT equipment.
Figure 4. Our Santa Clara Software organization’s datacenter achieved a PUE of 1.28, which translates to a savings of $402,652 per year compared to a target datacenter built to a PUE of 2.0.



In traditional raised floor datacenters the efficiencies worsen as densities increase. Our Pod design is efficient from day one, and remains efficient regardless of density increases.

Sun compares its savings to a baseline PUE of 2.0, but the previous PUE was most likely over 2.0, given the 32 labs were scattered across conventional office spaces, not a purpose-built data center.
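Sun does not publish the inputs behind the $402,652 figure, but the comparison presumably follows arithmetic like the sketch below. The IT load and electricity rate are my own placeholder assumptions, not Sun’s numbers; they simply show the reported savings are plausible.

    HOURS_PER_YEAR = 8760

    def annual_overhead_cost(pue, it_load_kw, rate_per_kwh):
        # energy spent on everything other than the IT equipment itself
        overhead_kwh = it_load_kw * HOURS_PER_YEAR * (pue - 1.0)
        return overhead_kwh * rate_per_kwh

    it_load_kw = 800    # placeholder average IT load, well under the 2 MW initial budget
    rate = 0.08         # placeholder utility rate in $/kWh

    savings = annual_overhead_cost(2.0, it_load_kw, rate) - annual_overhead_cost(1.28, it_load_kw, rate)
    print(f"${savings:,.0f} per year vs. a PUE 2.0 baseline")   # about $404,000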


Note that the Santa Clara datacenter is a Tier 1 facility, by choice, with only 20 percent of the equipment on UPS. Choosing Tier 1 was a corporate decision to match the datacenter with the functions that it supports. The design approach is the same for our Tier 3 datacenters with one exception: the amount of redundancy. As you increase the redundancy (to N+1 or 2N) you can lose efficiency. If you make the correct product and design decisions that make efficiency the highest priority, you can maintain a PUE of 1.6 or less in a Tier 3 datacenter.
