Projected PUE 1.18 for NCSA Blue Waters Data Center

I blogged yesterday on the Univ of Illinois NCSA Blue Waters supercomputer.

Univ of Illinois NCSA facility drops UPS for energy efficiency and cost savings, bldg cost $3 mil per MW

Below are a lot of different parts of what Univ of Illinois's NCSA facility is building to host the IBM Blue Waters supercomputer. I've seen lots of people talk about energy efficiency and cost savings. But the things that got my attention are that this facility dropped the UPS and that it is being built for $3 million per MW for a 24 MW facility.

The one thing I was looking for and couldn't find was what the PUE would be for the data center. Thanks to Google Alerts, a person from the NCSA contacted me, and I asked for the PUE of the facility. They sent me this article that mentions PUE. The answer is 1.18:

With PCF and Blue Waters, we will achieve a PUE in the neighborhood of about 1.18.
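To put 1.18 in context: PUE is just total facility power divided by IT equipment power, so a PUE of 1.18 means only 0.18 watts of cooling and power-distribution overhead per watt of IT load. Here is a minimal sketch of the arithmetic; the 20 MW IT load is a hypothetical figure for illustration, not a number from the article.

```python
def pue(total_facility_mw: float, it_mw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_mw / it_mw

it_load_mw = 20.0             # hypothetical IT load, for illustration only
total_mw = it_load_mw * 1.18  # facility draw implied by a PUE of 1.18
overhead_mw = total_mw - it_load_mw

print(f"IT: {it_load_mw:.1f} MW, facility: {total_mw:.1f} MW, overhead: {overhead_mw:.1f} MW")
print(f"PUE check: {pue(total_mw, it_load_mw):.2f}")
```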

The article is an interview with IBM Fellow Ed Seminaro, chief architect for Power HPC servers at IBM.  There are actually some excellent points that Ed makes.

Q: What are the common mistakes people make when building a data center?

A: One of the most common mistakes I see is designing the data center to be a little too flexible. It is easy to convince yourself that, when you build a building, you really want to build it to accommodate any type of equipment, but this is at the cost of power efficiency.

As I mentioned in my post yesterday, the building cost is $3 million per MW, much lower than a typical data center.

Another is cost of building construction. Some people spend enormous sums, but really, it gets back to can you design the IT equipment so that it doesn't require too much special capability. And what that really means is that you don't have to build a very special facility, you just have to be able to build the general power and cooling capabilities you need and a good sturdy raised floor. This can save a phenomenal amount of money.
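Ed's point about construction cost is easy to quantify. Here is a quick back-of-the-envelope sketch using the $3 million per MW and 24 MW figures from the post; the $10 million per MW "typical" comparison is my assumption, not a number from the article:

```python
capacity_mw = 24                  # facility power from the post
ncsa_cost_per_mw = 3_000_000      # $3 million per MW, from the post
typical_cost_per_mw = 10_000_000  # assumed typical build cost, for comparison only

ncsa_total = capacity_mw * ncsa_cost_per_mw
typical_total = capacity_mw * typical_cost_per_mw

print(f"NCSA build: ${ncsa_total / 1e6:.0f}M")
print(f"Hypothetical typical build: ${typical_total / 1e6:.0f}M")
print(f"Difference: ${(typical_total - ncsa_total) / 1e6:.0f}M")
```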


Oregon State Data Center learns from its first data center, a bit of humor

I saw this article about Oregon's state data center. I started reading expecting interesting data center ideas, but I started to laugh: this was the state of Oregon's first true data center, and they thought they could run a data center with unqualified staff and consolidate servers across organizational boundaries.

Here is the background.

The Lesson from Oregon's Data Center: Don't Promise Too Much

12/04/2009

State governments across the country are making big changes in their IT departments. They're centralizing their own state data systems in a push to save money. The state of Washington is building a $300 million data center in Olympia. Oregon undertook a similar project a few years ago, but it's been criticized for failing to produce the promised financial savings. Salem Correspondent Chris Lehman found lessons from Oregon.

The State Data Center is a generic looking office building on the edge of Salem. Inside are the digital nerve centers of 10 state agencies, including Human Services, Corrections and Transportation. This mammoth information repository is so sensitive, you can't get very far before you get to something that operations manager Bryan Nealy calls the "man trap." It's kind of like an air lock, you have to clear one set of doors before you can get through the next set.

And the story continues.

They have a physical security system.

Bryan Nealy: "You'll notice there are card readers on every door in the secure part of the data center. That way we can give people access only to the areas they need to go into. It's very granular as far as where people can get. This is the command center. This is manned 24–7, 365."

Yet their goal was to consolidate across agencies, which would cause huge workflow and security problems.

Koreski says the original business case for this $63 million facility made assumptions that turned out to be impractical. For example, planners figured they could combine servers from different agencies just by putting them under the same roof. But that's not what happened. Koreski says you can't do the two things at once: physically move the servers and combine their functions.

Based on this assumption, they promised cost savings.

Three years after it opened, data managers are still trying to reduce the number of physical machines at the Oregon Data Center. That ongoing work is one of the reasons Data Center Director John Koreski concedes the facility isn't on track to meet the original goal of saving the state money within the first five years.

John Koreski: "It's not even close."

So data center operations is dancing around the fact that they didn't save money, framing it instead as reducing future costs.

And that change has meant the economies of scale haven't materialized as fast as once thought. Koreski took the reins of the Data Center in January. His predecessor left after a scathing audit from the Oregon Secretary of State's office last year. It said, quote, "It is unlikely that the anticipated savings will occur." But Director Koreski insists the Data Center is saving the state money.

John Koreski: "What our consolidation efforts resulted in was a cost avoidance, as opposed to a true cost savings where we actually wrote a check back to the Legislature."

Luckily, Intel and Moore's Law saved their ass, even though they are making it seem like the data center addresses budget issues.

In other words, Koreski says the Data Center is growing its capacity at a faster rate than it's growing its budget. That explanation computes for at least one analyst. Bob Cummings works in the Legislative Fiscal Office. It's his job to make sure the numbers add up for major state technology projects. He jumped into the Data Center fray as soon as he was hired last summer, and what Cummings found shocked him.

The analyst from the Legislative Fiscal Office calls the rationale for the data center bullshit.

Bob Cummings: "It was the right thing to do. However, the rationale for doing it, and the baseline cost estimates and stuff for doing it, were all b–––––––. They were all wrong. They were all low."

Then it gets funnier.

Cummings says the state of Oregon failed to take into account one key detail: Washington already had a data center and is building a bigger one. In Oregon, no one with the state had ever run a Data Center before.

We have never done this before, but our first try was a great job.

Bob Cummings: "I mean, we had to build everything from scratch. And by the way, we did a great job of building a data center but didn't have anybody to run it, didn't have any procedures, no methods. We outsourced to a non–existent organization."

These guys are amateurs.

Oregon Department of Administrative Services Director Scott Harra echoed this in his response to the Secretary of State's audit. Harra wrote that the consolidation effort was hampered because it required skills and experience that did not previously exist in Oregon's state government. After last year's audit, Democratic State Representative Chuck Riley led a hearing that looked into the Data Center. He says he's convinced Data Center managers are saving the state money, but:

Rep. Chuck Riley: "The question is, did they meet their goals. And the answer is basically no, they didn't meet their goals. They over promised."

And that's the basic message Riley and others have for developers of Washington's data center: Keep expectations realistic. I'm Chris Lehman in Salem.

So, for all of you looking at Oregon as a state to put a data center in: you can skip a trip to the Oregon state data center, as I doubt you will hear this story. Although it would be entertaining to hear an Oregon politician explain data center operations.


NREL squeezes a Data Center in a Net Zero Building

The WSJ has an article about the National Renewable Energy Laboratory (NREL) and its difficult task of being a Net Zero Building. Here are the nuggets from the article I found interesting.


"Traditional architecture is design first, then figure out how to make it work," says Rich von Luhrte, president of RNL, which has offices in Denver. "This project reverses that mindset: Energy drives the design."

The building, in fact, will control a good deal of the working environment. Some windows will open and close automatically as outdoor air warms and cools throughout the day. Other windows will be left to employees to operate—but the building will ping occupants with reminders, flashing alerts on their laptops (desktops use too much energy) when it is time to open or close particular panes.


The cubicles were engineered to save energy down to the smallest detail; even the phones, for instance, are special models that use 2.8 kilowatt-hours of electricity a month, compared with 10.8 kilowatt-hours for standard models.

Another striking feature of the NREL building: It will have no central air or heat and no fixed thermostat. The temperature will fluctuate during the day, though it shouldn't go below 68 degrees or above 80.
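Conceptually, the automated windows reduce to a simple control loop around that 68-to-80-degree band. This is only a sketch of the idea, assuming the building opens windows when cooler outside air can help; the thresholds come from the article, but the logic and the names are hypothetical:

```python
MIN_TEMP_F = 68.0  # lower bound mentioned in the article
MAX_TEMP_F = 80.0  # upper bound mentioned in the article

def window_action(indoor_f: float, outdoor_f: float) -> str:
    """Decide whether opening a window helps keep the space in the 68-80 F band."""
    if indoor_f > MAX_TEMP_F and outdoor_f < indoor_f:
        return "open"   # free cooling: outside air is cooler than inside
    if indoor_f < MIN_TEMP_F:
        return "close"  # keep heat in
    return "hold"

print(window_action(indoor_f=82.0, outdoor_f=70.0))  # -> open
print(window_action(indoor_f=66.0, outdoor_f=50.0))  # -> close
```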


NREL plans to report on its setbacks, as well as its successes, in scientific journals and presentations to developers, architects and engineers. Office buildings account for 18% of U.S. energy consumption, so any lessons about efficiency learned here could "have a huge impact on our nation's energy security," says Jeffrey Baker, director of the Energy Department's local field office.

Where is the data center?  The WSJ doesn’t mention the data center.  But the official NREL press release does.

The RSF will be a 219,000 square foot facility supporting more than 800 Laboratory staff, along with an energy efficient information technology data center.

Then came the large new data center, vital to the Laboratory's significant and growing computational needs, but more than what a typical office building would include.

Data centers usually have voracious energy appetites. But this late addition still had to fit with the RSF concept.

Researchers came up with a combination of evaporative cooling, outside air ventilation, waste heat capture and more efficient servers to reduce the center's energy use by 50 percent over traditional approaches.

Because the data center serves the entire Laboratory campus and not just the RSF, an energy allowance was added to reflect the exception to the project. Now the RSF energy use intensity including the data center is 35.1 kBtus/sf/year. That's still better than most of today's energy efficient buildings and well under half the energy used by a similar building built to code for the same budget.

Also for climate control, a dynamic network of automatically controlled windows, evaporative cooling, radiant heating and cooling, window glazing and heat recovery from the data center.

How efficient is the new RSF building?

Comparison of Building Energy Use Intensity:
  • Average US office building: 90 kBtu/sf/year
  • ASHRAE code for new commercial space: 55 kBtu/sf/year
  • Chesapeake Bay Foundation, Annapolis, Md.: 40 kBtu/sf/year
  • Big Horn Hardware, Silverthorne, Colo.: 40 kBtu/sf/year
  • NREL RSF: 35.1 kBtu/sf/year, including the data center
  • NREL Thermal Test Facility: 29 kBtu/sf/year
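A quick script makes the spread in that list obvious; the numbers are copied straight from the list above:

```python
eui_kbtu_per_sf_year = {
    "Average US office building": 90.0,
    "ASHRAE code, new commercial space": 55.0,
    "Chesapeake Bay Foundation": 40.0,
    "Big Horn Hardware": 40.0,
    "NREL RSF (incl. data center)": 35.1,
    "NREL Thermal Test Facility": 29.0,
}

baseline = eui_kbtu_per_sf_year["Average US office building"]
for name, eui in eui_kbtu_per_sf_year.items():
    pct_below = 100 * (1 - eui / baseline)
    print(f"{name}: {eui} kBtu/sf/yr ({pct_below:.0f}% below the average office)")
```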

Here is a video of the NREL Thermal Test Facility.


What most will miss in EPA’s GHG announcement, impact on water and power infrastructure

It is pretty cool that you don't have to be at an official press event on Dec 7, 2009 to see news events like the EPA's GHG announcement. I could watch a live feed through MSNBC.

The official press announcement warns of dangers to health and the environment, but in the report there are impacts to water and power infrastructure, both of which you need for data centers.

I was able to get to the official climate change page http://www.epa.gov/climatechange/endangerment.html

Endangerment and Cause or Contribute Findings for Greenhouse Gases under the Clean Air Act


U.S. Environmental Protection Agency Administrator Lisa P. Jackson press briefing – Live Streaming available through www.epa.gov.

Action

On December 7, 2009, the Administrator signed two distinct findings regarding greenhouse gases under section 202(a) of the Clean Air Act:

  • Endangerment Finding: The Administrator finds that the current and projected concentrations of the six key well-mixed greenhouse gases--carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6)--in the atmosphere threaten the public health and welfare of current and future generations.
  • Cause or Contribute Finding: The Administrator finds that the combined emissions of these well-mixed greenhouse gases from new motor vehicles and new motor vehicle engines contribute to the greenhouse gas pollution which threatens public health and welfare.

These findings do not themselves impose any requirements on industry or other entities. However, this action is a prerequisite to finalizing the EPA's proposed greenhouse gas emission standards for light-duty vehicles, which were jointly proposed by EPA and the Department of Transportation's National Highway Traffic Safety Administration on September 15, 2009.

Going through the findings document, what I found very interesting is the water section. So even though everybody thinks this is about GHGs, the potential effect on the water supply is huge. Section 11 of the report covers water. Here is Section 11(d):

11(d) Implications for Water Uses

There are many competing water uses in the United States that will be adversely impacted by climate change impacts to water supply and quality. Furthermore, the past century is no longer a reasonable guide to the future for water management (Karl et al., 2009). The IPCC reviewed a number of studies describing the impacts of climate change on water uses in the United States that showed:

  • Decreased water supply and lower water levels are likely to exacerbate challenges relating to navigation in the United States (Field et al., 2007). Some studies have found that low-flow conditions may restrict ship loading in shallow ports and harbors (Kundzewicz et al., 2007). However, navigational benefits from climate change exist as well. For example, the navigation season for the North Sea Route is projected to increase from the current 20 to 30 days per year to 90 to 100 days by 2080 (ACIA, 2004 and references therein).

  • Climate change impacts to water supply and quality will affect agricultural practices, including the increase of irrigation demand in dry regions and the aggravation of non-point source water pollution problems in areas susceptible to intense rainfall events and flooding (Field et al., 2007). For more information on climate change impacts to agriculture, see Section 9.

  • The U.S. energy sector, which relies heavily on water for generation (hydropower) and cooling capacity, will be adversely impacted by changes to water supply and quality in reservoirs and other water bodies (Wilbanks et al., 2007). For more information on climate change impacts to the energy sector, see Section 13.

  • Climate-induced environmental changes (e.g., loss of glaciers, reduced river discharge in some regions, reduced snow fall in winter) will affect park tourism, winter sport activities, inland water sports (e.g., fishing, rafting, boating), and other recreational uses dependent upon precipitation (Field et al., 2007). While the North American tourism industry acknowledges the important influence of climate, its impacts have not been analyzed comprehensively.

  • Ecological uses of water could be adversely impacted by climate change. Temperature increases and changed precipitation patterns alter flow and flow timing. These changes will threaten aquatic ecosystems (Kundzewicz et al., 2007). For more information on climate change impacts on ecosystems and wildlife, see Section 14.

  • By changing the existing patterns of precipitation and runoff, climate change will further stress existing water disputes across the United States. Disputes currently exist in the Klamath River, Sacramento Delta, Colorado River, Great Lakes region, and Apalachicola-Chattahoochee-Flint River system (Karl et al., 2009).

Energy gets a section of its own, Section 13. It is good to see the EPA put water before energy infrastructure.

13(b) Energy Production

Climate change could affect U.S. energy production and supply a) if extreme weather events become more intense, b) where regions dependent on water supplies for hydropower and/or thermal power plant cooling face reductions or increases in water supplies, c) where changed conditions affect facility siting decisions, and d) where climatic conditions change (positively or negatively) for biomass, wind power, or solar energy production (Wilbanks et al., 2007; CCSP 2007a).

Significant uncertainty exists about the potential impacts of climate change on energy production and distribution, in part because the timing and magnitude of climate impacts are uncertain. Nonetheless, every existing source of energy in the United States has some vulnerability to climate variability. Renewable energy sources tend to be more sensitive to climate variables, but fossil energy production can also be adversely affected by air and water temperatures, and the thermoelectric cooling process that is critical to maintaining high electrical generation efficiencies also applies to nuclear energy. In addition, extreme weather events have adverse effects on energy production, distribution, and fuel transportation.

The official press release is here.

EPA: Greenhouse Gases Threaten Public Health and the Environment / Science overwhelmingly shows greenhouse gas concentrations at unprecedented levels due to human activity

Release date: 12/07/2009

Contact Information: Cathy Milbourn, Milbourn.cathy@epa.gov, 202-564-7849, 202-564-4355; En español: Lina Younes, younes.lina@epa.gov, 202-564-9924, 202-564-4355

WASHINGTON – After a thorough examination of the scientific evidence and careful consideration of public comments, the U.S. Environmental Protection Agency (EPA) announced today that greenhouse gases (GHGs) threaten the public health and welfare of the American people. EPA also finds that GHG emissions from on-road vehicles contribute to that threat.


The EPA provided sound snippets as well.

Speaker: Lisa P. Jackson
EPA Administrator

Sound bite 1 (MP3, 0:11 secs, 360 KB)
Transcript: Today, EPA announced that greenhouse gases threaten the health and welfare of the American people. We also found that greenhouse gas emissions from on-road vehicles contribute to that threat.

Sound bite 2 (MP3, 0:15 secs, 500 KB)
Transcript: The accumulation of CO2 and other greenhouse gases in the atmosphere can lead to hotter, longer heat waves that threaten the health of the sick, the poor, the elderly - that can increase ground-level ozone pollution linked to asthma and other respiratory illnesses.

Sound bite 3 (MP3, 0:15 secs, 500 KB)
Transcript: Today’s announcement, on its own, does not impose any new requirements on industry. But, today’s announcement is the prerequisite for strong new emissions standards for cars and trucks: the ones the president announced last spring.

Sound bite 4 (MP3, 0:22 secs, 700 KB)
Transcript: Today’s finding is based on decades of research by hundreds of researchers. The vast body of evidence not only remains unassailable, it’s grown stronger, and it points to one conclusion: greenhouse gases from human activity are increasing at unprecedented rates and are adversely affecting our environment and threatening our health.


Univ of Illinois NCSA facility drops UPS for energy efficiency and cost savings, bldg cost $3 mil per MW

Below are a lot of different parts of what Univ of Illinois's NCSA facility is building to host the IBM Blue Waters supercomputer. I've seen lots of people talk about energy efficiency and cost savings. But the things that got my attention are that this facility dropped the UPS and that it is being built for $3 million per MW for a 24 MW facility.

How can this be done? I think a key contributor is that IBM's computer architects were involved to help make sure the building was designed to Blue Waters' needs.

Maybe one of these days I can visit the facility in Urbana-Champaign, but I can learn a lot just from knowing where to look for information on the web.

CNET News has an article on IBM's Blue Waters supercomputer at the University of Illinois' National Center for Supercomputing Applications (NCSA). But the article doesn't have many details about the building. I've had a few discussions with IBM's supercomputing folks, and I know they have put a lot of work into the buildings, but it can sometimes be hard to get the information. The good thing is that, since the project is run by the Univ of Illinois, there is public information you can get to, like here.

By William Kramer
Deputy Project Director, Blue Waters

The computational science and engineering community requires five attributes from the systems they use and the facilities that provide those systems. These attributes deliver systems that efficiently and productively enhance the scientists' ability to achieve novel results. They are performance, effectiveness, reliability, consistency, and usability (which I refer to as the PERCU method). This is a holistic, user-based approach to developing and assessing computing systems, in particular HPC systems. The method enables organizations to use flexible metrics to assess the features and functions of HPC systems and, if they choose to purchase systems, assess them against the requirements negotiated with the vendor.


Here is a video of the raised floor being built out.


But wanting more, I dug around for details about the site. Here they are. Note the last bullet: no UPS.

Energy efficiency is an integral part of the Blue Waters project and the Petascale Computing Facility. The facility will:

  • Achieve LEED Silver certification, with LEED Gold as the goal.
  • Rely heavily on more efficient water cooling for the systems it houses.
  • Take advantage of an on-site tower to chill water for cooling the compute systems. This will reduce energy consumption by using the outside air to chill water during the cold winter months.
  • Take advantage of the campus' highly reliable electricity supply, avoiding the need for the standard back-up Uninterruptible Power Supply (UPS). Eliminating the UPS saves equipment costs, minimizes floor space used, and increases energy efficiency because systems that employ a UPS convert AC to DC and back, incurring substantial energy losses.
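That last bullet is worth quantifying. A double-conversion UPS loses energy going from AC to DC and back; here is a rough sketch of what eliminating that loss could be worth per year. The 24 MW feed is from the post, but the IT load, the 10% loss figure, and the power price are all my assumptions; the source only says the losses are "substantial":

```python
hours_per_year = 8760
hypothetical_it_mw = 15   # assumed steady IT load that would sit behind a UPS
ups_loss_fraction = 0.10  # assumed AC->DC->AC double-conversion loss
cost_per_mwh = 60         # assumed bulk power price, $/MWh

lost_mwh = hypothetical_it_mw * ups_loss_fraction * hours_per_year
print(f"Energy lost in UPS conversion: ~{lost_mwh:,.0f} MWh/year")
print(f"Rough cost of that loss: ~${lost_mwh * cost_per_mwh:,.0f}/year")
```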

Also, Blue Waters takes water directly to the IT equipment.

And how does IBM keep this dense collection of ultrafast processors cool? In a word, water. "We actually went a bit further environmentally," said Ed Seminaro, an IBM Fellow who is involved with the University of Illinois project. "We took a lot of the infrastructure that's typically inside of the computer room for cooling and powering and moved the equivalent of that infrastructure right into that same cabinet with the server, storage, and interconnect hardware."

Seminaro continued: "The whole rack is water-cooled. We actually water-cool the processor directly to pull the heat out. We take it right to water, which is very power efficient," he said.

In the video below, John Melchi discusses the building and how it was designed to have efficient power and cooling systems. Here is a transcript of his remarks.

One of the things you don’t think about when you look at a facility like this is the fact that the computer architect has been involved in the design of the building. So IBM has just been a tremendous partner and collaborator in helping Illinois and NCSA ensure that the Petascale Computing Facility will meet the needs of Blue Waters.

Specifically, we’ve made sure there’s enough space, power, and cooling. At the level of Blue Waters, you’re talking about substantial amounts of infrastructure to make a computer and a project like this work.

From the beginning the U of I and NCSA intended to build a data center that was a multi‐use facility. We have the ability to provide 5,400 tons of chilled water to the building. We have 24 megawatts of power coming in. That’s substantially more than the Blue Waters system is going to need. So we’re very well positioned to bring in new air‐cooled systems to the Petascale Computing Facility that will enable U of I researchers and researchers across the country to do their science.
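Those two capacity numbers can be cross-checked. A refrigeration "ton" is 12,000 BTU/hr, roughly 3.517 kW of heat removal, so 5,400 tons of chilled water works out to about 19 MW of cooling against the 24 MW power feed. A quick sanity check:

```python
TON_TO_KW = 3.517           # 1 refrigeration ton = 12,000 BTU/hr of heat removal
chilled_water_tons = 5_400  # from the transcript
power_feed_mw = 24          # from the transcript

cooling_mw = chilled_water_tons * TON_TO_KW / 1000
print(f"Cooling capacity: {cooling_mw:.1f} MW thermal vs {power_feed_mw} MW power feed")
```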

But it's not just the building that has changed to accommodate Blue Waters; the applications are changing as well.

The Blue Waters staff is now working with about 20 large science teams to start revising their application codes to take full advantage of the Blue Waters features. Much of the work will enable codes to run well and at large scale on Blue Waters, but the work can also be applied to other systems in the future. We are doing this with simulation of the machine itself, application and system performance modeling with premier modeling groups, and early access to prototype systems and software. Over time, we will engage with other science areas as they are allocated time on Blue Waters.

CNET News' article:

IBM: Envisioning the world's fastest supercomputer

IBM will release a radical new chip next year that will go into a University of Illinois supercomputer in a quest to build what may become the world's fastest supercomputer.

That university's supercomputer center is a storied place, home to both famous fictional and real supercomputers. The notorious HAL 9000 sentient supercomputer in "2001: A Space Odyssey" was built in Urbana, Illinois, presumably on the University of Illinois Urbana-Champaign campus.

Power7 chip die

The Power7 chip die.

(Credit: IBM)

Though not aspiring to artificial intelligence, the IBM Blue Waters project supercomputer, like the HAL 9000 series, will be able to do massively complex calculations in an instant and, like HAL, be built in Urbana-Champaign. It is being housed in a special building on the Urbana-Champaign campus specifically for the computer that will theoretically be capable of achieving 10 petaflops, about 10 times as fast as the fastest supercomputer today. (A petaflop is 1 quadrillion floating point operations per second, a key indicator of supercomputer performance.)

Part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois, it will be the largest publicly accessible supercomputer in the world when it's turned on sometime in 2011.

The data center for this will look like this:

Artist rendering of the University of Illinois center that will house IBM's Blue Waters supercomputer

(Credit: University of Illinois)
