Urs Hoelzle's keynote at Google European Data Center Summit 2011

James Hamilton has posted his notes on Urs Hoelzle's keynote speech at Google's European Data Center Summit 2011. 

2011 European Data Center Summit

The European Data Center Summit 2011 was held yesterday at the Sihlcity CineCenter in Zurich. Google Senior VP Urs Hoelzle kicked off the event talking about why data center efficiency is important both economically and socially. He went on to point out that the oft-quoted figure that US data centers represent 2% of total energy consumption is usually misunderstood. The actual data point is that 2% of the US energy budget is spent on IT, of which the vast majority is client-side systems. This is unsurprising, but a super important clarification. The full breakdown of this data:

- 2% of US power
  - Datacenters: 14%
  - Telecom: 37%
  - Client devices: 50%

What will get little press is this statement by Urs.

Summarizing: Datacenters consume 0.28% of the annual US energy budget. 72% of these centers are small- and medium-sized centers that tend toward the lower efficiency levels.
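The 0.28% number is just the product of the two shares above; the arithmetic:

```python
# Worked arithmetic behind the 0.28% figure, using the shares from the talk.
it_share_of_us_energy = 0.02       # IT consumes 2% of the US energy budget
datacenter_share_of_it = 0.14      # data centers are 14% of that IT slice

datacenter_share_of_us = it_share_of_us_energy * datacenter_share_of_it
print(f"{datacenter_share_of_us:.2%}")  # prints "0.28%"
```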

Google 250kW green data center: videos and PDF on improving PUE from 2.4 to 1.5

If you are looking to improve the PUE of a small data center (250kW), Google has shared the best practices it has implemented at five locations.

The question I asked Google's Joe Kava was what he thought the PUE could go to with more load in the facility.  The current load is 85 kW of a 250kW capacity.  PUE is currently at 1.5, and more load could in theory bring PUE to 1.35-1.4, but there will be a step function when an additional CRAC unit needs to be brought online.
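As a sanity check on those numbers, here is a minimal sketch of how PUE behaves as load grows and the CRAC step kicks in. The overhead and per-CRAC figures are assumptions chosen only so the starting point matches today's 1.5 at 85 kW; they are not Google's actual data.

```python
def pue(it_load_kw, fixed_overhead_kw, crac_units, crac_kw_each):
    """PUE = total facility power / IT power."""
    overhead_kw = fixed_overhead_kw + crac_units * crac_kw_each
    return (it_load_kw + overhead_kw) / it_load_kw

# Assumed numbers picked so PUE is exactly 1.5 at today's 85 kW with 2 CRACs.
print(pue(85, 22.5, 2, 10))    # prints 1.5
print(pue(200, 22.5, 2, 10))   # same overhead amortized over more load: lower PUE
print(pue(200, 22.5, 3, 10))   # bringing a third CRAC online: a step back up
```

The point of the sketch is the shape of the curve, not the exact values: mostly-fixed overhead means PUE falls as IT load rises, until an extra CRAC adds a discrete jump.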

There are 4 CRAC units in the room, and currently only 2 run while the other 2 are back-ups.  After a week of operation, the roles are flipped, evening out the wear on equipment and ensuring the back-up cooling is operational.

Here is a PDF that describes the whole project - Google's Green Data Centers: Network POP Case Study.


Here are before-and-after images of the airflow.  Joe said Google samples data once a second to monitor the power and cooling systems in the room.


A 10-minute video goes along with the white paper.

If you want to jump to specific areas of interest, here are 5 video segments you can view.

Data Center Efficiency Best Practices Part 1 - Intro and Measuring PUE

Data Center Efficiency Best Practices Part 2 - Manage Airflow

Data Center Efficiency Best Practices Part 3 - Raise the Thermostat

Data Center Efficiency Best Practices Part 4 - Utilize Free Cooling

Data Center Efficiency Best Practices Part 5 - Optimize Power Distribution

Google's Hamina Data Center follows the #1 rule of architecture: respect "the genius of a place"

Google posted a video on its Hamina Data Center.

Joe Kava is featured in the videos, but the video I would have really liked to see is Joe scuba diving through the sea water tunnel that brings water into the facility.  Below is a picture of the tunnel, built in 1950.


If you want to see what the facility looked like before, check out this tour Google gave to the press.

100% sea water use for cooling is a first for the data center industry, and Hamina has some unique characteristics.


Google added a bypass mixing function to the sea water cooling system to lower the temperature of the discharge back to the gulf, which was not a requirement from any government agency.  But Google recognized this change would reduce the environmental impact, which fits its sustainability strategy.
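The bypass works by blending cold intake sea water into the warm return stream before discharge. A simple mixing energy balance (with hypothetical flows and temperatures, not Hamina's figures) shows the effect:

```python
def mixed_discharge_temp(m_warm, t_warm, m_bypass, t_sea):
    """Energy balance for mixing two water streams of equal specific heat:
    the mixed temperature is the flow-weighted average of the two inputs."""
    return (m_warm * t_warm + m_bypass * t_sea) / (m_warm + m_bypass)

# Hypothetical numbers: warm return water at 20 C blended with 8 C raw sea water.
print(mixed_discharge_temp(100, 20.0, 50, 8.0))  # prints 16.0 (C back to the gulf)
```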


There was a lot of thought required for Google to build a sea water cooling system that runs 24x7 for years and years with no downtime.


I asked Joe some questions on his presentation, and one of the areas we covered was maintenance for a sea water cooling system: a typical assumption at companies that design sea water cooling systems is an annual maintenance window that includes downtime.  You can imagine the Google guys telling the engineering company there will be no downtime in this facility.

Below is where Google lists an integrated Clean-in-Place (CIP) system and other features that address sea water fouling to eliminate maintenance downtime.


37signals just posted ten lessons for great landscape architecture.  You may think landscape architecture doesn’t have anything to do with data center design, but great design is consistent across many fields.

Ten design lessons from Frederick Law Olmsted, the father of American landscape architecture (by Matt, May 23)

Frederick Law Olmsted (1822-1903), the father of American landscape architecture, may have more to do with the way America looks than anyone else. Beginning in 1857 with the design of Central Park in New York City, he created designs for thousands of landscapes, including many of the world’s most important parks.

His works include Prospect Park in Brooklyn, Boston’s Emerald Necklace, Biltmore Estate in North Carolina, Mount Royal in Montreal, the grounds of the U.S. Capitol and the White House, and Washington Park, Jackson Park and the World’s Columbian Exposition of 1893 in Chicago. (The last of those is documented excellently in Erik Larson’s book The Devil in the White City.) Plus, many of the green spaces that define towns and cities across the country are influenced by Olmsted.

Consider Lesson #1 of a great architect.

1) Respect “the genius of a place.”
Olmsted wanted his designs to stay true to the character of their natural surroundings. He referred to “the genius of a place,” a belief that every site has ecologically and spiritually unique qualities. The goal was to “access this genius” and let it infuse all design decisions.

This meant taking advantage of the unique characteristics of a site while also acknowledging its disadvantages. For example, he was willing to abandon the rainfall-dependent scenery he loved most for landscapes more appropriate to the climates he worked in. That meant a separate landscape style for the South, while in the drier, western parts of the country he used a water-conserving style (seen most visibly on the campus of Stanford University).

Think about these words and watch the video again.  Right at the beginning Joe talks about designing for the unique characteristics of a site.

You may think this idea is a waste of time and you don’t have the money, but 30 years from now, or 100 years from now, the great data center designs, the ones that match a site, will last and be upgraded.  Data centers that are not designed for “the genius of a place” will fade and be demolished.

Weak bolts suspend operations of 3 Korean submarines, a lesson in managing the supply chain

Mike Manos has his blog at http://loosebolts.wordpress.com/, and I found this article interesting on how weak bolts have suspended the operations of three Korean submarines.  Many data center industry professionals have served duty on a submarine.  Can you imagine how pissed off the operations crew would be at this problem?

For the first 1,800-ton submarine Sohn Won-il, a total of 20 bolts came loose on six occasions between 2006 and 2009.
For the second submarine Jeong Ji, its bolts were broken or loosened on six occasions between 2009 and 2010 while for the third submarine Ahn Jung-geun, its bolts were broken and came loose on three occasions during the same period.
The 214-class submarines, which were designed by Germany’s Howaldtswerke-Deutsche Werft AG, or HDW, and built by Hyundai Heavy Industries, are the primary naval assets for underwater operations.
The military investigated and found that a local subcontractor produced and provided bolts which were weaker than what the German firm required in its design of the submarines, sources said.

I've been having some interesting discussions on supply chain issues in the data center and the need for a Bill of Materials (BOM) approach.  I've tested the idea with some experienced people who understand the approach.  But to be successful, we need an executive sponsor.
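To make the BOM idea concrete, here is a hypothetical sketch (all part numbers, suppliers, and facility names are invented) of how supplier provenance in a BOM would let operators trace a bad-part problem like the submarine bolts across every affected facility:

```python
from dataclasses import dataclass, field

@dataclass
class BomItem:
    part_number: str
    description: str
    supplier: str
    spec: str                       # the spec the part is required to meet
    facilities: list = field(default_factory=list)  # where it is installed

# A toy BOM inventory spanning several facilities.
inventory = [
    BomItem("BOLT-214", "M12 mounting bolt", "SubCo",   "grade 10.9", ["DC1", "DC2"]),
    BomItem("BOLT-300", "M12 mounting bolt", "OtherCo", "grade 10.9", ["DC3"]),
]

# A supplier is found to ship substandard parts: list every exposed facility.
affected = {f for item in inventory if item.supplier == "SubCo"
            for f in item.facilities}
print(sorted(affected))  # prints ['DC1', 'DC2']
```

Without the supplier field on every line item, the submarine-style question of "where else did that subcontractor's bolts end up?" has no fast answer.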

Can you think of other data center problems caused by supply chain issues where substandard parts are installed?  I can.

Google takes available space at 111 8th in NYC off the market

DatacenterDynamics reports on Google's move to take 111 8th Ave. data center space off the market.

Google takes all available space at key NYC carrier hotel off market

Future of data center providers at 111 8th Ave. uncertain

Published 20th May, 2011 by Yevgeniy Sverdlik

111 8th Avenue in New York

Following its acquisition of one of the East Coast’s largest carrier hotels at 111 8th Ave. in New York City, Google has taken all the space that was available in the building off the market. The building is home to a number of commercial data center providers, including Digital Realty Trust, Telx and Internap, among others.

I speculated that Google could use the space in 111 8th for carrier negotiations.

Google now owns a premium networking access point in NYC, the biggest concentration of money in the USA with its financial firms, stock exchanges, and other businesses.

As Google negotiates carrier access in various markets, it can offer a presence in 111 Eighth Ave.  This can change price points, and guarantees of service and access.

If an emerging-market telecom sets up a relationship with Google and agrees to a presence at 111 Eighth Ave., then the more that telecom needs the location, for a variety of economic and technical reasons, the more the value works in Google's favor.

Did Google just buy one of the biggest bargaining chips it could have to negotiate access to worldwide telcos?

With CenturyLink's purchase of Savvis and Verizon's purchase of Terremark, could they do what Google is thinking?  It is interesting to think one building is more valuable than Savvis or Terremark.