Google Data Centers used to save energy and lives

Google’s official blog shares a vision to use Google’s data centers to save energy and lives.

What we’re driving at

10/09/2010 12:00:00 PM

Larry and Sergey founded Google because they wanted to help solve really big problems using technology. And one of the big problems we’re working on today is car safety and efficiency. Our goal is to help prevent traffic accidents, free up people’s time and reduce carbon emissions by fundamentally changing car use.

Data Centers are key.

This is all made possible by Google’s data centers, which can process the enormous amounts of information gathered by our cars when mapping their terrain.

Enabling a bunch of smart people.

To develop this technology, we gathered some of the very best engineers from the DARPA Challenges, a series of autonomous vehicle races organized by the U.S. Government. Chris Urmson was the technical team leader of the CMU team that won the 2007 Urban Challenge. Mike Montemerlo was the software lead for the Stanford team that won the 2005 Grand Challenge. Also on the team is Anthony Levandowski, who built the world’s first autonomous motorcycle that participated in a DARPA Grand Challenge, and who also built a modified Prius that delivered pizza without a person inside. The work of these and other engineers on the team is on display in the National Museum of American History.

Lots of sensor data is used.

Our automated cars use video cameras, radar sensors and a laser range finder to “see” other traffic, as well as detailed maps (which we collect using manually driven vehicles) to navigate the road ahead.

Is the future a Google logo’d car?  :-)

We’ve always been optimistic about technology’s ability to advance society, which is why we have pushed so hard to improve the capabilities of self-driving cars beyond where they are today. While this project is very much in the experimental stage, it provides a glimpse of what transportation might look like in the future thanks to advanced computer science. And that future is very exciting.

Read more

Google's Urs Hölzle explains why beefier cores are better than wimpy cores

The Register covers a new paper by Google's Urs Hölzle.

Google ops czar condemns multi-core extremists

Sea of 'wimpy' cores will sink you

By Cade Metz in San Francisco

Posted in Servers, 17th September 2010 07:04 GMT

Google is the modern data poster-child for parallel computing. It's famous for splintering enormous calculations into tiny pieces that can then be processed across an epic network of machines. But when it comes to spreading workloads across multi-core processors, the company has called for a certain amount of restraint.

With a paper (PDF) soon to be published in IEEE Micro, the IEEE magazine of chip and silicon design, Google Senior Vice President of Operations Urs Hölzle – one of the brains overseeing the web giant's famous back-end – warns against the use of multi-core processors that take parallelization too far. Chips that spread workloads across more energy-efficient but slower cores, he says, may not be preferable to chips with faster but power-hungry cores.

The paper is here and only two pages long. As for what motivated Urs to write it, I think it was his frustration that too many people focus on the number of cores available to solve a problem and don't consider what happens to the overall system when you try to solve problems with a bunch of wimpy cores vs. brawny cores.

We classify multicore systems as brawny-core systems, whose single-core performance is fairly high, or wimpy-core systems, whose single-core performance is low. The latter are more power efficient. Typically, CPU power decreases by approximately O(k²) when CPU frequency decreases by k, and decreasing DRAM access speeds with core speeds can save additional power.
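
To see why the wimpy cores look attractive at the silicon level, here is a back-of-the-envelope Python sketch (my own illustration, not from the paper) comparing one brawny core against k slower cores of equal aggregate throughput, assuming the rough power-vs-frequency scaling quoted above:

def relative_power(freq_scale):
    """CPU power of one core at freq_scale of full frequency (1.0 = brawny), assuming power ~ frequency^2."""
    return freq_scale ** 2

def wimpy_cluster_power(k):
    """Total CPU power of k wimpy cores, each k times slower, matching one brawny core's throughput."""
    return k * relative_power(1.0 / k)

for k in (1, 2, 4, 8):
    print("%d core(s) at 1/%d frequency -> %.3fx the brawny core's CPU power" % (k, k, wimpy_cluster_power(k)))
# On raw CPU power the wimpy cluster wins by a factor of k; the rest of the
# paper asks what that saving costs in latency, DRAM, and utilization.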

Urs, as usual, uses excellent presentation skills to make his point in three areas.

First, the more threads handling a parallelized request, the larger the overall response time. Often all parallel tasks must finish before a request is completed, and thus the overall response time becomes the maximum response time of any subtask, and more subtasks will push further into the long tail of subtask response times. With 10 subtasks, a one-in-a-thousand chance of suboptimal process scheduling will affect 1 percent of requests (recall that the request time is the maximum of all subrequests), but with 1,000 subtasks it will affect virtually all requests.
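
That one-in-a-thousand arithmetic is easy to verify. A minimal sketch of the calculation (mine, not Hölzle's), treating each subtask's slow-path event as independent with probability p, so a request is slow whenever any of its n subtasks is:

def prob_request_slow(p, n):
    """Probability that at least one of n parallel subtasks hits the slow path."""
    return 1.0 - (1.0 - p) ** n

p = 0.001  # one-in-a-thousand chance of suboptimal scheduling per subtask
for n in (10, 100, 1000):
    print("%5d subtasks -> %.1f%% of requests hit the slow path" % (n, 100 * prob_request_slow(p, n)))
# 10 subtasks land right around the 1 percent figure; 1,000 subtasks push the
# slow path onto the majority of requests.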

In addition, a larger number of smaller systems can increase the overall cluster cost if fixed non-CPU costs can’t be scaled down accordingly. The cost of basic infrastructure (enclosures, cables, disks, power supplies, network ports, cables, and so on) must be shared across multiple wimpy-core servers, or these costs might offset any savings. More problematically, DRAM costs might increase if processes have a significant DRAM footprint that’s unrelated to throughput. For example, the kernel and system processes consume more aggregate memory, and applications can use memory-resident data structures (say, a dictionary mapping words to their synonyms) that might need to be loaded into memory on multiple wimpy-core machines instead of a single brawny-core machine.
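
To make the fixed-cost point concrete, here is an illustrative Python sketch; the dollar figures and DRAM sizes are made-up assumptions of mine, not numbers from the paper:

FIXED_COST_PER_SERVER = 500       # enclosure, disks, power supply, network port (hypothetical dollars)
RESIDENT_DRAM_GB_PER_SERVER = 4   # kernel, system processes, replicated in-memory data (hypothetical)
DRAM_COST_PER_GB = 10             # hypothetical dollars

def per_box_overhead(num_servers):
    """Non-CPU cost that is paid once per server and so grows with the server count."""
    fixed = num_servers * FIXED_COST_PER_SERVER
    dram = num_servers * RESIDENT_DRAM_GB_PER_SERVER * DRAM_COST_PER_GB
    return fixed + dram

for servers in (10, 40, 160):     # same aggregate throughput delivered by smaller and smaller boxes
    print("%4d servers -> $%s of fixed + resident-DRAM overhead" % (servers, format(per_box_overhead(servers), ",")))
# The CPU bill may stay flat or even drop, but these per-box costs scale
# linearly with the server count and can offset the wimpy-core savings.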

Third, smaller servers can also lead to lower utilization. Consider the task of allocating a set of applications across a pool of servers as a bin-packing problem—each of the servers is a bin, and we try to fit as many applications as possible into each bin. Clearly that task is harder when the bins are small, because many applications might not completely fill a server and yet use too much of its CPU or RAM to allow a second application to coexist on the same server. Thus, larger bins (combined with resource containers or virtual machines to achieve performance isolation between individual applications) might offer a lower total cost to run a given workload.
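
The bin-packing analogy is easy to play with in a few lines. A toy sketch (my own example with made-up application sizes, not Google data) using simple first-fit placement: each size-5 application fits three to a 16-unit server, but only one to an 8-unit server, so the small servers sit roughly one-third empty:

def first_fit(app_sizes, capacity):
    """Place each application in the first server with room; return the per-server loads."""
    servers = []
    for size in app_sizes:
        for i, load in enumerate(servers):
            if load + size <= capacity:
                servers[i] = load + size
                break
        else:
            servers.append(size)   # no existing server has room, open a new one
    return servers

apps = [5, 5, 5, 5, 5, 5]          # hypothetical per-application CPU/RAM demands

for capacity in (16, 8):           # brawny-sized vs wimpy-sized servers
    loads = first_fit(apps, capacity)
    utilization = sum(loads) / float(len(loads) * capacity)
    print("capacity %2d: %d servers, %.0f%% average utilization" % (capacity, len(loads), 100 * utilization))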

How many data center operation VPs can write this paper?  One.  :-)

Keep the number of cores in mind for a green data center: smaller, energy-efficient processors may not be the most efficient overall.

Read more

What is the future of a Data Center Glasnost?

DataCenterKnowledge’s Rich Miller wrote a good post on the presentation by Google’s Chris Malone at the Uptime Institute in Apr 2009 and on Daniel Costello’s call for a Data Center Glasnost.

Microsoft, Google and Data Center Glasnost

April 16th, 2009 : Rich Miller

Chris Malone of Google speaks Tuesday at the Uptime Institute Symposium 2009 in New York. Listening at right is Uptime Institute founder Ken Brill.

One of the best-attended Tuesday sessions at The Uptime Institute’s Symposium 2009 in New York was a presentation by Google’s Chris Malone. As has been noted elsewhere, Malone’s talk summarized much of the information that Google disclosed April 1 at its Data Center Efficiency Summit. But there was a noteworthy moment during the question and answer period when Daniel Costello approached the mike.

Daniel went on to present the idea of a Glasnost.

“Microsoft applauds Google’s openness in presenting this information,” Costello said. “It’s moving us forward to a data center glasnost of sorts.” Glasnost, for those with short memories, was the policy of openness and transparency that Mikhail Gorbachev introduced in the Soviet Union in the 1980s.

Google’s Chris Malone responds.

Over the past year Microsoft has been actively discussing some of its data center innovations and best practices at industry events. Responding to Costello, Malone said Google intends to pursue a similar path, reversing years of secrecy about its data center operations. “One of the reasons we’re here is to share in the industry discussions,” said Malone, who added that Google has now joined The Green Grid, one of the industry consortiums on energy efficiency.

Rich Miller makes an excellent point, though, about the differences in what Microsoft and Google are presenting.

There are differences in the two companies’ approaches. Microsoft is talking publicly about its future data center design plans, like the “Generation 4” plan for roofless container farms. Google’s disclosures thus far have focused on older facilities that likely don’t represent the 2008 model year for its data centers. And as happened at Uptime, there will be continuing debates in the industry about how much of the innovation seen at Google and Microsoft is relevant to smaller data centers.

But with Daniel Costello moving to Google, will Glasnost and the spirit of openness change into a Cold War? Rich Miller closed his post making the point about a cold war.

But when it comes to expert information on best practices, more is better. Like the end users, the data center industry has its share of information siloes, and it’s good to see that starting to change. Much hard work remains. But Glasnost is far better than a data center Cold War.

If you follow the Cold War analogy, who is the Soviet Union and who is the US?

Google has been building data centers longer than Microsoft, and it is proud that it moved to containers before Microsoft did.

Both Google and Microsoft have a bunch of money and a lot to win and lose in the data center wars.

Is Daniel Costello’s move to Google a tipping point?

From Publishers Weekly

The premise of this facile piece of pop sociology has built-in appeal: little changes can have big effects; when small numbers of people start behaving differently, that behavior can ripple outward until a critical mass or "tipping point" is reached, changing the world. Gladwell's thesis that ideas, products, messages and behaviors "spread just like viruses do" remains a metaphor as he follows the growth of "word-of-mouth epidemics" triggered with the help of three pivotal types. These are Connectors, sociable personalities who bring people together; Mavens, who like to pass along knowledge; and Salesmen, adept at persuading the unenlightened. (Paul Revere, for example, was a Maven and a Connector). Gladwell's applications of his "tipping point" concept to current phenomena--such as the drop in violent crime in New York, the rebirth of Hush Puppies suede shoes as a suburban mall favorite, teenage suicide patterns and the efficiency of small work units--may arouse controversy.

How ironic that Daniel called for Glasnost in Apr 2009 as a Microsoft data center executive and in Sept 2010 will become a Google data center executive.

Read more

Microsoft Data Center Director Daniel Costello joins Google

Data centers are competitive advantages for the Internet companies, and how much you know about your competitors helps you plan your future.  Microsoft's Daniel Costello has been a heavily recruited data center executive for months (I count eight since I first heard he was being recruited).  He finally made his decision ... to join Google, leaving Microsoft.

Who is Daniel Costello?  Daniel is the person in the center of this photo.

And he had this role at Microsoft.

Daniel Costello, director of Data Center Services

     Global Foundation Services

Daniel Costello is the director for Data Center Services at Microsoft, responsible for data center research and engineering, standards and technologies, data center technology roadmap, Generation 4 data center engineering, data center automation and integration with IT hardware, operating systems and applications.  Daniel also works closely with Microsoft Research on proof of concepts in support of the data center of the future and manages a team of facility engineers and service architects. 

I don't know Daniel's new role at Google.  Director of Generation 5 data center engineering? :-)  Given Daniel's move to Google, I doubt we'll hear for quite a while what he is doing.

I think Daniel could have the title Data Center Wizard as he knows more than anyone else in the industry about Google and Microsoft's data centers and IT infrastructure.  How much is Daniel's knowledge worth?

Here are two videos from Daniel's presentation 2 years ago at GigaOm.

The funny thing is I just happened to connect with Daniel on LinkedIn last week.  Daniel provided no information for this blog post, but I had a hunch it was time to connect.

Daniel is one sharp guy who has impressed many.  Here is my post about his engineering approach.

Microsoft’s Daniel Costello, Engineering Approach to Solve Data Center Design

Microsoft’s Daniel Costello has a good post on an engineering approach to solve data center business problems.

...

Now let’s look at Daniel’s steps.

1) Time to Market

2) Cost

3) Efficiency

4) Flexibility and Density

And the goals of the Microsoft team.

The Goals our Engineering Team Set

· Reduce time-to-market and deliver the facility at the same time as the computing infrastructure

· Reduce capital cost per megawatt and reduce COGS per kilowatt per month by class

· Increase ROIC and minimize the up-front investment for data centers

· Differentiate reliability and redundancy by data center class and design the system to be flexible to accommodate any class of service in the same facility

· Drive data center efficiency up while lowering PUE, water usage, and overall TCOE

· Develop a solution to accept multiple levels of density and form factors, such as racks, skids, or containers

Read more

Google Video Parody of “Don’t Be Evil” - Humor

Gizmodo posts on a Taiwanese video spoof of Google’s latest news coverage and “don’t be evil,” with an extra bit of irony in that it is hosted on YouTube.

Taiwanese News Animates "Google Goes Evil"

From the talented minds who created animated videos for the iPhone 4, Tiger Woods and HP sexual harassment scandals comes... Evil Google! Seriously, these minute-long Taiwanese videos are the highlight of my job.

Here is the video.

Read more