Google increases its Renewable Energy 73% - which means more data center capacity?

Google has a blog post on adding more wind power.  Adding 240MW on top of 330MW = 570MW of wind power.  Which means???  Possibly Google is adding 73% more data center capacity over the next year or so.  Why else would Google, which is carbon neutral, add another 240MW?  The new wind farm capacity comes online by the end of 2014.
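Here is the back-of-envelope math behind that 73% (my arithmetic, not Google's):

```python
# Quick sanity check on the 73% figure: the new purchase as a fraction of
# Google's previously contracted wind capacity.

existing_mw = 330   # wind power contracted before this deal
new_mw = 240        # the Happy Hereford purchase

print(f"Total: {existing_mw + new_mw} MW, an increase of {new_mw / existing_mw:.0%}")
# Total: 570 MW, an increase of 73%
```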

Another windy day in Texas: a new power purchase agreement

9/17/13 | 9:00:00 AM

 

(Cross-posted on the Official Google Blog)

As part of our quest to power our operations with 100% renewable energy, we’ve agreed to purchase the entire output of the 240 MW Happy Hereford wind farm outside of Amarillo, Texas. This agreement represents our fifth long-term agreement and our largest commitment yet; we’ve now contracted for more than 570 MW of wind energy, which is enough energy to power approximately 170,000 U.S. households.

The Happy Hereford wind farm, which is expected to start producing energy in late 2014, is being developed by Chermac Energy, a small, Native American-owned company based in Oklahoma. The wind farm will provide energy to the Southwest Power Pool (SPP), the regional grid that serves our Mayes County, Okla. data center.
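As a rough sanity check on the 170,000-households figure: wind farms produce well under their nameplate capacity on average, so the math works out once you apply a capacity factor. The capacity factor and household draw below are my assumed typical values, not Google's numbers.

```python
# Rough check of "570 MW powers ~170,000 homes". Inputs are assumed typical
# values, not Google's figures.

nameplate_kw = 570 * 1000
capacity_factor = 0.35         # assumed for a good wind site
avg_household_kw = 1.2         # ~10,500 kWh/year average U.S. home

households = nameplate_kw * capacity_factor / avg_household_kw
print(f"Households powered: {households:,.0f}")   # ~166,000
```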

Google Publishes the 2nd Edition of Its Data Center Book, "The Datacenter as a Computer"

For newbies it can be hard to learn about data centers.  One useful document is The Datacenter as a Computer paper, which Google published back in 2009.

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines

Authors: Luiz André Barroso and Urs Hölzle

Publication Year: 2009

Abstract: As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers….
 
And in July 2013 the 2nd edition was released. 

Notes for the Second Edition

After nearly four years of substantial academic and industrial developments in warehouse-scale computing, we are delighted to present our first major update to this lecture. The increased popularity of public clouds has made WSC software techniques relevant to a larger pool of programmers since our first edition. Therefore, we expanded Chapter 2 to reflect our better understanding of WSC software systems and the toolbox of software techniques for WSC programming. In Chapter 3, we added to our coverage of the evolving landscape of wimpy vs. brawny server trade-offs, and we now present an overview of WSC interconnects and storage systems that was promised but lacking in the original edition. Thanks largely to the help of our new co-author, Google Distinguished Engineer Jimmy Clidaras, the material on facility mechanical and power distribution design has been updated and greatly extended (see Chapters 4 and 5). Chapters 6 and 7 have also been revamped significantly. We hope this revised edition continues to meet the needs of educators and professionals in this area.

The PDF is here, and it has lots of great diagrams.
 
One of the parts I liked was the section on modeling for partially filled data centers.
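The book has a real TCO model for this; the sketch below is just my minimal illustration of the core idea, with every dollar figure and lifetime assumed for illustration: facility capex is fixed by the capacity you build, so the per-server share balloons when the floor is only partially filled.

```python
# Minimal sketch (not the book's actual model; all inputs assumed) of why a
# partially filled data center is expensive: facility capex is paid on full
# critical-power capacity but amortized over only the servers installed.

def monthly_cost_per_server(fill_fraction,
                            facility_capex_per_watt=12.0,  # assumed $/W of critical power
                            facility_life_months=144,      # assumed ~12-year amortization
                            server_watts=300,              # assumed per-server draw
                            server_capex=2000.0,           # assumed server price
                            server_life_months=48):        # assumed ~4-year refresh
    # The facility share per server grows as 1/fill_fraction.
    facility_share = (facility_capex_per_watt * server_watts
                      / facility_life_months) / fill_fraction
    server_share = server_capex / server_life_months
    return facility_share + server_share

for fill in (0.25, 0.50, 1.00):
    print(f"{fill:.0%} full: ${monthly_cost_per_server(fill):.2f} per server per month")
```

At these assumed numbers, a facility that is 25% full costs more than twice as much per server per month as a full one.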
There is also a good figure showing the distribution of service disruption events.

And the closing conclusion:

8.5 CONCLUSIONS

Computation is moving into the cloud, and thus into WSCs. Software and hardware architects must be aware of the end-to-end systems to design good solutions. We are no longer designing individual “pizza boxes,” or single-server applications, and we can no longer ignore the physical and economic mechanisms at play in a warehouse full of computers. At one level, WSCs are simple—just a few thousand cheap servers connected via a LAN. In reality, building a cost-efficient massive-scale computing platform that has the necessary reliability and programmability requirements for the next generation of cloud-computing workloads is as difficult and stimulating a challenge as any other in computer systems today. We hope that this book will help computer scientists understand this relatively new area, and we trust that, in the years to come, their combined efforts will solve many of the fascinating problems arising from warehouse-scale systems.

Mobile Devices Set Up for a Three-Company Competition - Apple, Google, and Microsoft

With Microsoft's acquisition of Nokia's phone business, it looks like there is a three-company race with Apple and Google.  Phones and tablets are growing faster than any other technology device.

One way to think about mobile device companies is as a three-company competition.  Here is a paper on three-company competition and its history.

COMPETITIVE MARKETS AND THE RULE OF THREE 
by Jagdish N. Sheth and Rajendra S. Sisodia 
Strategy 

The “Big Three” no longer have the automobile market to themselves, but almost every market, including the one for cars, is ruled by three dominant firms. That reality does not prevent other firms from being successful. However, all firms, regardless of their market share, must still understand The Rule of Three and how it will affect their strategy and attempt to operate efficiently.

Over the past several years, the world economy, principally in the developed free market economies of Europe and North America, has been characterized by a unique economic phenomenon: the combination of mergers and demergers at record levels (demergers are the spin-offs of non-core businesses). As a result, the landscape of just about every major industry has changed in a significant way, moving inexorably toward what we call the “Rule of Three.” The recent economic downturn has slowed but not halted this fundamental evolution, nor has it altered its basic direction.

We note that the Rule of Three is much more than an interesting theoretical construct; it is a powerful empirical reality that must be factored into corporate strategizing. Understanding the likely end-points of market evolution is critical to the ability of executives to develop strategies that will result in success.

Insight into Google Data Center Operations, Site Reliability Presentation at PuppetConf

I was at PuppetConf 2013 in SF for the first time and had a great time.  After the opening keynote by Luke Kanies came a presentation by Google Site Reliability Engineer Gordon Rowell.

Google’s Corporate Engineering SRE team provides infrastructure services used by many of Google’s desktops, laptops and servers. This talk gives an overview of the design philosophy, challenges, technologies and some interesting failures seen while implementing infrastructure at scale.
Speakers

Gordon Rowell

Site Reliability Manager, Google
Gordon Rowell is a site reliability manager at Google, Sydney. His team focuses on delivering services to Googlers around the world. They have migrated major internal services to run on Google technology and are currently focused on removing dependencies on the corporate network. He enjoys the challenges of building robust systems that scale and has a particular passion for configuration management.

The presentation is here.

My key takeaways are in the slides.


How Many Servers does Google have? 1 Million cumulative on July 9, 2008

Microsoft's exiting CEO Steve Ballmer said Microsoft has a million servers.

At the Microsoft World-Wide Partners Conference, Microsoft CEO Steve Ballmer announced that “We have something over a million servers in our data center infrastructure. Google is bigger than we are. Amazon is a little bit smaller. You get Yahoo! and Facebook, and then everybody else is 100,000 units probably or less.”

Google reached its million-server cumulative mark on July 9, 2008.

How many servers does Google employ? It’s a question that has dogged observers since the company built its first data center. It has long stuck to “hundreds of thousands.” (There are 49,923 operating in the Lenoir facility on the day of my visit.) I will later come across a clue when I get a peek inside Google’s data center R&D facility in Mountain View. In a secure area, there’s a row of motherboards fixed to the wall, an honor roll of generations of Google’s homebrewed servers. One sits atop a tiny embossed plaque that reads JULY 9, 2008. GOOGLE’S MILLIONTH SERVER. But executives explain that this is a cumulative number, not necessarily an indication that Google has a million servers in operation at once.

When will Amazon reach a million servers?  When will Facebook?  Is it really that big of a deal?  Maybe if you are in the media looking for a story.

To give you a sense of how clueless some people are: who cares how many servers?  The important thing is how many cores there are in the environment.  The number of cores, and the quality of them, is what matters for running services, not the number of servers.  D'Oh.
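To make that concrete, here is a toy comparison with hypothetical fleet sizes and core counts:

```python
# Hypothetical fleets: a "smaller" fleet of newer machines can out-muscle a
# larger fleet of older ones, so raw server counts mislead.

fleets = {
    "1,000,000 older servers, 8 cores each": 1_000_000 * 8,
    "750,000 newer servers, 16 cores each": 750_000 * 16,
}

for name, cores in fleets.items():
    print(f"{name}: {cores:,} total cores")
# The fleet with 25% fewer servers has 50% more cores.
```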