Google Warehouse Scale Computing pattern harvested: solving current and future performance problems

The Open Source Data Center Initiative is using a pattern-based approach.

In software engineering, a design pattern is a general reusable solution to a commonly occurring problem in software design. A design pattern is not a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations.

I was reading Google's Warehouse-Scale Computing document, which can be daunting with its 120 pages of dense topics. One of the points made is an example of a design pattern that applies under the following conditions:

Key pieces of Google’s services have release cycles on the order of a couple of weeks compared to months or years for desktop software products. Google’s front-end Web server binaries, for example, are released on a weekly cycle, with nearly a thousand independent code changes checked in by hundreds of developers—the core of Google’s search services has been reimplemented nearly from scratch every 2 to 3 years.

This may not sound like your environment, but it is common in agile, dynamic SW development at Google, start-ups, and other leading-edge IT shops.

Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-organization and accountability, a set of engineering best practices intended to allow for rapid delivery of high-quality software, and a business approach that aligns development with customer needs and company goals.

The old way was to purchase IT hardware to support an application's SLA; that is now a lower priority. The new way is to add hardware capabilities that support rapid innovation in SW development.

A beneficial side effect of this aggressive software deployment environment is that hardware architects are not necessarily burdened with having to provide good performance for immutable pieces of code. Instead, architects can consider the possibility of significant software rewrites to take advantage of new hardware capabilities or devices.

BTW, immutable in SW has a specific meaning, one which applies to many legacy systems.

In object-oriented and functional programming, an immutable object is an object whose state cannot be modified after it is created. This is in contrast to a mutable object, which can be modified after it is created.
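As a minimal sketch of the distinction (in Python; the class names are my own illustration, not from the Google paper), a frozen dataclass behaves like an immutable object, while an ordinary dataclass is mutable:

```python
from dataclasses import dataclass, FrozenInstanceError

# Immutable: state is fixed once the object is created.
@dataclass(frozen=True)
class ImmutablePoint:
    x: float
    y: float

# Mutable: state can be changed after creation.
@dataclass
class MutablePoint:
    x: float
    y: float

p = MutablePoint(1.0, 2.0)
p.x = 5.0  # fine: a mutable object can be modified in place

q = ImmutablePoint(1.0, 2.0)
try:
    q.x = 5.0  # a frozen dataclass rejects assignment after creation
except FrozenInstanceError:
    print("immutable objects cannot be modified after creation")
```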

Problem: How do you improve the performance per watt of data center infrastructure and IT hardware?

Options:

  1. Improve data center efficiency, aka PUE.
  2. Buy more efficient IT HW.
  3. Improve HW utilization with virtualization and server consolidation.
  4. Add new hardware capabilities that support the future of software.

Solution: Even though options 1 - 3 are typical, the efficiencies from #4 could be sizeably larger. Some part of the data center and IT hardware should be designed for future applications, instead of making future applications run on what past applications required.
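To make the arithmetic concrete, here is a minimal sketch (in Python; all the numbers are illustrative assumptions, not measurements from the paper) comparing the headroom of option 1 with option 4. PUE is total facility power divided by IT equipment power, so even a perfect facility (PUE = 1.0) can at best recover the facility overhead, while a hardware/software redesign can multiply the useful work per watt:

```python
# Performance per watt = useful work / total facility power.
# Total facility power = IT power * PUE (PUE = facility power / IT power).

def perf_per_watt(work_units: float, it_power_kw: float, pue: float) -> float:
    """Useful work delivered per kW of total facility power."""
    return work_units / (it_power_kw * pue)

baseline = perf_per_watt(work_units=1000, it_power_kw=100, pue=2.0)

# Option 1: improve the facility only. Even a perfect PUE of 1.0
# caps the gain at 2x for a site that started at PUE 2.0.
facility_fix = perf_per_watt(work_units=1000, it_power_kw=100, pue=1.0)

# Option 4: rewrite software for new hardware (e.g., GPUs, solid-state
# memory). Assume, hypothetically, 5x the work at the same IT power.
hw_sw_redesign = perf_per_watt(work_units=5000, it_power_kw=100, pue=2.0)

print(f"baseline:             {baseline:.1f} work/kW")
print(f"facility fix (opt 1): {facility_fix:.1f} work/kW, {facility_fix/baseline:.1f}x")
print(f"redesign (opt 4):     {hw_sw_redesign:.1f} work/kW, {hw_sw_redesign/baseline:.1f}x")
```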

Examples of such technologies are NVIDIA's GPUs, solid-state memory, startups with new hardware designs like www.tilera.com, and a complete re-architecture of the data center system.

People are working on the complete re-architecture of the data center system because the performance-per-watt gains are huge.

How many data centers are designed for the current hardware vs. the future? 50%, 75%, 90%, 95%, 98%?

Should data centers be designed for a 5-year lifespan vs. 20 - 30 years to support more rapid innovation? And then be upgradable?