Google Infrastructure is more than Data Centers and Servers; it's software

In the data center world, if you hear the word infrastructure you naturally think of data centers and servers.  Why not?  Infrastructure is defined as:

Infrastructure is the basic physical and organizational structures needed for the operation of a society or enterprise,[1] or the services and facilities necessary for an economy to function.[2] The term typically refers to the technical structures that support a society, such as roads, water supply, sewers, electrical grids, telecommunications, and so forth.

A couple of years ago at a conference I was talking to a Google architect, and I eventually asked what he did.  He said, "I work on the infrastructure."  When he said infrastructure, I named a few people in the data center group I had run into at data center conferences, but he didn't know any of those people.  Then he repeated, "I work on THE infrastructure.  What we build our applications on - search, storage, compute."  OHHH, you guys get it that your infrastructure is more than physical devices.  Software is the infrastructure few think about when building services; most typically think of physical infrastructure.

GigaOm has a post with Google's infrastructure czar, Urs Hölzle.  Om Malik says it has been 5 years since he touched base with Urs.  I would never go that long.

Hölzle was the company’s first VP of engineering, and he has led the development of Google’s technical infrastructure.

Hölzle’s current responsibilities include the design and operation of the servers, networks and data centers that power Google. It would be an understatement to say that he is amongst the folks who have shaped the modern web-infrastructure and cloud-related standards.

When you read the GigaOm post don't just think physical infrastructure, think about the software Google has in place to support cloud services.

Others might disagree, but Hölzle believes Google’s common infrastructure gives it a technological and financial edge over on-premise solutions. “We’re able to avoid some of that fragmentation and build on a common infrastructure,” says Hölzle. “That’s actually one of the big advantages of the cloud.”

GigaOm points to 10 ways to achieve a Greener Data Center

Next week, on Apr 21, 2011, is Green:Net 2011, and the folks at GigaOm have been helping to spread the news about green data centers.  Here is a post on 10 ways to green the data center.

10 Ways Data Centers Are Becoming Greener

By Katie Fehrenbacher Apr. 13, 2011, 12:00am PT

Energy efficient data centers have solidly moved into the (low power) spotlight in recent weeks thanks to the Open Compute Project from Facebook. Last week the social network giant shared an unprecedented amount of data about its low power servers and data center designs that have enabled its new data center in Oregon to be remarkably energy efficient. To me, the move shows just how important these energy efficient characteristics have become for the leading Internet companies as a way to stay competitive and keep their energy costs as low as possible.

Four years ago, when I started blogging on green data centers, people thought I was silly.  Now it is common and expected for data center operators to discuss their environmental impact.

I have my next idea brewing on what I should blog about as a whole topic in data centers that no one discusses now.  But for now, green data centers have me plenty busy.

Fed CIO targets a key area to improve Fed IT: World Class Program Managers

I was reading ZDNet's post on the Fed's target to close 800 data centers by 2015.

Fed CIO Kundra: We need to shut 800 data centers down by 2015

By Larry Dignan | April 12, 2011, 1:49pm PDT

Summary

Vivek Kundra, the chief information officer of the Federal government, said Tuesday that the government is actively shutting down 800 data centers by 2015.

Then I read the PDF testimony by Vivek Kundra to understand more, and found the section on strengthening program managers.

[Image: excerpt from Kundra's testimony on strengthening program managers]

I spent much of my career at Apple and Microsoft as a program manager working on operating system releases.  Three of the best I worked with are:

Sheila Brady - Project Leader for System 7

Dennis Adler - Group Program Manager for Windows 95


While a director for MSR, Mr. Adler led a team from Research to the Windows Server Division and oversaw the initial development of a new server deployment and management technology that shipped with Windows Server 2003; formed the University Relations Group in MSR and oversaw its rapid growth and initial expansion into Europe and Asia; was instrumental in MSR’s initial external public relations endeavors; and was a liaison between MSR and Microsoft’s product teams, as well as a Group Program Manager in the Personal Systems Division.  Mr. Adler led the team responsible for designing and overseeing the development of the core components for Windows 95.

John Medica - Project Leader for Macintosh II

John K. Medica retired as Senior Vice President and Co-Leader, Product Group from Dell Inc. in April 2007. In 1993, Mr. Medica joined Dell as Vice President, Portable Systems. During 1996, he served as President and Chief Operating Officer of Dell’s Japan division. He returned to the U.S. in August 1997 as Vice President, Procurement, and later served as Vice President, Web Products Group, and Vice President and General Manager, Transactional Product Group. Prior to joining Dell, he served as Project Leader for the Macintosh II, Director of the Macintosh CPU Projects Group and Senior Director of PowerBook Engineering with Apple Computer. Mr. Medica received his bachelor’s degree in Electrical Engineering from Manhattan College, and his master’s degree in Business Administration from Wake Forest University. Mr. Medica is currently a director of Compal Electronics, Inc., a publicly traded company.

These are some of the top program managers I learned a lot from; they had the unique talent to manage complex projects with hundreds if not thousands of people, on projects that were market successes.  And I have been lucky to be able to stay connected with these great program managers even after the products shipped.

Facebook Prineville Data Center Design and Site Selection Video

Jay Park, Facebook's Director of Data Center Design, presents at the Open Compute Project.

Jay explains three reasons why Prineville was chosen.

    1. Available power
    2. Network
    3. Climate for running without chillers

He highlights the 480V/277V power system and the chiller-less cooling system.

The above video includes a 3D rendering of the data center showing airflow.  Note: these 3D renderings are not part of the Open Compute Project web site.