Fed CIO targets a key area to improve Fed IT: World-Class Program Managers

I was reading ZDNET's post on the Fed's plan to close 800 data centers by 2015.

Fed CIO Kundra: We need to shut 800 data centers down by 2015

By Larry Dignan | April 12, 2011, 1:49pm PDT

Summary

Vivek Kundra, the chief information officer of the Federal government, said Tuesday that the government is actively shutting down 800 data centers by 2015.

Then I read the PDF testimony by Vivek Kundra to understand more, and found the section on strengthening program managers.

image

I spent much of my career at Apple and Microsoft as a program manager working on operating system releases.  Three of the best I worked with are:

Sheila Brady - Project Leader for System 7

Dennis Adler - Group Program Manager for Windows 95


While a director for MSR, Mr. Adler led a team from Research to the Windows Server Division and oversaw the initial development of a new server deployment and management technology that shipped with Windows Server 2003; formed the University Relations Group in MSR and oversaw its rapid growth and initial expansion into Europe and Asia; was instrumental in MSR's initial external public relations endeavors; and was a liaison between MSR and Microsoft's product teams, as well as a Group Program Manager in the Personal Systems Division.  Mr. Adler led the team responsible for designing and overseeing the development of the core components for Windows 95.

John Medica - Project Leader for Macintosh II

John K. Medica retired as Senior Vice President and Co-Leader, Product Group from Dell Inc. in April 2007. In 1993, Mr. Medica joined Dell as Vice President, Portable Systems. During 1996, he served as President and Chief Operating Officer of Dell's Japan division. He returned to the U.S. in August 1997 as Vice President, Procurement, and later served as Vice President, Web Products Group, and Vice President and General Manager, Transactional Product Group. Prior to joining Dell, he served as Project Leader for the Macintosh II, Director of the Macintosh CPU Projects Group and Senior Director of PowerBook Engineering with Apple Computer. Mr. Medica received his bachelor's degree in Electrical Engineering from Manhattan College, and his master's degree in Business Administration from Wake Forest University. Mr. Medica is currently a director of Compal Electronics, Inc., a publicly traded company.

These are some of the top program managers I learned a lot from; they had the unique talent to manage complex projects with hundreds, if not thousands, of people, on projects that were market successes.  And I have been lucky to stay connected with these great program managers even after the products shipped.

Facebook Prineville Data Center Design and Site Selection Video

Jay Park, Facebook's Director of Data Center Design, presents at the Open Compute Project launch.

Jay explains three reasons why Prineville was chosen.

    1. Available power
    2. Network
    3. Climate suitable for running without chillers

He highlights the 480V/277V power system and the chiller-less cooling system.

In the above video is a 3D rendering of the data center showing airflow.  Note: these 3D renderings are not part of the Open Compute Project web site.
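Jay's 480V/277V point is about removing conversion stages: serving gear 277V directly (one phase leg of a 480V three-phase feed) skips the usual 480V-to-208V step-down transformer. As a rough illustration of why fewer stages matter (the per-stage efficiencies below are my own assumed numbers, not figures from the Open Compute spec), the end-to-end math looks like this:

```python
# Sketch of why removing a power conversion stage matters.
# All stage efficiencies below are illustrative assumptions,
# not Facebook's published figures.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Conventional path: 480V feed -> double-conversion UPS -> 208V PDU
# transformer -> server PSU (assumed efficiencies)
conventional = chain_efficiency([0.94, 0.97, 0.90])

# Facebook-style path: 480/277V delivered straight to a
# high-efficiency PSU (assumed efficiencies)
direct_277v = chain_efficiency([0.995, 0.945])

print(f"conventional path: {conventional:.1%}")  # ~82% of each watt reaches the server
print(f"277V direct path:  {direct_277v:.1%}")   # ~94% of each watt reaches the server
```

Under these assumed numbers, roughly a tenth of the facility's power stops being wasted as conversion heat, which is exactly the kind of gain Jay is pitching.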

Three examples of what Facebook doesn't include in Open Compute, such as its Fusion-io integration

There are many out there who believe Facebook's Open Compute Project published all the details on Facebook's new data center.  When I look at the Open Compute website, I see what is missing more than what is there.  So, let's list a few areas where Facebook doesn't share its data center information.

1)  ZDNET's Between the Lines reports on Fusion-io's plans for an IPO, and Facebook is its largest customer.  The Fusion-io products look like the following.

image

When you look at the Open Compute PCB, you don't see any Fusion-io products or any mention of solid-state memory or PCIe slots.  There is an external PCI Express connector, but no explanation of what connects to it.

Image of AMD motherboard

image

How are the Fusion-io products being used by Facebook?  I can't find any details on Open Compute regarding the use of Fusion-io; can you?

Why wouldn't this be shared?  Because knowing how Facebook uses solid-state memory in its servers is a competitive advantage.

2)  What is the percentage mix of server SKUs, and from which vendors?  Dell DCS was part of the launch event and is a supplier to Facebook.  HP is mentioned as well.  Supermicro and many others have sold servers to Facebook.  What percentage of the 150,000+ servers Facebook has are Open Compute Project versions?

image

3) Where are the drawings for the Electrical and Mechanical systems?

image

The Triplet Racks do have mechanical drawings.

image

Why wouldn't Facebook publish drawings for its electrical and mechanical systems?  Either they are too valuable and would give away too many secrets, and/or Facebook doesn't own the distribution rights from the data center engineering design firms.  We'll see if Facebook ever publishes its mechanical and power drawings.

Facebook's Open Compute has many believing Facebook released all the data center specs, but...what is reality?

I've been watching the Facebook Open Compute news and have had a bunch of people send me links.  From a PR perspective, Facebook did extremely well.  What is funny is how some, well many, almost all media think that the Open in Open Compute means Facebook shared everything in its Prineville data center.

The best trick: Facebook released all the specs for the data center

Note the word all.  When you go to http://opencompute.org/, I sure don't see all the specs for a data center.  Do you?  There are PDF documents.  The Data Centers section has drawings for the racks, but not for the electrical systems, mechanical systems, or battery cabinet.

One of my friends has been sending me various articles he finds, and one popped out.  For a good analysis of Facebook's openness and what they share, check out this marco.org post.  Here are nuggets that Marco captures.

Nothing about Facebook’s design is particularly revolutionary to casual industry observers (except the impressive PSU efficiency). The much more interesting question is why they released this. It’s only going to be useful to a very small number of firms for the foreseeable future, and even then, it’s not as if anyone who wants these server or rack designs can just place an order — they’re just designs.

...

On a large scale like this — not a small open-source project by good-willed individuals — “opening” something is almost always an effort to commoditize it, leveling the playing field as much as possible and marginalizing competitive advantages that others might have had.

...

Nobody “opens” the parts of their business that make them money, maintain barriers to competitive entry, or otherwise provide significant competitive advantages.

...

We can reasonably conclude from the Open Compute Project that Facebook isn’t trying to maintain a top-secret competitive advantage in hardware and datacenter design, and they’re not expecting anyone else to gain a meaningful, exclusive advantage by copying ideas from theirs and keeping the results secret.

Marco comes to the following conclusion.

My best guess is that this is primarily for recruiting engineering talent. There’s no shortage of engineers, but there’s always a shortage of great ones, especially in Silicon Valley. Google has been a talent vacuum for a long time since it’s so appealing for most engineers to work there.

One point I think Marco misses is the effect of Greenpeace and its pressure on Facebook to use renewable energy.  Much of Facebook's Open Compute effort talks about how energy efficient it is, and the Open Compute Project is Facebook's way of saying it is contributing to lower power use by the IT industry.
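On that energy point, the yardstick the industry uses is PUE (Power Usage Effectiveness): total facility power divided by IT equipment power, with 1.0 as the unreachable ideal. Facebook reported a PUE around 1.07 for Prineville at launch. A minimal sketch of the metric (the load figures below are made up for illustration, not Prineville's actual numbers):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 would mean every watt entering the building reaches IT gear;
    anything above 1.0 is overhead (cooling, power conversion, lighting).
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative loads only:
prineville_like = pue(1070.0, 1000.0)  # a Prineville-like 1.07
legacy_like = pue(2000.0, 1000.0)      # a typical legacy facility near 2.0

print(f"Prineville-like PUE: {prineville_like:.2f}")
print(f"Legacy-like PUE:     {legacy_like:.2f}")
```

At a 1,000 kW IT load, the gap between those two hypothetical facilities is roughly 930 kW of overhead, which is why chiller-less cooling and fewer power conversions show up so directly in the number.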