I couldn’t make it to DataCenterDynamics NY, but I have plenty of friends there, so I can get a virtual report.
Kevin Timmons gave the keynote and Rich Miller wrote up a nice entry.
Microsoft’s Timmons: ‘Challenge Everything’
March 3rd, 2010 : Rich Miller
The building blocks for Microsoft’s data center of the future can be assembled in four days, by one person. The two proof-of-concept data center containers, known as IT PACs (short for pre-assembled components), are built entirely from aluminum, and the first two units use residential garden hoses for their water hookups.
“Challenge everything you know about a traditional data center,” said Kevin Timmons, who heads Microsoft’s Global Foundation Services, in describing the company’s approach to building new data centers. “From the walls to the roof to where it needs to be built, challenge everything.”
So much of what is wrong with data centers, and what prevents them from being green, is that people do what they have done in the past. This includes the engineering firms and the customers who specify the data centers. You don’t hear customers saying, “bring me a data center design no one has done before.”
A low PUE (sub 1.2) is now a given for data center efficiency, but with cloud computing and mobile as the top drivers of data center growth, how quickly you can add capacity has become the higher-priority requirement for executive decision makers.
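For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a sub-1.2 target means less than 20% overhead for cooling, power distribution, and everything else. A minimal sketch, with illustrative numbers (not Microsoft's actual figures):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal = 1.0)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,150 kW in total to run a 1,000 kW IT load:
print(pue(1150.0, 1000.0))  # 1.15 -- under the sub-1.2 target
```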
Here is a video showing some of the concepts Microsoft has been willing to share.
And here is the blog post from Microsoft’s Daniel Costello.
Then we had to take these single lines and schematics and break them into logical modules for the components to reside in. This may seem easy but represents a shift in thinking from a building where, for instance, we would have a UPS room and associated equipment and switchgear manufactured by multiple vendors and put it physically in sometimes separate modules. The challenge became how to shift from a traditional construction mindset to the new, modularized manufacturing mindset.

Maintainability is a large part of reliability in a facility, and became a key differentiator between the four classes. Our A Class infrastructure, which is not concurrently maintainable and is on basically street power and unconditioned air, will require scheduled downtime for maintenance. The cost, efficiency, and time-to-market targets for A Class are very aggressive and a fraction of what the industry has come to see as normal today.

We realized that standardization and reuse of components from one class to the next was a key to improving cost and efficiency. Our premise was that the same kit of parts (or modules) should be usable from class to class. These modules (in this new mindset) can be added to other modules to transition within the data center from one class to the next.
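The "kit of parts" idea above can be sketched in code: each class is the class below it plus additional modules, so the same catalog is reused across all four classes. The module names and class upgrades below are my own hypothetical examples, not Microsoft's actual bill of materials:

```python
# Base A Class: street power and unconditioned air, per the description above.
BASE_MODULES = {"it_pac", "street_power_feed", "outside_air_cooling"}

# Hypothetical upgrade kits: each class stacks modules on the class below it.
CLASS_UPGRADES = {
    "B": {"ups_module"},
    "C": {"generator_module"},
    "D": {"redundant_distribution_module"},
}

def modules_for(clazz: str) -> set:
    """Return the full module set for a class by stacking upgrades on A."""
    order = "ABCD"
    modules = set(BASE_MODULES)
    for level in order[1 : order.index(clazz) + 1]:
        modules |= CLASS_UPGRADES[level]
    return modules
```

The design choice this illustrates is that moving a facility from one class to the next is an additive operation on shared modules rather than a redesign, which is what makes transitions "within the data center from one class to the next" possible.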
I would call this a Flexible Data Center System. This has been done in manufacturing in flexible manufacturing systems for decades and is just now coming to data center design.
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in the case of changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, which both contain numerous subcategories.
The advantages of this system are:
1. Faster, lower cost per unit
2. Greater labor productivity
3. Greater machine efficiency
4. Improved quality
5. Increased system reliability
6. Reduced parts inventories
7. Adaptability to CAD/CAM operations

With one disadvantage: cost to implement.
But in data centers, with enough people adopting the approach, the cost to implement can be lower than that of traditional data centers. And whereas the flexibility in manufacturing typically applies to the product being produced, here the flexibility concepts are being applied to the data center infrastructure itself.
What else is changing is the hardware that goes into these data centers, as Microsoft’s Dileep Bhandarkar discussed here.
IT departments are strapped for resources these days, and server rightsizing is something every team can do to stretch their budgets. The point of my presentations and the white paper our team is publishing today is two-fold:
1. To quantify some of the opportunities and potential pitfalls as you look for savings, and
2. To present best practices from our experiences at Microsoft, where the group I lead manages server purchases for the large production data centers behind Microsoft’s wide array of online, live and cloud services.