For those of you who want to know more about why Microsoft (MSFT) is using Containers, Mike Manos just posted a response to the ComputerWorld article, "6 reasons why Microsoft's container-based approach to data centers won't work."
Mike's post is titled "Stirring Anthills ... A response to the recent Computerworld Article."
Stirring anthills is an apt description of how fired up Mike is in responding to accusations that Microsoft is not listening to the industry.
I highly suggest you read his full post, but here are some nuggets.
When one inserts the stick of challenge and change into the anthill of conventional and dogmatic thinking they are bound to stir up a commotion.
That is exactly what I thought when I read the recent Computerworld article by Eric Lai on containers as a data center technology. The article found here, outlines six reasons why containers won't work and asks if Microsoft is listening. Personally, it was an intensely humorous article, albeit not really unexpected. My first response was "only six"? You only found six reasons why it won't work? Internally we thought of a whole lot more than that when the concept first appeared on our drawing boards.
My Research and Engineering team is challenged with vetting technologies for applicability, efficiency, flexibility, longevity, and perhaps most importantly -- fiscal viability. You see, as a business, we are not into investing in solutions that are going to have a net effect of adding cost for costs sake. Every idea is painstakingly researched, prototyped, and piloted. I can tell you one thing, the internal push-backs on the idea numbered much more than six and the biggest opponent (my team will tell you) was me!
Those who know me best know that I enjoy a good tussle and it probably has to do with growing up on the south side of Chicago. My team calls me ornery, I prefer "critical thought combatant." So I decided I would try and take on the "experts" and the points in the article myself with a small rebuttal posted here:
The economics of cost and use in containers (depending upon application, size, etc.) can yield savings as high as 20% over conventional data centers. These same metrics and savings have been discovered by others in the industry. The larger question is whether containers are a right fit for you. Some can answer yes, others no. After intensive research and investigation, the answer was yes for Microsoft.
However, I can say that regardless of the infrastructure technology the point made about thousands of machines going dark at one time could happen. Although our facilities have been designed around our "Fail Small Design" created by my Research and Engineering group, outages can always happen. As a result, and being a software company, we have been able to build our applications in such a way where the loss of server/compute capacity never takes the application completely offline. It's called application geo-diversity. Our applications live in and across our data center footprint. By putting redundancy in the applications, physical redundancy is not needed. This is an important point, and one that scares many "experts." Today, there is a huge need for experts who understand the interplay of electrical and mechanical systems. Folks who make a good living by driving Business Continuity and Disaster Recovery efforts at the infrastructure level. If your applications could survive whole facility outages would you invest in that kind of redundancy? If your applications were naturally geo-diversified would you need a specific DR/BCP Plan? Now not all of our properties are there yet, but you can rest assured we have achieved that across a majority of our footprint. This kind of thing is bound to make some people nervous. But fear not IT and DC warriors, these challenges are being tested and worked out in the cloud computing space, and it still has some time before it makes its way into the applications present in a traditional enterprise data center.
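Mike's application geo-diversity argument can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's actual routing logic: the data center names and the round-robin selection are invented for the example. The point it demonstrates is his claim that when redundancy lives in the application layer, an entire facility going dark reduces capacity without taking the application offline.

```python
# Hypothetical sketch of application-level geo-diversity: requests are
# served from any healthy data center, so losing one facility (even
# thousands of machines at once) degrades capacity rather than taking
# the application down. Names and structure are illustrative only.

DATACENTERS = {
    "us-west": {"healthy": True},
    "us-east": {"healthy": True},
    "europe":  {"healthy": True},
}

def route_request(request_id: int) -> str:
    """Pick a healthy data center, skipping any that are dark."""
    healthy = [name for name, dc in DATACENTERS.items() if dc["healthy"]]
    if not healthy:
        raise RuntimeError("all data centers offline")
    # Spread load across the surviving sites.
    return healthy[request_id % len(healthy)]

# An entire facility goes dark...
DATACENTERS["us-west"]["healthy"] = False
# ...but requests still land somewhere.
assert route_request(42) in ("us-east", "europe")
```

Under this model there is nothing for a facility-level DR plan to fail over, which is exactly the point Mike says makes some infrastructure "experts" nervous.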
As a result we don't need to put many of our applications and infrastructure on generator backup.
In my first address internally at Microsoft I put forth my own challenge to the team. In effect, I outlined how data centers were the factories of the 21st century and that like it or not we were all modern day equivalents of those who experienced the industrial revolution. Much like factories (bit factories I called them), our goal was to automate everything we do...in effect bring in the robots to continue the analogy. If the assembled team felt their value was in wrench turning they would have limited career growth within the group; if they up-leveled themselves and put an eye towards automating the tasks, their value would be compounded. In that time some people have left for precisely that reason. Deploying tens of thousands of machines per month is not sustainable with humans in the traditional way, both in the front of the house (servers, network gear, etc.) and the back of the house (facilities). It's a tough message but one I won't shy away from. I have one of the finest teams on the planet running our facilities. It's a fact: automation is key.
The main point that everyone seems to overlook is that the container is a scale unit for us, not a technology solution for incremental capacity or necessarily for providing capacity in remote regions. If I deploy 10 containers in a data center, and each container holds 2,000 servers, that's 20,000 servers. When those servers are end of life, I remove 10 containers and replace them with 10 more. Maybe those new models have 3,000 servers per container due to continuing energy efficiency gains. What's the alternative? How people-intensive do you think un-racking 20,000 servers, then racking 20,000 more, would be? Bottom line here is that containers are our scale unit, not an end technology solution.
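The scale-unit arithmetic in that paragraph is worth making explicit. The figures below are simply the examples from the post (10 containers, 2,000 then 3,000 servers each), not real deployment numbers:

```python
# Back-of-the-envelope math for containers as a scale unit, using the
# example figures from the post (illustrative, not actual capacity data).
containers = 10
servers_per_container = 2_000
total = containers * servers_per_container
assert total == 20_000  # one swap-out retires 20,000 servers at once

# A hardware refresh replaces whole containers; denser models raise
# capacity with the same number of scale units and no per-rack labor.
new_servers_per_container = 3_000
new_total = containers * new_servers_per_container
assert new_total == 30_000  # 50% more capacity from the same 10 swaps
```

The design choice is that the unit of deployment, refresh, and decommissioning is the container itself, so the labor cost of a refresh scales with the number of containers, not the number of servers.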
And Mike closes with:
I can assure you that outside of my metrics and reporting tool developers, I have absolutely no software developers working for me. I own IT and facilities operations. We understand the problems, we understand the physics, we understand quite a bit. Our staff has expertise with backgrounds as far ranging as running facilities on nuclear submarines to facilities systems for space going systems. We have more than a bit of expertise here. With regards to the comment that we are unable to maintain a staff that is competent, the folks responsible for managing the facility have had a zero percent attrition rate over the last four years. I would easily put my team up against anyone in the industry.
I get quite touchy when people start talking negatively about my team and their skill-sets, especially when they make blind assumptions. The fact of the matter is that due to the increasing visibility around data centers the IT and the Facilities sides of the house better start working together to solve the larger challenges in this space. I see it and hear it at every industry event. The us vs. them between IT and facilities; neither realizing that this approach spells doom for them both. It’s about time somebody challenged something in this industry. We have already seen that left to its own devices technological advancement in data centers has by and large stood still for the last two decades. As Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them."
Ultimately, containers are but the first step in a journey which we intend to shake the industry up with. If the thought process around containers scares you then, the innovations, technology advances and challenges currently in various states of thought, pilot and implementation will be downright terrifying. I guess in short, you should prepare for a vigorous stirring of the anthill.
Now if we can get someone to get Mike fired up like this once a month, we'll learn a lot more. Enjoy Mike's post.