Microsoft says Windows Server 2008 cuts Power Consumption by 10%; What happened to 20%?

InfoWorld writes that Microsoft released a study claiming Windows Server 2008 cuts power consumption by 10%.

With electricity prices continuing to skyrocket and processors getting ever hungrier for power, it was only a matter of time before Microsoft (NSDQ: MSFT) chimed in with claims that its latest software can cut energy bills. A Microsoft white paper released this week asserts that Windows Server 2008 can cut power consumption by 10% compared with Windows Server 2003 out of the box, and much more if running virtualized.

Microsoft compared power consumption between two installations on the same server, which had two dual-core processors and 4 GB of RAM: one running Windows Server 2003 R2 Enterprise x64 Edition with SP2 plus hot fixes, the other running Windows Server 2008 Enterprise Edition, with the hard drive reformatted in between.

The company found that Windows Server 2003 used as much as 10% more power while delivering only 80% of the maximum throughput of its successor. Microsoft attributes the improvements partly to power management features that Windows Server 2008 enables by default, such as automatically adjusting processor performance based on workload.
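
A quick back-of-the-envelope check (my arithmetic, not Microsoft's methodology): if Windows Server 2003 draws about 10% more power while delivering only about 80% of the throughput of Windows Server 2008, the newer OS comes out roughly 35-40% better in throughput per watt. Here is the calculation using normalized figures.

```python
# Back-of-the-envelope throughput-per-watt comparison using the figures
# quoted above. Values are normalized; this is not Microsoft's methodology.

ws2008_power = 1.00        # baseline power draw of Windows Server 2008
ws2008_throughput = 1.00   # baseline maximum throughput of Windows Server 2008

ws2003_power = 1.10        # "as much as 10% more power"
ws2003_throughput = 0.80   # "only ... 80% of the maximum throughput"

perf_per_watt_2008 = ws2008_throughput / ws2008_power
perf_per_watt_2003 = ws2003_throughput / ws2003_power

improvement = perf_per_watt_2008 / perf_per_watt_2003 - 1
print(f"Windows Server 2008 throughput-per-watt advantage: {improvement:.1%}")
# -> roughly 37.5%
```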

But what happened to the 20% reduction Bill Laing discussed with Mary Jo Foley?

We've done power management by default in Longhorn Server. And we think average machines will see maybe 20 percent reduction in power use. You kind of slow the clock down when it's not busy. And it's dynamic enough that you can literally slow the clock down across a disk I/O. If you've got nothing to do while you're doing a disk I/O, it actually drops the power use for that short period of time. It's not like sleeping [for] the laptop; this is really short, what they call P-state for processor state.
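
Here is a rough sketch of the effect Laing describes: dropping to a lower P-state while a request waits on disk I/O. The wattage and timing numbers are my own illustrative guesses, not Microsoft's measurements.

```python
# Illustrative model of the P-state behavior Laing describes: the processor
# steps down to a lower-power state while a request is blocked on disk I/O.
# All wattage and timing values are made-up illustrations.

CPU_BUSY_WATTS = 95.0    # package power at the highest P-state
WAIT_WATTS_HIGH = 60.0   # power if the clock stays high during the I/O wait
WAIT_WATTS_LOW = 25.0    # power after dropping to a low P-state for the wait

busy_ms = 2.0            # CPU time spent processing one request
io_wait_ms = 8.0         # time the request spends waiting on a disk read

def request_energy_mj(wait_watts: float) -> float:
    """Energy in millijoules for one request: power (W) * time (ms)."""
    return CPU_BUSY_WATTS * busy_ms + wait_watts * io_wait_ms

without_pm = request_energy_mj(WAIT_WATTS_HIGH)
with_pm = request_energy_mj(WAIT_WATTS_LOW)
print(f"Energy saved per request: {1 - with_pm / without_pm:.0%}")  # ~42%
```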

This continues to support one of my beliefs: power management needs to exist in a bigger picture than the server itself. What is needed is power management designed across systems. This is part of what the guys at Cassatt Software have done.

Read more

Bill Gates talks about Microsoft's Web Services in Big, Big Data Centers

Bill Gates' keynote transcript from TechEd has the following information about Microsoft's web services running on hundreds of thousands of servers today, and millions in the future.

Now to make these services reliable, to make it easy to call them, to provide the kind of security that you want, there are a lot of new developments that have had to take place, things like identity federation. Things like the protocols that you see in the WS* standards, and that we've made easy to get to through the Communication Framework libraries. We're taking everything we do at the server level, and saying that we will have a service that mirrors that exactly. The simplest one of those is to say, okay, I can run Exchange on premise, or I can connect up to it as a service. But even at the BizTalk level, we'll have BizTalk Services. For SQL, we'll have SQL Server Data Services, and so you can connect up, build the database. It will be hosted in our cloud with the big, big data center, and geo-distributed automatically. This is kind of fascinating because it's getting us to think about data centers at a scale that never existed before. Literally today we have, in our data center, many hundreds of thousands of servers, and in the future we'll have many millions of those servers.

When you think about the design of how you bring the power in, how you deal with the heating, what sort of sensors do you have, what kind of design do you want for the motherboard, you can be very radical, in fact, come up with some huge improvements as you design for this type of scale. And so going from a single server all the way up to this mega data center that Microsoft, and only a few others will have, it gives you an option to run pieces of your software at that level.

You'll have hybrids that will be very straightforward. If you want to use it just for an overload condition, or disaster recovery, but the software advances to make it so when you write your software you don't have to care where those things are located, those are already coming into play. So the services way of thinking about things is very important, and will cause a lot of change.

I think Mike Manos has been in a few BillG reviews. Bill mentions the data center issues: bringing power in, dealing with heat, sensors, and the freedom to be very radical.

Read more

Christian Belady asks, "Does Efficiency in the Data Center Give us What we need?"

In Mission Critical magazine, Christian Belady writes on the topic of efficiency improvements in the data center and their economic effects.

In economics, the scarcity of resources drives the cost of those resources, which results in more efficient use of those resources. When gasoline prices go up, consumers demand more efficient vehicles to offset those increasing costs. Conversely when fuel prices go down, the demand would go up substantially. This latter notion is called Jevons’ Paradox.

Similarly, the well-known phenomenon called Moore’s law has substantially reduced the cost of computation year over year, driving demand up. This cost reduction is completely driven by efficiency improvements in electronics over the past half-century, improvements unmatched by any other industry. This is the most important reason for the significant demand for increases in the number of servers in the data center and more importantly the increase in server power footprint worldwide. So the real question is how would the further efficiency improvements suggested by the Environmental Protection Agency (EPA) in its 2007 report to Congress affect these trends?

And in typical Christian fashion, he brings up interesting issues about the effects of energy efficiency.

Conclusion
While driving efficiency is clearly the right thing to do, industry and government agencies must consider a more holistic view on the cause and effect in the IT industry. It is important to understand what the drivers are to the perceived problem of IT power consumption. To date, this “problem” has been tackled as a technical problem and organizations such as the EPA, Green Grid, Climate Savers are doing the right thing to solve the technical problem.

There is no doubt that there are inefficiencies in the data center and in IT equipment, but perhaps it is exactly these inefficiencies that have curbed what the power growth could have been. In addition, it may be that IT is displacing other high-carbon emission industries. Drawing economists into the discussion alongside technologists could lead to a broad discussion of the larger issues and holistic solutions. The hope is that economists will be just as passionately involved in solving this problem as the technologist over the past couple of years.
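
To make the Jevons' Paradox argument concrete, here is a toy rebound model. It is my own simplification, not something from Christian's article, and the efficiency and elasticity numbers are purely illustrative: if efficiency gains make compute cheap enough and demand is elastic enough, total power consumption can still rise.

```python
# Toy rebound-effect (Jevons' Paradox) model; my own simplification of the
# argument, with purely illustrative numbers.

def fleet_power(baseline_mw: float, efficiency_gain: float,
                demand_elasticity: float) -> float:
    """Fleet power after an efficiency gain, with demand responding to cost.

    efficiency_gain: fractional work-per-watt improvement (0.4 = 40% better)
    demand_elasticity: % rise in demand per % drop in cost per unit of work
    """
    cost_drop = 1 - 1 / (1 + efficiency_gain)          # compute gets cheaper
    demand_growth = 1 + demand_elasticity * cost_drop  # demand responds
    return baseline_mw * demand_growth / (1 + efficiency_gain)

baseline = 100.0  # MW for an illustrative server fleet
print(fleet_power(baseline, 0.4, 0.5))  # inelastic demand: ~82 MW (power falls)
print(fleet_power(baseline, 0.4, 2.0))  # elastic demand:  ~112 MW (power rises)
```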

Read more

Microsoft's WorldWide Telescope in Beta, Next is Environmental Observatory

BusinessWeek has an article about the WorldWide Telescope just released.

For people who have gazed up at the night sky in wonder and wished they had someone there to identify what they were looking at, Microsoft's (MSFT) WorldWide Telescope (WWT) is coming to the rescue.

The service, which opened to the public on May 13, lets people explore the cosmos through any computer with an Internet connection. It combines about 12 terabytes of data, including 50 surveys and 1,000 high-resolution studies, with links to astronomy research on sites around the Web. It blends the data with regularly updated photos captured by high-powered telescopes on and off the Earth, including the Hubble Space Telescope, circling the planet 353 miles up, and the Cerro Tololo Observatory, 312 miles north of Santiago, Chile, in the foothills of the Andes. Put it all together, and the WWT knits together a spellbinding panorama of the night sky.

There are some similar services available now, including Google (GOOG) Sky from the search kingpin. But what sets WWT apart is how easy it is to navigate the service and dig into more information about planets, stars, and galaxies. Sweep your mouse sideways, and you're spinning across the galaxy. Move the mouse forward, and you hurtle into the picture. You can close in on Sombrero Galaxy or a black hole in Galaxy NGC 4261 and find yourself immersed in startling details and whirling brilliant hues.

And Softpedia posts on a new Environmental Observatory.

Microsoft is building a pioneering environmental observatory together with the European Environment Agency, which will act as the foundation for the Global Observatory for Environmental Change, planned by the EEA. On May 14, 2008, Microsoft announced that it had inked a five-year alliance with EEA, destined to build what the Redmond company referred to as a world-leading online portal, with the initial focus placed on Europe. The partnership is essentially set up as a way to make environmental information available to the general public with the Global Observatory for Environmental Change online-portal acting as the main source.

Professor Jacqueline McGlade, executive director of the EEA, revealed that the portal would enable people across the world, starting with 500 million Europeans in the initial stage, to take part in the process of improving the environment. For McGlade, the collaboration with the Redmond company is simply a guaranteed method to reach an audience as large as possible.

The WWT is a cool software-plus-services application. Combine this with a narrative capability in an Environmental Observatory, and people will be able to learn more about environmental impact.

To turn WWT into even more of an educational tool, Microsoft built a feature that allows people to pull together different images and create narrated stories that they can share with others. "People have always looked up to the night sky and made up stories," Wong says. "This is a way for them to share those stories and that knowledge."

The service also allows you to look at different approaches to studying the universe, whether by studying cosmic dust or microwaves. That provides people with a broader understanding of astronomy research. And folks can even sign up to get feeds from specific telescopes around the world or in space.

WWT is expected to add more features that Google Sky has now. For instance, researchers can add their own data to Google Sky and use application programming interfaces (APIs) to put models of their data on their own sites. That competition, says Goodman, will be good for both as well as for researchers and amateurs alike.

The competition between Google and Microsoft is going to create some of the best environmental tools, and data centers are probably going to be among the things studied for their environmental impact, given their use of power and water.

There is a bit of irony in the fact that Google and Microsoft will create the tools that allow others to measure the environmental impact of the data centers that host the applications.

Read more

Mike Manos Provides Reasons for Microsoft's use of Containers, writes Response to Computerworld Article

For those of you who want to know more about why Microsoft (MSFT) is using containers, Mike Manos just posted a response to the Computerworld article, "6 reasons why Microsoft's container-based approach to data centers won't work."

Mike's post is titled "Stirring Anthills ... A response to the recent Computerworld Article."

Stirring anthills is a good description of how fired up Mike is in his response to accusations that Microsoft is not listening to the industry.

Again, I highly suggest you read his post, and here are some nuggets.

 


When one inserts the stick of challenge and change into the anthill of conventional and dogmatic thinking they are bound to stir up a commotion.

That is exactly what I thought when I read the recent Computerworld article by Eric Lai on containers as a data center technology.  The article found here, outlines six reasons why containers won't work and asks if Microsoft is listening.   Personally, it was an intensely humorous article, albeit not really unexpected.  My first response was "only six"?  You only found six reasons why it won't work?  Internally we thought of a whole lot more than that when the concept first appeared on our drawing boards. 

My Research and Engineering team is challenged with vetting technologies for applicability, efficiency, flexibility, longevity, and perhaps most importantly -- fiscal viability.   You see, as a business, we are not into investing in solutions that are going to have a net effect of adding cost for costs sake.    Every idea is painstakingly researched, prototyped, and piloted.  I can tell you one thing, the internal push-backs on the idea numbered much more than six and the biggest opponent (my team will tell you) was me!

...

Those who know me best know that I enjoy a good tussle and it probably has to do with growing up on the south side of Chicago.  My team calls me ornery, I prefer "critical thought combatant."   So I decided I would try and take on the "experts" and the points in the article myself with a small rebuttal posted here:

...

The economics of cost and use in containers (depending upon application, size, etc.) can be as high as 20% over conventional data centers.   These same metrics and savings have been discovered by others in the industry.  The larger question is if containers are a right-fit for you.  Some can answer yes, others no. After intensive research and investigation, the answer was yes for Microsoft.

...

However, I can say that regardless of the infrastructure technology the point made about thousands of machines going dark at one time could happen.  Although our facilities have been designed around our "Fail Small Design" created by my Research and Engineering group, outages can always happen.  As a result, and being a software company, we have been able to build our applications in such a way where the loss of server/compute capacity never takes the application completely offline.  It's called application geo-diversity.  Our applications live in and across our data center footprint. By putting redundancy in the applications, physical redundancy is not needed.  This is an important point, and one that scares many "experts."   Today, there is a huge need for experts who understand the interplay of electrical and mechanical systems.  Folks who make a good living by driving Business Continuity and Disaster Recovery efforts at the infrastructure level.   If your applications could survive whole facility outages would you invest in that kind of redundancy?  If your applications were naturally geo-diversified would you need a specific DR/BCP Plan?   Now not all of our properties are there yet, but you can rest assured we have achieved that across a majority of our footprint.  This kind of thing is bound to make some people nervous.   But fear not IT and DC warriors, these challenges are being tested and worked out in the cloud computing space, and it still has some time before it makes its way into the applications present in a traditional enterprise data center.

As a result we don't need to put many of our applications and infrastructure on generator backup. 

...

In my first address internally at Microsoft I put forth my own challenge to the team.   In effect, I outlined how data centers were the factories of the 21st century and that like it or not we were all modern day equivalents of those who experienced the industrial revolution.  Much like factories (bit factories I called them), our goal was to automate everything we do...in effect bring in the robots to continue the analogy.  If the assembled team felt their value was in wrench turning they would have a limited career growth within the group, if they up-leveled themselves and put an eye towards automating the tasks their value would be compounded.  In that time some people have left for precisely that reason.   Deploying tens of thousands of machines per month is not sustainable to do with humans in the traditional way.  Both in the front of the house (servers,network gear, etc) and the back of the house (facilities).   It's a tough message but one I won't shy away from.  I have one of the finest teams on the planet in running our facilities.   It's a fact, automation is key. 

...

The main point that everyone seems to overlook is the container is a scale unit for us.  Not a technology solution for incremental capacity, or providing capacity necessarily in remote regions.   If I deploy 10 containers in a data center, and each container holds 2000 servers, that's 20,000 servers.  When those servers are end of life, I remove 10 containers and replace them with 10 more.   Maybe those new models have 3000 servers per container due to continuing energy efficiency gains.   What's the alternative?  How people intensive do you think un-racking 20000 servers would be followed by racking 20000 more?   Bottom line here is that containers are our scale unit, not an end technology solution.

And Mike closes with:

I can assure you that outside of my metrics and reporting tool developers, I have absolutely no software developers working for me.   I own IT and facilities operations.   We understand the problems, we understand the physics, we understand quite a bit. Our staff has expertise with backgrounds as far ranging as running facilities on nuclear submarines to facilities systems for space going systems.  We have more than a bit of expertise here. With regards to the comment that we are unable to maintain a staff that is competent, the folks responsible for managing the facility have had a zero percent attrition rate over the last four years.  I would easily put my team up against anyone in the industry. 

I get quite touchy when people start talking negatively about my team and their skill-sets, especially when they make blind assumptions.  The fact of the matter is that due to the increasing visibility around data centers the IT and the Facilities sides of the house better start working together to solve the larger challenges in this space.  I see it and hear it at every industry event.  The us vs. them between IT and facilities; neither realizing that this approach spells doom for them both.  It’s about time somebody challenged something in this industry.  We have already seen that left to its own devices technological advancement in data centers has by and large stood still for the last two decades.  As Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them."

Ultimately, containers are but the first step in a journey which we intend to shake the industry up with.  If the thought process around containers scares you then, the innovations, technology advances and challenges currently in various states of thought, pilot and implementation will be downright terrifying.  I guess in short, you should prepare for a vigorous stirring of the anthill.

Now if we can get someone to get Mike fired up like this once a month, we'll learn a lot more.  Enjoy Mike's post.

Read more