Where will Microsoft's Green Path Lead? InfoWorld article

InfoWorld joins the press in discussing Microsoft's Green IT efforts.

Microsoft goes green

For months now, many of the hardware technology heavyweights -- HP, IBM, Sun, Dell, AMD, Cisco, and Intel, for example -- have gone to great lengths to highlight their green products, plans, and corporate visions. Other green IT benefits notwithstanding, this strategy makes sense from a marketing standpoint: By demonstrating and accentuating the greenness of their wares, they're better positioned to sell processors, PCs, servers, and other gear to customers hungry for leaner, greener operations.

Now the software behemoth Microsoft has stepped forward to loudly proclaim its support for the sustainable IT movement. The company's off to a good start: Beyond a list of notable green credentials already under its belt, Microsoft has a newly appointed chief environmental strategist in Rob Bernard, charged with guiding and promoting the evolution of the company's green agenda. The company's newly released server platform, Windows Server 2008, is garnering praise for its energy efficiency. The company is even starting to share, for free, its best practices for datacenter management.

By shining a green spotlight on itself, Microsoft is calling attention to the fact that it understands the role that it -- and the software and platform industry as a whole -- has to play in the complex green-tech ecosystem. What remains to be seen is how that seemingly newfound green religion will manifest itself in future product offerings, particularly, in my mind, with respect to the successor to Vista.

Green in Redmond
Microsoft CEO Steve Ballmer used the CeBIT stage not only to tout the energy-efficiency features built into Windows Server 2008 and other Microsoft products, but also to announce that the company would soon release a set of best practices for datacenter management, drawing on energy-saving strategies the company has adopted in its own operations. The first set of best practices, in fact, is already available for download.

A friend forwarded IBM's direct email response to Steve Ballmer's CeBIT presentation. Here is the IBM Green IT product spin.

IBM System p in conjunction with IBM Middleware provides a massively powerful, scalable solution that lowers Total Cost of Ownership with one of the major components of that savings being power consumption and cooling. Using System p, IBM clients have consolidated 50 or more individual servers onto just one System p server, reducing power consumption by over 90% and floor space requirements by 80%.

With System p, you get improved utilization of the hardware you purchase, and along with the benefits IBM Middleware enjoys on this system, you have an unbeatable combination. Less power, less cooling, more throughput with fewer materials and lower costs - this is the IBM definition of Green IT.

Finally, Ballmer did say that Microsoft still has a lot more work to do when it comes to driving green IT initiatives. We agree. But the answer is NOT moving your data centers to cooler climates or near hydroelectric plants. The answer is consuming less. And IBM Middleware on System p delivers more value with less resource.

So, IBM Green IT consultants are going to focus first on upgrading the hardware, and data center power efficiency (PUE) becomes a non-issue? Actually, it does make sense. The more inefficient your data center is, the more energy you will save when you install an energy-efficient IBM hardware upgrade. These IBM guys are brilliant. Keep energy costs high to justify hardware purchases.

Which brings up a great point I need to add to a future paper: when calculating an energy-efficiency project, run the numbers under several PUE assumptions for the data center, and don't assume the PUE is a static number. PUE is dynamic and changes over time.
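As an aside, here is a minimal sketch of that point, using made-up numbers for IT load, utility rate, and the PUE scenarios, showing how the same consolidation project yields very different facility-level savings depending on the PUE you assume:

    # Hypothetical example: facility-level savings from an IT-load reduction
    # under several PUE assumptions. PUE = total facility energy / IT energy.
    IT_LOAD_BEFORE_KW = 100.0   # assumed IT load before consolidation
    IT_LOAD_AFTER_KW = 40.0     # assumed IT load after consolidation
    HOURS_PER_YEAR = 8760
    COST_PER_KWH = 0.10         # assumed utility rate, USD

    for pue in (1.3, 1.6, 2.0, 2.5):    # try several PUE scenarios
        saved_kw = (IT_LOAD_BEFORE_KW - IT_LOAD_AFTER_KW) * pue
        saved_kwh = saved_kw * HOURS_PER_YEAR
        print(f"PUE {pue}: ~{saved_kwh:,.0f} kWh/yr "
              f"(~${saved_kwh * COST_PER_KWH:,.0f}/yr)")

The point is simply that the worse the PUE, the bigger the apparent savings from a hardware upgrade, and a PUE that improves or degrades over the life of the project changes the answer.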

Read more

Feng Shui and the Art of Data Centers, News.com

News.com's Michael Kanellos chooses an interesting term for data centers designed to flow with the environment: Feng Shui and the Art of Data Centers.

Large multinational companies are building data centers designed to flow with their environment. There's something you probably didn't expect to hear five years ago.

Microsoft, for instance, is building a data center in Ireland in which the server rooms and other facilities will be cooled with devices called air side economizers, which pipe outside air inside.

"It uses fresh air aggressively to keep your building cool," said Rob Bernard, Microsoft's chief environmental strategist, in a phone interview. "The ideal scenario is that if Ireland continues to develop wind power and hopefully wave power, you have the best of both worlds: you're minimizing kilowatt consumption, and the kilowatts you use are sustainable."

The company, he pointed out, also has a data center in Quincy, Wash., powered by hydroelectric dams. (We've got an earlier post with Bernard about Microsoft's plans to move into building automation and other green industries.)

Similarly, Google analyzes the availability of renewable power when it builds data centers. Centers built in Oregon and North Carolina are located near hydroelectric power. Google also has 1.6 megawatts' worth of solar power at its headquarters. Applied Materials, the Air Force, and Sharp own even larger solar arrays.

Read more

11th Best Practice hidden in Microsoft's 10 Best Practices

I've been staring at Microsoft's Best Practices for Energy Efficiency in Microsoft Data Center Operations, published as part of Steve Ballmer's announcement at CeBIT, thinking about how to use the document. I checked with some Microsoft friends, and there have been 640 downloads of the document from the press release site in one week. The number will go up significantly when the content gets hosted in a higher-traffic Microsoft area like TechNet. The early feedback has been good, and customers are asking for more.

For those of you who haven't downloaded the document, here are the 10 best practices:

  1. Engineer the data center for cost and energy efficiency.
  2. Optimize the design to assess multiple factors.
  3. Optimize provisioning for maximum efficiency and productivity.
  4. Monitor and control data center performance in real time.
  5. Make data center operational excellence part of organizational culture.
  6. Measure power usage effectiveness (PUE).
  7. Use temperature control and airflow distribution.
  8. Eliminate the mixing of hot and cold air.
  9. Use effective air-side or water-side economizers.
  10. Share and learn from industry partners.

The one thing I really liked about the list is the order. The order in which you run a Green Data Center project is the most important best practice and can be listed as the 11th Best Practice. You can fine-tune the order for your organization, but you get the idea: look at the big picture first, put in your green infrastructure, implement, and then learn more.

    • Big Picture
      • Engineer the data center for cost and energy efficiency.
      • Optimize the design to assess multiple factors.
      • Optimize provisioning for maximum efficiency and productivity.
    • Green Infrastructure
      • Monitor and control data center performance in real time.
      • Make data center operational excellence part of organizational culture.
      • Measure power usage effectiveness (PUE).
    • Implement
      • Use temperature control and airflow distribution.
      • Eliminate the mixing of hot and cold air.
      • Use effective air-side or water-side economizers.
    • Learn More
      • Share and learn from industry partners.

This is the first step anyone should take in greening their data center. Answer the question:

In what order will you implement Best Practices?

This will have the largest effect on the success or failure of the project.

Read more

Moore's Law applied to the Data Center, podcast of Microsoft's Mike Manos

Techhermit posted a blog entry pointing to an Uptime Institute podcast with Mike Manos, Microsoft's chief of data centers, discussing the idea of Moore's Law applied to the data center.

Can Moore's Law be applied to the data center? The late Jim Gray wrote on the topic of Moore's Law and made the following observations.

Beginning as a simple observation of trends in semiconductor device complexity, Moore's Law has become many things. It is an explanatory variable for the qualitative uniqueness of the semiconductor as a base technology. It is now recognized as a benchmark of progress for the entire semiconductor industry. And increasingly it is becoming a metaphor for technological progress on a broader scale. As to explaining the real "causes" of Moore's Law, this examination has just begun. For example, the hypothesis that semiconductor device users' expectations feed back and self-reinforce the attainment of Moore's Law (see Figure 1) is still far from being validated or disproved. There does appear to be support for this notion primarily in the software industry (e.g., "Wintel" de facto architecture). Further research, including survey research and additional interviews, is required to address this possible relationship.

What has been learned from this early investigation is the critical role that process innovations in general, and manufacturing equipment innovations in particular play in providing the technological capability to fabricate smaller and smaller semiconductor devices. The most notable of process innovations was the planar diffusion process in 1959 -- the origin of Moore's Law. Consistent with Thomas Kuhn's (1962) paradigm-shifting view of "scientific revolution," many have described the semiconductor era as a "microelectronics revolution." (Forester 1982, Braun and Macdonald 1982, Gilder 1989, Malone 1996, and others) Indeed, the broad applications and pervasive technological, economic, and social impacts that continue to come forth from "that astonishing microchip" (Economist 1996) seem almost endless. However, this phenomenon has also been aptly described by Bessant and Dickson (1982) as evolutionary, albeit at an exponential rate.

"In a definite technical sense there has been no revolution (save, perhaps, for the invention of the transistor in 1947) but rather a steady evolution since the first invention."

Moore's Law is one measure of the pace of this "steady evolution." Its regularity is daunting. The invention of the transistor, and to a lesser degree the integrated circuit a decade later, represented significant scientific and technological breakthroughs, and are both classic examples of the Schumpeterian view of "creative destruction" effects of innovation. This is evidenced by the literal creation of an entire new semiconductor industry at the expense of the large electronics firms that dominated the preceding vacuum tube technological era. This period of transition from old technology to new technology is characterized by instability, and factors that underpin very irregular performance. This would be considered a shift in the economic and technological paradigm (Dosi 1984, 1988) similar to Constant's (1980) account of the "Turbojet Revolution" where the invention of the turbojet, along with co-evolutionary developments including advancements in airframe design and materials, enabled significant performance improvements in air speed and altitude. The turbojet produced a whole new "jet engine" industry and helped redefine both military and commercial aircraft industries and their users (e.g., airlines). Following the early experimental years of the turbojet, these industries settled in on a new technological trajectory (Dosi 1984, 1988) toward the frontier of the "jet age."

Innovations within the boundary limits of this new frontier occurred at a rapid, but more regular rate. The role of accumulated knowledge -- both tacit and explicit (Freeman 1994) -- and standards (e.g., the role of the Proney brake as the benchmark for performance measurement and testing) are emphasized. Similarly, semiconductor development since the planar process has followed Klein's (1977) description of "fast history," but is more in line with Pavitt's (1986) application of "creative accumulation" (i.e., the new technology builds on the old). The "new" technology in this case is the accumulated incremental -- particularly process-oriented -- advancements indicative of the Moore's Law semiconductor "era." As for standards, indeed Moore's Law itself is used throughout the industry as the benchmark of progress, evidenced most strikingly by the kilo- to mega- to giga-bit density DRAM chips. Increasingly, regular advances in microprocessor performance measures such as MIPS (millions of instructions per second) and MHZ processing speeds follow -- and become part of -- Moore's Law.

Moore's Law can apply to the data center when you apply Jim's observations:

  • Process innovations play a critical role
  • Steady evolution
  • Role of accumulated knowledge

Is this what Mike Manos was trying to explain in his podcast?

Is this how Microsoft's Data Center Solutions group develops their data centers?

It will be interesting to see what Mike presents as a speaker at AFCOM and the Uptime Institute.

Read more

Microsoft's Data Center Solutions sponsors innovative Microsoft Research project - an always-on data collection and visualization system

Microsoft's Data Center Solutions (DCS) group sponsored a Microsoft Research project described in this article.

Monitoring the conditions: This sensor, a prototype developed by the Networked Embedded Computing group at Microsoft Research, is sensitive to heat and humidity. The group envisions using sensors like these to monitor servers in data centers, enabling significant energy savings. The sensors could also be used in homes to manage the energy use of appliances.
Credit: Microsoft Research

The sensors, says Feng Zhao, principal researcher and manager of the group, are sensitive to both heat and humidity. They're Web-enabled and can be networked and made compatible with Web services. Zhao says that he envisions the sensors, which are still in prototype form, as "a new kind of scientific instrument" that could be used in a variety of projects. In a data center, the idiosyncrasies of a building and individual servers can have a big effect on how the cooling system functions, and therefore on energy consumption. Cooling, Zhao notes, accounts for about half the energy used in data centers. (He believes that the sensors, which he says could sell for $5 to $10 apiece, could be used in homes as well as in data centers, where they could work in tandem with a Web-based energy-savings application.)

The connection between MSR and DCS is producing an always-on data collection and visualization system for data center operations. As Microsoft continues to develop this system, its potential use beyond the data center is huge, both for other industries and for additional energy savings.
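To make the idea concrete, here is a minimal sketch of what an always-on collection loop over Web-enabled heat and humidity sensors might look like; the endpoint URLs and JSON field names are hypothetical and are not Microsoft's actual interface:

    # Hypothetical always-on collection loop for networked heat/humidity sensors.
    # Endpoint URLs and JSON field names are invented for illustration only.
    import json
    import time
    import urllib.request

    SENSOR_URLS = [
        "http://10.0.0.11/reading",   # assumed rack-level sensor endpoints
        "http://10.0.0.12/reading",
    ]

    def poll_once(url):
        """Fetch one temperature/humidity reading from a Web-enabled sensor."""
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.loads(resp.read())
        return data["temperature_c"], data["humidity_pct"]

    while True:
        for url in SENSOR_URLS:
            try:
                temp_c, humidity = poll_once(url)
                # A real system would feed these readings into a visualization
                # and control layer; here we simply log them.
                print(f"{time.strftime('%H:%M:%S')} {url}: {temp_c} C, {humidity}%")
            except OSError as err:
                print(f"{url}: unreachable ({err})")
        time.sleep(60)   # sample once a minute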

I want to thank Jie Liu for showing me the early prototype. And now that there is a public article, I can write about how cool this technology is.

Read more