Why I Named My Company Green M3: What Is the Green M3 Method?

One of the interesting things I learned at Uptime Institute is that the timing is right to explain why I named my company Green M3. I've told the story of how I started the company in response to a WSJ article; next, let me tell you why Green M3.

My tag line is Monitoring, Metering, Managing...

But these three words starting with M were created after I figured out I wanted to call my company Green M3.

Nicholas Carr's post triggered my own post on containers.

UPDATE: There's more on the data center philosophy of the 'Soft Boys in this interview with Manos and one of his colleagues. On the attraction of containers: "One of the things we like about them is we can take a bunch of servers and look at the output of that box and look at the power it draws. At the end of the day, we can determine, 'What is the IT productivity of that unit? How many search queries were executed per box? How many emails sent or stored?' You can get into some really interesting metrics. A lot of people say you can't look at the productivity of a data center, but if you compartmentalize it - not as small as the server level, but at some chunk in between - you can measure productivity."
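To make that per-unit measurement concrete, here is a minimal sketch in Python. The power draw and query rate are made-up numbers for illustration, not figures from the interview.

    # Toy numbers only: productivity of one container-sized unit,
    # as described in the quote (work delivered per unit of power drawn).

    def productivity(work_units: float, power_kw: float) -> float:
        """Useful work per kW drawn by the unit being measured."""
        return work_units / power_kw

    container_power_kw = 250.0     # hypothetical power draw of one container
    queries_per_sec = 40_000.0     # hypothetical search queries it serves

    print(f"{productivity(queries_per_sec, container_power_kw):,.0f} queries/sec per kW")

The point is the denominator: the container, not the individual server, is the unit you meter.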

Now that people are talking about containers and thinking along the lines of evaluating performance per container, it is easier for me to explain Green M3. I tested this on a bunch of people, so let's put it down in writing.

In coming up with a method to evaluate green performance in the data center, I needed the right mindset for looking at the problem. Then I hit upon abstracting to what it is you actually want - computing - measured in a universal way. Processor technology, RAM, HD, network, and various I/O technologies confuse the issue and cloud the ability to evaluate things side by side. Computing is delivered in a box - 1U, 2U, 4U, rack - but side-by-side comparisons are still hard because solutions require multiple servers.

How do you evaluate the IT solution implementation in one technology vs. another?

Then I started to think of things as the space it takes to encompass the solution. Characterize the space in abstract terms - kW required, cooling required, network connectivity, compute power, and other TCO factors. Define the space: how big it is, what it does, and what it costs.

What is a universal unit of measure? The cubic meter. The M3 is meant to communicate cubic meter = meter x meter x meter, not Monitoring, Metering, and Managing. But that is too hard to put in a tag line.

A rack is 1.2 cubic meters.

A 40 foot container is 67.5 cubic meters.

A 20 foot container is 33 cubic meters.

This gave me the right framework, an architectural method to evaluate different implementations.

If you apply the Green M3 method, you can calculate the cubic meters of white floor space and compare it to the cubic meters required in a container, associating a cost and performance with each cubic meter. People tend to think in square meters/feet, but to be efficient you need to think about the metrics per cubic meter.
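As an illustration, here is a minimal sketch in Python of what a Green M3 comparison could look like. The 67.5 cubic meters for a 40 foot container comes from the list above; the white floor space volume and all cost, power, and compute figures are hypothetical placeholders, not real data.

    # A rough Green M3 comparison: normalize cost, power, and compute
    # output by the cubic meters each deployment occupies. Only the
    # container volume comes from the post; everything else is made up.

    deployments = {
        # name: (volume_m3, annual_cost_usd, power_kw, compute_units)
        "white floor space": (500.0, 900_000, 250.0, 4_500),
        "40 ft container":   (67.5,  700_000, 250.0, 4_500),
    }

    for name, (m3, cost, kw, compute) in deployments.items():
        print(f"{name}: {cost / m3:,.0f} $/m3, "
              f"{kw / m3:.2f} kW/m3, {compute / m3:.1f} compute units/m3")

Same compute output, same power, but very different cost and density per cubic meter - which is exactly the comparison the square-meter mindset hides.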

The good thing is I named my company back in Nov 2007, before containers became popular and Christian explained to Nicholas Carr a benefit of containers. Also, as background, part of what helped me understand this method was working as a distribution logistics engineer 20 years ago for HP and Apple, where I learned to think in terms of boxes, pallets, and shipping containers. I remember working at Apple when the Mac Plus was being introduced, and people were getting all excited about the Mac Plus vs. the Mac 512K. But in distribution I told people it is just another box: it is no bigger, and it doesn't weigh any more. It doesn't make any difference to us, other than that we'll ship more of them.

When you abstract a technology, how does it affect the cubic meter of data center computing?

I'll work more on documenting the Green M3 Method, but here is the start.


Microsoft Embrace, Extend, and Innovate the Data Center Container

DataCenterKnowledge reports on Mike Manos's statement that Microsoft is embracing data center containers.

The data center container revolution has officially arrived. And Microsoft's cloud computing initiative is driving it.

Microsoft will forego a traditional raised-floor environment in its new data center in Chicago, and will instead fill one floor of the huge facility with up to 220 shipping containers packed with servers, the company said today.

In contrast to other companies' concept demonstrations of data center containers, Microsoft follows its infamous "Embrace, Extend, and Innovate" strategy, made public with its focus on the Internet in 1994.

In order to build the necessary respect and win the mindshare of the Internet community, I recommend a recipe not unlike the one we’ve used with our TCP/IP efforts: embrace, extend, then innovate. Phase 1 (Embrace): all participants need to establish a solid understanding of the infostructure and the community - determine the needs and the trends of the user base. Only then can we effectively enable Microsoft system products to be great Internet systems. Phase 2 (Extend): establish relationships with the appropriate organizations and corporations with goals similar to ours. Offer well-integrated tools and services compatible with established and popular standards that have been developed in the Internet community. Phase 3 (Innovate): move into a leadership role with new Internet standards as appropriate, enable standard off-the-shelf titles with Internet awareness. Change the rules: Windows Microsoft Data Centers become the next-generation Internet tool of the future.


And the DataCenterKnowledge article continues:

Microsoft is embracing containers as the key to building scalable, energy-efficient cloud computing platforms. The company's bold move is an affirmation of the potential for containers to address the most pressing power, cooling and capacity utilization challenges facing data center operators. The Chicago facility is part of the company’s fleet of next-generation data centers being built to support its Live suite of "software plus services" online applications.

But the design of the Chicago data center will go beyond the optimizations seen in Microsoft’s new facilities in Quincy, Washington and San Antonio.

"The entire first floor of Chicago is going to be containers," Microsoft director of data center services Michael Manos said this morning in his keynote at Data Center World in Las Vegas. "This represents our first container data center. The containers are going to be dropped off and plugged into network cabling and power." The second floor of the immense facility will be a traditional raised-floor data center, Manos said.

"It's a bold step forward," said Manos. "We're trying to address scale with the cloud level services. We were trying to figure the best way to bring capacity online quickly."


Geeks attempt to save Power in the home

I heard about this project a while ago and thought it was naive at the time: giving end users the ability to monitor and set their own power usage numbers. I can tell these guys are not user interface developers.

An easy UI I would have done is a single button to turn the system on and off. The algorithms behind on and off should be adjusted unseen by the users. You test whether they are willing to keep the system on, and you watch for when they turn it off to override it. Expecting millions of people to play around with a dial to set energy savings is asking too much; users shouldn't have to guess which dial settings are right for them.

Keep in mind that when you want your users to save energy, they need a simple user interface. Do you want to save energy? Yes or no. Don't give them a bunch of choices that will confuse them and waste their time.
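Here is a minimal sketch of that one-button idea, assuming a hypothetical home energy controller: the user only sees on and off, the thresholds stay hidden, and overrides are counted so the hidden settings can be tuned later. None of this is the project's actual design.

    # Hypothetical one-button energy saver: the user sees only on/off;
    # the price threshold is internal and can be tuned behind the scenes.

    class EnergySaver:
        def __init__(self):
            self.enabled = True
            self._max_price = 0.15   # internal tunable, never shown to the user
            self.overrides = 0       # times the user turned the system off

        def toggle(self, on: bool) -> None:
            if not on:
                self.overrides += 1  # signal that the hidden settings need tuning
            self.enabled = on

        def should_shed_load(self, price_per_kwh: float) -> bool:
            return self.enabled and price_per_kwh > self._max_price

    saver = EnergySaver()
    print(saver.should_shed_load(0.22))  # True: price is high, system is on
    saver.toggle(False)                  # user override is recorded, not questioned
    print(saver.should_shed_load(0.22))  # False: the user said no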

The results were written up in the NY Times:

The homeowners could go to a Web site to set their ideal home temperature and how many degrees they were willing to have that temperature move above or below the target. They also indicated their level of tolerance for fluctuating electricity prices. In effect, the homeowners were asked to decide the trade-off they wanted to make between cost savings and comfort.

The households, it turned out, soon became active participants in managing the load on the utility grid and their own bills.

“I was astounded at times at the response we got from customers,” said Robert Pratt, a staff scientist at the Pacific Northwest National Laboratory and the program director for the demonstration project. “It shows that if you give people simple tools and an incentive, they will do this.”

And thank god, they finally figured out that the UI needed to be simpler.

After some testing with households, the scientists decided not to put a lot of numbers and constant pricing information in front of consumers. On the Web site, the consumers were presented with graphic icons to set and adjust.


Citrix Demonstrates Technique to Save Server Power

Citrix has released the PowerSmart Utility for Citrix Presentation Server, enabling idle servers to be turned off during periods of low demand.

I am going to contact the Citrix team to see what they have done to create logs recording successful and unsuccessful power-down events. This log could be a simple way to monitor the power on/off events. They have architected the solution to use one Presentation Server as the controller, making it the ideal place to monitor the power on/off events.

This same power on/off log would be great for Windows to monitor power management. This will be a long conversation with Microsoft, and a challenge to find the right people who would be willing to do this, but it is on my list of things to do.
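To show what I have in mind, here is a sketch of such a power on/off log. The record format is my own invention for illustration; it is not what Citrix or Windows actually writes.

    # Hypothetical power event log: one JSON line per power on/off attempt,
    # recording whether it succeeded. Not the Citrix or Windows format.

    import json
    from datetime import datetime, timezone

    def log_power_event(server: str, action: str, succeeded: bool,
                        path: str = "power_events.log") -> None:
        """Append one power on/off event as a JSON line."""
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "server": server,
            "action": action,        # "power_on" or "power_off"
            "succeeded": succeeded,
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_power_event("presentation-07", "power_off", True)
    log_power_event("presentation-09", "power_on", False)

A log this simple would already let you count successful and failed power events per server, which is the monitoring I'm after.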

Appended Jan 3, 2008.

I missed this area of the Citrix FAQ. Great thinking went into this v1 feature.

What troubleshooting support is included?

By default, some basic information such as when the tool decides to power on/off a server will be logged in the event log of the server running this tool. The servers being shutdown/power off will have the power events logged as usual. The debug tracing is flexible and can be configured to trace only the information you want and to where you want it. Various debugging tools are included to help testing individual components separately. Please see the user guide for more details.


Microsoft Blog - 2008 Green IT Predictions

Lewis Curtis at Microsoft has thrown out his 2008 predictions, and below are the highlights. Many of these same ideas have been discussed by Christian Belady, Microsoft's new power and cooling architect, who was hired from HP. I'll see if we can get Christian to make his own 2008 predictions.

Prediction One:

2008 is the year that more realize Green IT is not a passing fad in the industry. More will realize that Green IT is a permanent regulatory and operational reality in IT Architecture and Operations and it cannot be ignored. Regulations and oversight as well as public scrutiny will increase in 2008 (as well as poor metrics in power consumption and carbon footprint). We will see more laws and regulations, more audits, around the world.

Prediction Two:

Companies who only rely on performance per watt (ppw) justifications for capital expenditures will see their power consumption increase (you read it right).

ppw has been a mainstay for vendors to justify the new hardware and software they sell to IT organizations for the last thirty years.

The logic goes like this:

"your (server/SAN/network/database/operating system) can do more work with the same amount of power,  therefore, you will need fewer of them,  hence you can reduce your power bill"

Most vendors are still parading the ppw marketing plan as their green answer today.

So why doesn’t this argument work in the real world?    Answer: because it never factors in its impact on the velocity of demand as well as the impact of the environment which must now support it.

As technology capability increases, the velocity of people's demands on that technology increases even more. Therefore the demand for more server, storage, and network capability will increase. This, in turn, will increase the demand for power. And this does not even mention the cooling efficiency challenges of power-dense racks (which account for a substantial percentage of a datacenter's power budget).
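To see the rebound effect in numbers, here is a toy calculation with made-up figures: performance per watt doubles, but demand grows 2.5x, so total power still goes up.

    # Toy rebound-effect calculation, all numbers invented: even when
    # performance per watt doubles, total power rises if demand grows faster.

    old_ppw = 100.0        # work units per watt, old hardware
    new_ppw = 200.0        # work units per watt, new hardware (2x better)

    old_demand = 1_000_000.0       # work units needed before the upgrade
    new_demand = old_demand * 2.5  # hypothetical demand growth afterward

    print(old_demand / old_ppw)  # 10000.0 W before the upgrade
    print(new_demand / new_ppw)  # 12500.0 W after: more efficient, more power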
