CA about to launch EcoSoftware for Green IT

CNET News reports that CA will be releasing an EcoSoftware solution.

CA jumps into eco-software market

by Larry Dignan

CA next week will unveil an integrated sustainability suite designed to track carbon emissions, environmental assessments, metering, and policy compliance in one dashboard.

CA calls the suite EcoSoftware and will launch it Monday, according to Christopher Thomas, vice president of energy and sustainability. I ran into Thomas at the Gartner IT Symposium, where the carbon-monitoring software caught my eye.

There are other efforts designed to track carbon emissions. For instance, Hara and SAP have various applications and others use metering to measure sustainability efforts.

Read more of "CA jumps into eco software market; Plans to launch carbon tracking suite" at ZDNet's Between the Lines.

I have written in the past that it was natural for management tool vendors – Tivoli, OpenView, and CA – to add Green IT management, so this is no surprise.

We’ll get more details next week, as the launch is scheduled for October 26.

Read more

Gartner says companies must implement a Pattern-Based Strategy

In my day job, I help clients be innovative leaders, constantly looking for what it takes to be better than the rest. Gartner recently announced a new initiative called Pattern-Based Strategy.

It is a pleasant surprise to see nine Gartner analysts arrive at a recommendation I have been applying to IT infrastructure for over five years.

Introducing Pattern-Based Strategy

7 August 2009

Yvonne Genovese, Valentin T. Sribar, Stephen Prentice, Betsy Burton, Tom Austin, Nigel Rayner, Jamie Popkin, Michael Smith, David Newman

The environment after the recession means business leaders must be more proactive in seeking patterns from conventional and unconventional sources that can positively or negatively impact strategy or operations, and set up a consistent and repeatable response by adjusting business patterns.

One of the best groups I worked with at Microsoft, and one where I still have many friends, is the Patterns & Practices group. I still have regular discussions about how data centers and IT could and should be using a patterns-based approach.

You’ve probably guessed from the first half of our name that we’re rather enthusiastic about design patterns. Design patterns describe solutions to common issues that occur in application design and development. A large part of what we do involves identifying these common issues and figuring out solutions to them that can be used across different applications or scenarios. Once we have the patterns, we typically package them up in what we call an application block.

Software people have been some of the early adopters of patterns, but the history of patterns comes from Christopher Alexander, a building architect.

A pattern must explain why a particular situation causes problems, and why the proposed solution is considered a good one. Christopher Alexander describes common design problems as arising from "conflicting forces" -- such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern.

A pattern must also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time." The context must be documented within the pattern.

For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both part of the context for the pattern "A PLACE TO WAIT."

I’ve spent most of my career working on the Mac OS/hardware and Windows OS/hardware. The use of patterns seemed like a natural thing to do, but it is not intuitive for the people who deploy IT infrastructure. With Gartner’s Pattern-Based Strategy, my persuasion challenge is dramatically decreased.

So, what is good about Gartner’s Pattern-Based Strategy announcement? Their first two paragraphs identify the need well.

Gartner Says Companies Must Implement a Pattern-Based Strategy™ to Increase Their Competitive Advantage

Analysts Discuss the Framework for Implementing a Pattern-Based Strategy During Gartner Symposium/ITxpo, October 18-22, in Orlando

STAMFORD, Conn., October 8, 2009 —

The economic environment rapidly emerging from the recession will force business leaders to look at their opportunities for growth, competitive differentiation, and cost controls in a new way. A Pattern-Based Strategy will help leaders harness and drive change, rather than simply react to it, according to Gartner, Inc.

A Pattern-Based Strategy provides a framework to proactively seek, model and adapt to leading indicators, often termed "weak" signals, that form patterns in the marketplace. Not only will leading organizations excel at identifying new patterns and exploiting them for competitive advantage, but their own innovation will create new patterns of change within the marketplace that will force others to react.

They identify the need for closed-loop feedback systems to measure the effectiveness of change.

A CONTINUOUS CYCLE: SEEK, MODEL AND ADAPT

Most business strategy approaches have long emphasized the need to seek better information and insights to inform strategic decisions and the need for scenario planning and robust organizational change management. Few have connected this activity directly to the execution of successful business outcomes. According to Gartner, successful organizations can achieve this by establishing the following disciplines and proactively using technology to enable each of these activities:

For the same reason I added modeling and social networking to the list of things I discuss and blog about, Gartner explains:

Modeling for pattern analysis — Once new patterns are detected or created, business and IT leaders must use collaborative processes, such as scenario planning, to discuss the potential significance, impact and timing of them on the organization's strategy and business operations. The purpose of modeling is to determine which patterns represent great potential or risk to the organization by qualifying and quantifying the impact.

"Successful organizations will focus their pattern-seeking activities on areas that are most important to their organization," said Ms. Genovese. "Using models to do scenario planning will be critical to fact-based decisions and the transparency of the result."

I have a black belt in Aikido, and one of the most important things I figured out about getting better is that you must develop the skills to change. Gartner adds this as well.

Adapting to capture the benefits — Identifying a pattern of change and qualifying the potential impact are meaningless without the ability to adapt and execute to a successful business outcome. Business and IT leaders must adapt strategy, operations and their people's behaviors decisively to capture the benefits of new patterns with a consistent and repeatable response that is focused on results.

Clients – I told you that taking a modeling-based approach to discover patterns, with real-time monitoring systems, will allow you to be ahead of the competition. And what better proof than Gartner now promoting the same ideas? :-)

Read more

Google Releases Q3 2009 PUE Numbers

Google just updated their PUE measurement page with Q3 2009 numbers.

Quarterly energy-weighted average PUE: 1.22
Trailing twelve-month (TTM) energy-weighted average PUE: 1.19
Individual facility minimum quarterly PUE: 1.15 (Data Center B)
Individual facility minimum TTM PUE*: 1.14 (Data Center B)
Individual facility maximum quarterly PUE: 1.33 (Data Center H)
Individual facility maximum TTM PUE*: 1.21 (Data Center A)

* Only facilities with at least twelve months of operation are eligible for Individual Facility TTM PUE reporting
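
A note on the "energy-weighted" qualifier in those figures: the fleet average weights each facility by the energy it consumes rather than averaging the per-facility ratios directly, so large facilities count for more. Here is a minimal sketch of the idea in Python; the facility energies are made up for illustration (Google does not publish per-facility numbers):

```python
# Energy-weighted average PUE across facilities. Rather than averaging the
# per-facility PUE ratios, sum the energies fleet-wide and take one ratio,
# which is equivalent to weighting each facility's PUE by its IT energy.
# All numbers below are hypothetical, for illustration only.
facilities = [
    # (name, total facility energy in MWh, IT equipment energy in MWh)
    ("Data Center B", 11_500, 10_000),
    ("Data Center H", 13_300, 10_000),
]

for name, total_energy, it_energy in facilities:
    print(f"{name}: quarterly PUE = {total_energy / it_energy:.2f}")

# Energy-weighted average: sum the energies first, then take one ratio.
fleet_total = sum(total for _, total, _ in facilities)
fleet_it = sum(it for _, _, it in facilities)
print(f"Energy-weighted average PUE = {fleet_total / fleet_it:.2f}")
```

The energy-weighted average (1.24 in this toy example) lands between the best and worst facility, pulled toward whichever consumes more energy.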

What is nice is that the Google guys have discussed their latest facility, Data Center J, even though it has only one data point. Data Centers G, H, and I are also noted as not yet being fully tuned.

[Image: Google's quarterly PUE report chart]
Notes:

We added one new facility, Data Center J, to our PUE report. Overall, our fleet QoQ results were as expected. The Q3 total quarterly energy-weighted average PUE of 1.22 was higher than the Q2 result of 1.20 due to expected seasonal effects. The trailing twelve-month energy-weighted average PUE remained constant at 1.19. YoY performance improved from facility tuning and continued application of best practices. The quarterly energy-weighted average PUE improved from 1.23 in Q3'08, and the TTM PUE improved from 1.21. New data centers G, H, I, and J reported elevated PUE results as we continue to tune operations to meet steady-state design targets.

The Google guys know they are going to get critiqued on the quality of their numbers, so they describe their measurement methodology and error analysis.

Measurement Methodology

The PUE of a data center is not a static value. Varying server and storage utilization, the fraction of design IT power actually in use, environmental conditions, and other variables strongly influence PUE. Thus, we use multiple on-line power meters in our data centers to characterize power consumption and PUE over time. These meters permit detailed power and energy metering of the cooling infrastructure and IT equipment separately, allowing for a very accurate PUE determination. Our facilities contain dozens or even hundreds of power meters to ensure that all of the power-consuming elements are accounted for in our PUE calculation, in accordance with the metric definition. Only the office space energy is excluded from our PUE calculations. Figure 3 shows a simplified power distribution schematic for our data centers.


Figure 3: Google Data Center Power Distribution Schematic

Equation for PUE for Our Data Centers

PUE = (EUS1 + EUS2 + ETX + EHV) / (EUS2 - ECRAC - EUPS - ELV + ENet1)

(Equation reconstructed from the term definitions below.)

  • EUS1 Energy consumption for type 1 unit substations feeding the cooling plant, lighting, and some network equipment
  • EUS2 Energy consumption for type 2 unit substations feeding servers, network, storage, and CRACs
  • ETX Medium and high voltage transformer losses
  • EHV High voltage cable losses
  • ELV Low voltage cable losses
  • ECRAC CRAC energy consumption
  • EUPS Energy loss at UPSes which feed servers, network, and storage equipment
  • ENet1 Network room energy fed from type 1 unit substations
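
To make the bookkeeping concrete, here is a minimal sketch of the calculation implied by the equation and term definitions above. All meter values are invented for illustration; only the grouping of terms follows Google's description:

```python
# Sketch of the PUE bookkeeping using the terms defined above.
# All energies are hypothetical values in MWh for one measurement period.
E_US1  = 1_000   # type 1 substations: cooling plant, lighting, some network gear
E_US2  = 11_000  # type 2 substations: servers, network, storage, and CRACs
E_TX   = 150     # medium/high voltage transformer losses
E_HV   = 50      # high voltage cable losses
E_LV   = 100     # low voltage cable losses
E_CRAC = 700     # CRAC energy consumption
E_UPS  = 300     # losses at UPSes feeding servers, network, and storage
E_Net1 = 120     # network room energy fed from type 1 unit substations

# Numerator: everything the facility consumes, including upstream losses.
total_facility_energy = E_US1 + E_US2 + E_TX + E_HV

# Denominator: IT equipment energy alone. Strip cooling (CRACs), UPS losses
# and low-voltage losses out of the US2 feed, then add back the network
# room load that happens to be fed from the type 1 substations.
it_equipment_energy = E_US2 - E_CRAC - E_UPS - E_LV + E_Net1

print(f"PUE = {total_facility_energy / it_equipment_energy:.2f}")  # ~1.22
```
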
Error Analysis

To ensure our PUE calculations are accurate, we performed an uncertainty analysis using the root sum of the squares (RSS) method.  Our uncertainty analysis shows that the overall uncertainty in the PUE calculations is less than 2% (99.7% confidence interval).  Our power meters are highly accurate (ANSI C12.20 0.2 compliant) so that measurement errors have a negligible impact on overall PUE uncertainty.  The contribution to the overall uncertainty for each term described above is outlined in the table below.

Term      Overall Contribution to Uncertainty
EUS1      4%
EUS2      9%
ETX       10%
ECRAC     70%
EUPS      <1%
EHV       2%
ELV       5%
ENet1     <1%
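
The RSS method mentioned above is simple to sketch: square each term's uncertainty, sum the squares, and take the square root; a term's contribution is its share of the total variance. The per-term input uncertainties below are invented, chosen only so the relative contributions roughly echo Google's table:

```python
from math import sqrt

# Root-sum-of-squares (RSS) combination of per-term uncertainties, as in
# Google's description. The absolute uncertainty assigned to each term is
# hypothetical; only the method itself follows the text above.
term_uncertainty = {
    "EUS1": 0.0024, "EUS2": 0.0036, "ETX": 0.0038, "ECRAC": 0.0100,
    "EUPS": 0.0010, "EHV": 0.0017, "ELV": 0.0027, "ENet1": 0.0010,
}

# Combined uncertainty: square each term's uncertainty, sum, take the root.
combined = sqrt(sum(u ** 2 for u in term_uncertainty.values()))
print(f"Combined uncertainty: {combined:.4f}")

# Each term's contribution is its share of the total variance, which is
# why one noisy term (ECRAC here) can dominate the overall uncertainty.
for term, u in term_uncertainty.items():
    print(f"{term}: {u ** 2 / combined ** 2:.0%} of total")
```
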

Read more

Data Center Myth – Thermal/Temperature Shock

Mike Manos has a post pointing out what he calls “data center junk science”: the data center thermal shock requirement.

Mike’s post got my curiosity up, and I spent time researching to build on it. This is my 956th post in less than two years, and people often think I have a journalism background. Well, fooled you: I am an Industrial Engineering and Operations Research graduate from Cal Berkeley. So, even though I write a lot, you are reading my notebook of things I discover and want to share with others. For those of you who don’t know what industrial engineers do:

Industrial engineering is a branch of engineering concerned with the development, improvement, implementation and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material and process. It also deals with designing new prototypes to help save money and make the prototype better. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical and social sciences together with the principles and methods of engineering analysis and design to specify, predict and evaluate the results to be obtained from such systems. In lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.

This background all helps me think of how to green the data center.

And Operations Research helps me think about the technical methods and software to do this.

Operations research is an interdisciplinary branch of applied mathematics that uses methods such as mathematical modeling, statistics, and algorithms to arrive at optimal or near-optimal solutions to complex problems. It is typically concerned with determining the maxima (of profit, assembly line performance, crop yield, bandwidth, etc.) or minima (of loss, risk, etc.) of some objective function. Operations research helps management achieve its goals using scientific methods.

Mike’s post got me thinking because one of my summer internships was at HP, where I worked as a reliability/quality engineer figuring out how to build better quality HP products. The team I worked on was an early innovator in thermal cycling and stressing components back in the early 1980s.

Data Center Junk Science: Thermal Shock \ Cooling Shock

October 1, 2009 by mmanos

I recently performed an interesting exercise where I reviewed typical co-location/hosting/ data center contracts from a variety of firms around the world.    If you ever have a few long plane rides to take and would like an incredible amount of boring legalese documents to review, I still wouldn’t recommend it.  :)

I did learn quite a bit from going through the exercise, but there was one condition that I came across more than a few times.  It is one of those things that I put into my personal category of Data Center Junk Science.  I have a bunch of these things filed away in my brain, but this one is something that not only raises my stupidity meter from a technological perspective, it makes me wonder if those that require it have masochistic tendencies.

I am of course referring to a clause for Data Center Thermal Shock and, as I discovered, its evil, lesser-known counterpart “Cooling” Shock.  For those of you who have not encountered this before, it’s a provision between hosting customer and hosting provider (most often required by the customer) that usually looks something like this:

If the ambient temperature in the data center rises 3 degrees over the course of 10 (sometimes 12, sometimes 15) minutes, the hosting provider will need to remunerate (reimburse) the customer for thermal shock damages experienced by the computer and electronics equipment. The damages range from flat-fee penalties to graduated penalties based on the value of the equipment.

As Mike notes, there is also the issue of duration.

Which brings up the next component, which is duration.  Whether you are speaking of 10-minute or 15-minute intervals, these are nice long leisurely periods of time which could hardly cause a “shock” to equipment.  Also keep in mind the previous point, which is that the environment has not even violated the ASHRAE temperature range.  In addition, I would encourage people to actually read the allowed and tested temperatures the manufacturers recommend for server operation.  A 3-5 degree swing in temperature would rarely push a server into an operating temperature range that would violate the range the server has been rated to work in or, worse, void the warranty.

Here is the military specification typically used by vendors, MIL-STD-810G, to define temperature/thermal shock.

MIL-STD-810G
METHOD 503.5
TEMPERATURE SHOCK

1. SCOPE.

1.1 Purpose.

Use the temperature shock test to determine if materiel can withstand sudden changes in the temperature of the surrounding atmosphere without experiencing physical damage or deterioration in performance. For the purpose of this document, "sudden changes" is defined as "an air temperature change greater than 10°C (18°F) within one minute."

1.2 Application.

1.2.1 Normal environment.

Use this method when the requirements documents specify the materiel is likely to be deployed where it may experience sudden changes of air temperature. This method is intended to evaluate the effects of sudden temperature changes of the outer surfaces of materiel, items mounted on the outer surfaces, or internal items situated near the external surfaces. This method is, essentially, a surface-level test. Typically, this addresses:

a. The transfer of materiel between climate-controlled environment areas and extreme external ambient conditions or vice versa, e.g., between an air-conditioned enclosure and desert high temperatures, or from a heated enclosure in the cold regions to outside cold temperatures.

b. Ascent from a high temperature ground environment to high altitude via a high performance vehicle (hot to cold only).

c. Air delivery/air drop at high altitude/low temperature from aircraft enclosures when only the external material (packaging or materiel surface) is to be tested.

As Mike says, the surprising part is that the thermal shock requirement is coming from technical people, most likely ones with military backgrounds.

Even more surprising to me was that these were typically folks on the technical side of the house more than the lawyers or business people.  I mean, these are the folks that should be more in tune with logic than, say, business or legal people who can get bogged down in the letter of the law or dogmatic adherence to how things have been done.  Right?  I guess not.

I can’t imagine any business person or attorney thinking a thermal shock is a 3-degree change in 15 minutes. If there were an attorney involved, they would go to the MIL-STD-810G definition of temperature shock: a change greater than 10°C (18°F) within one minute.

So where does this myth come from?  Most likely there is a social network effect of people who consider themselves smarter than others and have added thermal shock to the requirements.  One of the comments on Mike’s blog documents the possible social network source.

Dave Kelley, Liebert Precision Cooling

The only place where something like this is “documented” in any way is in the ASHRAE Thermal Guidelines book. Since the group that wrote this book included all of the major server vendors, it must have been created with some type of justifiable reason. It states that the “maximum rate of temperature change is 5 degrees C (9 degrees F) per hour.”
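
To put the three figures in this post on a common scale, here is a quick sketch normalizing them to degrees per minute. The source numbers are the ones quoted above; the unit conversion and the assumption that the contract clause means degrees Fahrenheit are mine:

```python
# Normalize the three temperature-change figures in this post to a common
# unit (degrees F per minute). The contract clause does not say F or C;
# degrees F is assumed here, which is the charitable reading.
rates_f_per_min = {
    "contract 'thermal shock' clause (3 F / 10 min)": 3 / 10,
    "ASHRAE max rate of change (9 F / hour)": 9 / 60,
    "MIL-STD-810G temperature shock (18 F / min)": 18 / 1,
}

for label, rate in rates_f_per_min.items():
    print(f"{label}: {rate:.2f} F/min")

# The MIL-STD definition of shock is 60x faster than the rate the
# contract clause calls "thermal shock".
shock = rates_f_per_min["MIL-STD-810G temperature shock (18 F / min)"]
clause = rates_f_per_min["contract 'thermal shock' clause (3 F / 10 min)"]
print(f"MIL-STD shock threshold / contract clause rate = {shock / clause:.0f}x")
```

Even read charitably, the clause triggers at a rate 60 times slower than what MIL-STD-810G calls a shock, and slower than twice ASHRAE's recommended maximum rate of change.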

And as Mike closes, this has unintended consequences.

But this brings up another important point.  Many facilities might experience a chiller failure, or a CRAH failure, or some other event which might temporarily have this effect within the facility.  Let’s say it happens twice in one year that you would potentially trigger this event for the whole or a portion of your facility (you’re probably not doing preventative maintenance – bad you!).  So the contract language around thermal shock now claims monetary damages.  Based on what?  How are these sums defined?  The contracts I read through had some wild oscillations on damages with different means of calculation, and a whole lot more.  So what is the basis of this damage assessment?  Again, there are no studies that say each event takes off .005 minutes of a server’s overall life, or anything like that.  So the cost calculations are completely arbitrary and negotiated between provider and customer.

This is where the true foolishness then comes in.   The providers know that these events, while rare, might happen occasionally.   While the event may be within all other service level agreements, they still might have to award damages.   So what might they do in response?   They increase the costs of course to potentially cover their risk.   It might be in the form of cost per kw, or cost per square foot, and it might even be pretty small or minimal compared to your overall costs.  But in the end, the customer ends up paying more for something that might not happen, and if it does there is no concrete proof it has any real impact on the life of the server or equipment, and really only salves the whim of someone who really failed to do their homework.  If it never happens the hosting provider is happy to take the additional money.

Temperature/thermal shock is a term that doesn’t apply to data centers. Hopefully you’ll now know to call temperature/thermal shock requirements in data center operations what they are: a myth.

Thanks Mike for taking the time to write on this.

Read more

Dark Side of Smart Grid – Privacy and Exponential Data Growth

MSNBC has a post on an impact of power meters that most don’t talk about: the fact that power meters provide data on what you do in your house, and all this data is going to mean petabytes of storage.

What will talking power meters say about you?

Posted: Friday, October 9 2009 at 05:00 am CT by Bob Sullivan

Would you sign up for a discount with your power company in exchange for surrendering control of your thermostat? What if it means that, one day, your auto insurance company will know that you regularly arrive home on weekends at 2:15 a.m., just after the bars close?

Welcome to the complex world of the Smart Grid, which may very well pit environmental concerns against thorny privacy issues. If you think such debates are purely philosophical, you’re behind the times.

The government is bringing up privacy concerns.

Pepco’s discount plan is among the first signs that the futuristic “Smart Grid” has already arrived. Up to three-fourths of the homes in the United States are expected to be placed on the “Smart Grid” in the next decade, collecting and storing data on the habits of their residents by the petabyte. And while there’s no reason to believe Pepco or other utilities will share the data with outside firms, some experts are already asking the question: Will saving the planet mean inviting Big Brother into the home? Or at least, as Commerce Secretary Gary Locke recently warned, will privacy concerns be the “Achilles’ heel” of the Smart Grid?

The dark side of what could be is discussed.

Dark side of a bright idea
But others see a darker side. Utility companies, by gathering hundreds of billions of data points about us, could reconstruct much of our daily lives -- when we wake up, when we go home, when we go on vacation, perhaps even when we draw a hot bath. They might sell this information to marketing companies -- perhaps a travel agency will send brochures right when the family vacation is about to arrive. Law enforcement officials might use this information against us ("Where were you last night? Home watching TV? That's not what the power company says … ”). Divorce lawyers could subpoena the data ("You say you're a good parent, but your children are forced to sleep in 61-degree rooms. For shame ..."). A credit bureau or insurance company could penalize you because your energy use patterns are similar to those of other troublesome consumers. Or criminals could spy the data, then plan home burglaries with fine-tuned accuracy.

How big is the data growth?

According to a recent discussion by experts at Smart Grid Security, here’s a quick explanation of the sudden explosion in data. In the United Kingdom, for example, 44 million homes had been creating 88 million data entries per year. Under a new two-way, smart system, new meters would create 32 billion data entries. Pacific Gas & Electric of California says it plans to collect 170 megabytes of data per smart meter, per year. And if about 100 million meters are installed as expected in the United States by 2019, 100 petabytes (a million gigabytes) of data could be generated during the next 10 years.
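
The arithmetic behind those figures is worth a quick sanity check. Here is a rough sketch using the numbers in the excerpt; the linear deployment ramp is my assumption, not the article’s:

```python
# Back-of-envelope check on the smart meter data volumes quoted above.
MB = 10 ** 6   # bytes
PB = 10 ** 15  # bytes

meters_by_2019 = 100_000_000     # ~100 million US meters expected by 2019
bytes_per_meter_year = 170 * MB  # PG&E's figure: 170 MB per meter per year

# A fully deployed fleet generates roughly 17 PB per year.
per_year = meters_by_2019 * bytes_per_meter_year
print(f"Fully deployed: {per_year / PB:.0f} PB/year")

# Meters roll out gradually. Assuming (my assumption) a linear ramp from
# zero to 100M meters over ten years, the fleet accumulates about half
# the meter-years of full deployment, on the order of the article's 100 PB.
ten_year_total = 0.5 * 10 * per_year
print(f"Ten-year total with linear ramp: {ten_year_total / PB:.0f} PB")
```
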

And you can expect storage vendors and enterprise consultants to encourage the mindset of leveraging the data.

"Once a company monetizes data it collects, even if the amount is small, it is very reluctant to give it up," he said. Many companies he audits have robust privacy policies but end up using information in ways that frustrate or cost consumers, he said. "They talk a good game, but I'm sure (utility companies) will find ways to use the data, and not necessarily to benefit people but to harm people."

This complexity of privacy and data storage is part of why I have not focused much effort on the consumer smart meter market.  Applying the concepts of smart metering to data centers and commercial buildings has the potential to be adopted much more quickly, and it supports energy efficiency and the green data center.

Read more