Power Monitoring Equipment Choice

For quite a while now, I have been stumped about what kind of equipment to get to measure power consumption. For consumers, I have a Kill-a-watt, which is fine for curiosity at home, and maybe a Watts Up Pro for a USB device. Extech and Fluke make other equipment people have used, but nobody gives any of it a strong endorsement.

Keeping my fingers crossed, I think I have found a device that would work well in a performance lab environment. Dan Diesso of Smart Works and I sat down to go over his device and how it works.

Smart-Watt keeps a cumulative total of the energy consumed in 0.1 watt-hour increments and retains that reading even in the event of a power failure.  Smart-Watt conforms to ANSI standards for watt-hour meters, so you are assured of the same high accuracy standards that public utilities require for their meters.  To make reading the meter easy, each Smart-Watt is equipped with a unique internal ID and network connection for automated reading.  You can daisy-chain multiple Smart-Watt units together and connect them directly to a PC COM port for automated reading, or use our Smart-Net Gateway, which provides an IP interface to read a group of Smart-Watt units across your local area network or remotely over the internet.
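The Smart-Watt wire protocol isn't documented in this post, so here is only a rough sketch of what automated reading of daisy-chained meters by unique ID might look like. The `MockBus` class and the meter IDs are invented stand-ins for the real COM-port transport; the one detail taken from the description above is that the cumulative count is kept in 0.1 watt-hour increments.

```python
class MockBus:
    """Stands in for a serial COM port shared by daisy-chained meters."""

    def __init__(self, readings):
        # readings: meter_id -> cumulative count in tenths of a watt-hour
        self._readings = readings

    def query(self, meter_id):
        # A real implementation would write a framed request on the
        # serial line and parse the meter's reply; we just look it up.
        return self._readings[meter_id]


def read_meters(bus, meter_ids):
    """Return cumulative energy in watt-hours, keyed by meter ID.

    Counts are kept in 0.1 Wh increments, so divide by 10.
    """
    return {mid: bus.query(mid) / 10.0 for mid in meter_ids}


bus = MockBus({"SW-001": 12345, "SW-002": 67890})
print(read_meters(bus, ["SW-001", "SW-002"]))
# {'SW-001': 1234.5, 'SW-002': 6789.0}
```

Because each meter keeps a cumulative total that survives power failures, a polling loop like this only needs to subtract successive readings to get energy consumed per interval.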

After I've had a chance to use the unit, I'll report on how well it works and whether it meets the needs of a performance lab.  The accuracy, unique ID, and network connection are what differentiate Smart-Watt from the other equipment I've seen.


Carbon Emissions Monitoring

IBM has announced a Carbon Emissions Monitoring program and another interesting Carbon Monitoring site is http://carma.org/.  It would be interesting to hear what the folks at CARMA think about IBM's announcement.

IBM to count carbon emissions for cash

IBM has partnered with two other companies to build an application that they say can accurately measure corporate efforts to reduce greenhouse gas emissions.

The software, called GreenCert, is built on IBM's infrastructure software and tools from C-Lock Technology, which can accurately measure reductions in greenhouse gases including carbon dioxide. The companies are expected to detail the application on Wednesday.

Many companies are undergoing initiatives to reduce their carbon emissions, as part of corporate social responsibility or environmental programs.

Having a method to measure and certify those reductions is significant because it will allow those companies to sell those carbon offsets, according to IBM. The application is part of IBM's Big Green Innovations initiative to develop clean technologies.

And another interesting site is Carbon Monitoring for Action.

At its core, Carbon Monitoring for Action (CARMA) is a massive database containing information on the carbon emissions of over 50,000 power plants and 4,000 power companies worldwide. Power generation accounts for 40% of all carbon emissions in the United States and about one-quarter of global emissions. CARMA is the first global inventory of a major, emissions-producing sector of the economy.

CARMA is produced and financed by the Confronting Climate Change Initiative at the Center for Global Development, an independent and non-partisan think tank located in Washington, DC.

The objective of CARMA.org is to equip individuals with the information they need to forge a cleaner, low-carbon future. By providing complete information for both clean and dirty power producers, CARMA hopes to influence the opinions and decisions of consumers, investors, shareholders, managers, workers, activists, and policymakers. CARMA builds on experience with public information disclosure techniques that have proven successful in reducing traditional pollutants.

For several thousand power plants within the U.S., CARMA relies upon data reported to the Environmental Protection Agency by the plant operators themselves as required by the Clean Air Act. CARMA also includes many official emissions reports for plants in Canada, the European Union, and India. For non-reporting plants, CARMA estimates emissions using a statistical model that has been fitted to data for thousands of reporting plants in the U.S., Canada, the EU, and India. The model utilizes detailed data on plant-level engineering and fuel specifications. CARMA reports emissions for the year 2000, the current year, and the future (based on published plans).
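CARMA's actual model is far more detailed, drawing on plant-level engineering and fuel specifications, but the general fit-then-predict idea can be sketched simply: fit an emissions intensity to plants that report, then apply it to plants that don't. The numbers below are invented for illustration.

```python
def fit_intensity(reported):
    """Least-squares emissions intensity (tons CO2 per MWh), fitted
    through the origin to (generation_mwh, emissions_tons) pairs
    from reporting plants."""
    num = sum(g * e for g, e in reported)
    den = sum(g * g for g, _ in reported)
    return num / den


def estimate(generation_mwh, intensity):
    """Estimated annual emissions for a non-reporting plant."""
    return generation_mwh * intensity


# Invented (generation, emissions) data for three reporting plants:
reporting = [(1000.0, 950.0), (2000.0, 1900.0), (500.0, 480.0)]
k = fit_intensity(reporting)
print(round(estimate(1500.0, k), 1))  # estimate for a non-reporting plant
```

A real model would regress on many predictors (fuel type, plant age, capacity factor), but the structure — fit on reporters, predict for non-reporters — is the same.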

Constructing a carbon emissions database of 50,000 power plants in every country on Earth takes a team effort. The methodology and data behind CARMA were developed over many months as part of the Confronting Climate Change Initiative at the Center for Global Development in Washington, DC.

The primary architects of the CARMA database are David Wheeler and Kevin Ummel — with assistance provided by a dedicated team at the Center for Global Development. CARMA.org was created by the talented professionals at Forum One Communications, a web strategy and development firm that helps not-for-profit, foundation, government, and commercial organizations make an impact on important social issues.

CARMA Team Photo


Changing the role of the Performance Lab to measure Power and Cooling

Two posts, Google's The Case for Energy-Proportional Computing

Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems.
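A common simplification of the paper's point: server power is roughly an idle floor plus a utilization-proportional term, so useful work per watt collapses at the low utilizations where most servers actually run. The numbers below (a server idling at ~50% of peak power) are typical illustrative figures, not from the paper.

```python
def power_watts(utilization, idle_watts, peak_watts):
    """Linear power model: idle floor plus load-proportional draw."""
    return idle_watts + (peak_watts - idle_watts) * utilization


def efficiency(utilization, idle_watts, peak_watts):
    """Work delivered per watt, normalized so a perfectly
    energy-proportional server (zero idle power) scores 1.0 at
    any utilization."""
    p = power_watts(utilization, idle_watts, peak_watts)
    return (utilization * peak_watts) / p


# A server that idles at half its 500 W peak draw:
for u in (0.1, 0.3, 1.0):
    print(u, round(efficiency(u, idle_watts=250, peak_watts=500), 2))
```

At 10% utilization this server delivers only about 18% of its peak-rate efficiency, which is why the authors argue the idle floor — particularly in memory and disk subsystems — is where the savings are.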

and Green Marketing and Green Hype

I'm watching with mixed emotions as more and more vendors start to describe their products as addressing challenges associated with reducing power and cooling in the data center. On one hand, clearly it's a monumental challenge -- and opportunity. On the other hand, I'm starting to see vendors with thinner and thinner claims start to add this message to their marketing drumbeat. That's not a good thing.

point out the challenges coming in determining energy efficiency, and why you will need to measure the results.

The answer can be relatively simple.  Add power and cooling measurements to your performance lab's capabilities and your RFPs.  Some people are adding Green requirements to RFPs, but Green is not a well enough defined area.  Instead, ask for the true power consumption under max load and at idle for the configuration delivered. This should be a number that is relatively easy for your performance lab to verify when measuring the power draw from the IT hardware.
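The RFP check described above is mechanical once you have the numbers: the vendor quotes idle and max-load watts for the delivered configuration, and the lab verifies measured draw against the quote within some tolerance. This is a minimal sketch; the field names and the 5% tolerance are my own illustrative choices.

```python
def verify_power(quoted, measured, tolerance=0.05):
    """Return condition -> pass/fail, allowing measured draw to
    exceed the quoted figure by at most `tolerance` (fractional)."""
    results = {}
    for condition in ("idle_watts", "max_load_watts"):
        limit = quoted[condition] * (1 + tolerance)
        results[condition] = measured[condition] <= limit
    return results


quoted = {"idle_watts": 180, "max_load_watts": 420}
measured = {"idle_watts": 185, "max_load_watts": 470}
print(verify_power(quoted, measured))
# idle passes (185 <= 189), but max load fails (470 > 441)
```

The point of asking for both numbers is that the idle figure often matters more in practice, since most servers spend most of their time far below max load.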


Dynamic Power Usage Effectiveness (PUE) to real-world dynamic energy efficiency

I've updated this entry with a post at /2008/01/dynamic-pue-rea.html and appended that entry to this post.

PUE = Total Facility Power / IT Equipment Power
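The formula above, expressed directly in code: PUE is total facility power divided by the power delivered to IT equipment, so a PUE of 1.0 would mean every watt entering the facility reaches IT gear. The example readings are invented.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw


# e.g. a facility drawing 1,000 kW while delivering 500 kW to IT gear:
print(pue(1000.0, 500.0))  # 2.0
```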

I am about ready to write on the subject of energy efficiency measurements and how PUE should be calculated over time and under different conditions, and I was fortunate to find a blog entry on this by Ken Oestreich.

Why? Because IT departments operate at greatly different levels; peak (maybe during the day) as well as off-peak (perhaps nights/weekends). Ideally, the data center should know how to adapt to these conditions: re-purposing "live" machines during peak hours; retiring and temporarily shutting-down idle servers during off-peak; removing power conditioning equipment when not needed; turning off specific CRAC units and chillers when not required (i.e. cold days and/or off-peak hours). We need an efficiency metric that indicates how data centers operate Dynamically.

Cornell Medical's Biomedicine compute servers shut down under these conditions, but they are not to the point of having control over their cooling systems.

My plan is to write about a customer who is dynamically measuring PUE across different operating conditions: changes in outside temperature, over time as the room fills with servers, across different colo rooms within a facility, and in comparison with other facilities.  Ken is right that PUE will change at various times. It is important to know the conditions and what the theoretical PUE should be; then you can determine whether your power and cooling systems are performing as expected.

I hope to have this article out by end of year.

Below is my continuation from /2008/01/dynamic-pue-rea.html

Dynamic PUE real world use

I've been meaning to write about PUE, and have been stumped in that it is defined as a metric, and the Green Grid document referenced makes no mention that it is dynamic. In reality, PUE will be a dynamic number that changes as the load in a room changes. How ironic would it be if your best PUE number came when all the servers were running at near capacity, and shutting down servers to save power increased your PUE? Or if your energy-efficient cooling system used large amounts of water in Southern California, where it is just a matter of time before water shortages cause more environmental issues?

What helped me think of PUE as a dynamic number is to treat it as a quality-control metric. The quality of the electrical and mechanical systems and their operation over time are inputs into PUE.  As load changes and servers are turned off, the variability of the power and cooling systems influences your PUE.  So PUE can now have a statistical range of operation given the conditions.  This sounds familiar.  It's statistical process control.
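Treating PUE as dynamic just means sampling facility and IT power over time, computing PUE per sample, and summarizing its statistical range. A minimal sketch, with invented hourly readings in which IT load drops overnight but much of the cooling and conditioning overhead does not, so PUE rises:

```python
from statistics import mean, stdev


def pue_series(samples):
    """Per-sample PUE from (total_facility_kw, it_kw) readings."""
    return [total / it for total, it in samples]


# Invented readings as load drops overnight:
samples = [(900.0, 500.0), (880.0, 480.0), (700.0, 325.0), (680.0, 300.0)]
series = pue_series(samples)
print([round(p, 2) for p in series])
print("mean", round(mean(series), 2), "stdev", round(stdev(series), 2))
```

The mean and standard deviation give you the "statistical range of operation" mentioned above; a single headline PUE number hides exactly this spread.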

Statistical Process Control (SPC) is an effective method of monitoring a process through the use of control charts. Much of its power lies in the ability to monitor both the process centre and its variation about that centre. By collecting data from samples at various points within the process, variations in the process that may affect the quality of the end product or service can be detected and corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over quality methods, such as inspection, that apply resources to detecting and correcting problems in the end product or service.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear) this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that fall outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case, the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.
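The same control-chart idea applies directly to PUE: establish limits from a baseline period, then flag samples that fall outside them. This sketch uses the classic Shewhart three-sigma limits on individual readings; the baseline values and the out-of-range sample (say, a failing CRAC unit) are invented.

```python
from statistics import mean, stdev


def control_limits(baseline):
    """Three-sigma control limits from a baseline set of readings."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s


def out_of_control(samples, lower, upper):
    """Indices of samples falling outside the control limits."""
    return [i for i, x in enumerate(samples) if not lower <= x <= upper]


baseline = [2.00, 2.02, 1.98, 2.01, 1.99, 2.00, 2.03, 1.97]
lo, hi = control_limits(baseline)
# A failing CRAC unit pushes one PUE sample well outside the range:
print(out_of_control([2.01, 1.99, 2.35, 2.02], lo, hi))
```

An out-of-control signal doesn't tell you what broke, only when the process changed, which is exactly the prompt to go look at the chillers, CRAC units, or load that the next paragraph describes.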

By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept in to the process and correct the problem.

This last point, observing at the right time what happened in the process that led to a change, is ultimately what needs to be achieved with a dynamic PUE system.  Without a system like this and the accompanying mindset, you wouldn't know how to fix PUE problems, which is what I think is wrong with a static PUE mindset.  You need closed-loop feedback to monitor PUE and see if it is performing as expected given the operating conditions and load.

Note: the point about breakfast cereal reminds me of Microsoft's Mike Manos, Sr. Director of Data Center Services, and his first job working in Rice-A-Roni operations, learning process control, which is probably why he has invested in software from OSIsoft to help monitor PUE.  Cornell uses the same software as well.  For more details, see Microsoft's Jeff O'Reilly presentation or Cornell's Jason Banfelder presentation.
