TechNet Magazine has just published an article I wrote, available in print and online: Filtering the Greenwashing.
Sustainable Computing: Filtering the Greenwashing
There are an overwhelming number of products and solutions being marketed as green, as saving energy, as being more efficient. All this marketing hype creates confusion in the market as to what is really eco-friendly. Even after evaluating the specifications on various products, it is difficult, if not impossible, for an IT pro to determine what equipment should be used when environmental impact is a key concern. Demonstrations always highlight large energy savings, leading you to think that the return on investment (ROI) makes the upgrade easy to justify. After all, the energy savings should reduce the total cost of ownership (TCO).
The move by companies of all types to label seemingly everything as environmental and to exploit the current interest in green solutions has led to the concept of "greenwashing," which refers to the over-promising of environmental benefits. So what is the truth about energy savings? This isn't as clear-cut as, say, installing new energy-efficient lightbulbs in your home.
As interest in sustainable IT efforts increases and the market for environmentally friendly IT equipment expands, many people and organizations jump to the end result of deploying energy-efficient laptops, desktops, and servers, and using virtualization to reduce energy consumption. Yet few organizations run energy audits to determine the true benefits of what they have purchased.
While the ideal scenario is to actively measure in your production environment, that can also be expensive. If you aren't ready to start measuring in production, you can still move forward by performing your energy audit earlier in the process, doing so in your performance and evaluation labs. (Most companies have some lab or group responsible for testing and evaluating equipment before making a purchase.) There you can add energy performance as one of your test criteria and then take those results into account when making purchasing decisions, rather than relying on the numbers provided by manufacturers. So as you test, create your own device power-consumption database. Then you can ignore the greenwashing and see for yourself what works.
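A device power-consumption database can start very small. Here is a minimal sketch in Python using SQLite; the device names, scenarios, and wattage readings are hypothetical stand-ins for your own lab measurements:

```python
import sqlite3

# In-memory database for illustration; use a file path to persist results.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE power_readings (
        device   TEXT NOT NULL,   -- e.g. model plus asset tag
        scenario TEXT NOT NULL,   -- e.g. 'idle', 'full load', 'powered off'
        watts    REAL NOT NULL    -- measured draw, e.g. from a metered PDU
    )
""")

# Hypothetical lab readings for two candidate servers.
readings = [
    ("server-a", "idle",      180.0),
    ("server-a", "full load", 310.0),
    ("server-b", "idle",      145.0),
    ("server-b", "full load", 330.0),
]
conn.executemany("INSERT INTO power_readings VALUES (?, ?, ?)", readings)

# Compare idle draw across devices -- often where the surprises hide.
for device, watts in conn.execute(
    "SELECT device, watts FROM power_readings "
    "WHERE scenario = 'idle' ORDER BY watts"
):
    print(f"{device}: {watts:.0f} W idle")
```

In practice you would also record configuration details, firmware versions, and test dates, since (as the findings below suggest) seemingly identical hardware can measure quite differently.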
Of course, I should point out that if you want highly accurate numbers for operation under your true load, you'll need to monitor in your production environment. The quicker you begin monitoring your production environment, the better for your organization and your bottom line. This process will be critical to your long-term success.
Thanks to the Microsoft folks in the Enterprise Engineering Center (EEC), I was able to draw on their experiences in Filtering the Greenwashing. Here are some of their findings:
- Turning off a device doesn't necessarily reduce energy consumption as much as you might expect (see Figure 4). In one case involving server hardware, the EEC discovered a device that consumed 100 watts when turned off but still plugged in. This surprised many, and the EEC went over the setup several times. They eventually used an infrared thermometer to measure inlet and outlet temperatures and verified that the device did, in fact, consume 100 watts while off.
- Software can have a significant impact on power consumption. On identical networking switches, with identical hardware and BIOS configurations, running different networking software displayed a 21 percent difference in power consumption. High-end solutions with more processes and features enabled, like security and monitoring tools, often consume more than their simpler low-end counterparts.
- In virtualization scenarios, the EEC has measured power consumption versus I/O utilization and CPU utilization to determine when a given piece of hardware maximizes its performance per watt. The EEC found that a narrow focus on CPU utilization could lead to too many virtual machines loaded on a physical machine, actually decreasing the overall performance per watt.
- Higher-density devices, as you might expect, have more power and cooling issues. When deploying higher-density systems, consult your power and cooling facilities staff as early as possible. These devices may be good candidates for their own power-monitoring devices in production if you know the environment will be power constrained.
- Dual power supplies can consume considerably more power than a single power supply.
- Seemingly identical pieces of hardware with identical configurations can have significantly different power consumption. Observed differences were significant enough to make the EEC staff double-check hardware to ensure they were really configured the same.
- The watt ratings on the product plate are not actual consumption numbers, but rated capacity for power supplies.
- Maintaining a database of energy consumption tests and results per device and subcomponent is essential for retaining knowledge and comparing data.
- Different configurations of equivalent amounts of RAM consume different amounts of energy. Fewer DIMMs typically consume less energy—for example, 4 x 2GB DIMMs versus 8 x 1GB DIMMs. But there have been some cases where fewer DIMMs consumed more energy.
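The performance-per-watt finding in the virtualization point above is easy to reproduce with your own lab numbers. The sketch below is illustrative only; the VM counts, throughput figures, and wattages are invented to show the shape of the curve, not real EEC data:

```python
# Hypothetical lab measurements for one host as VMs are added:
# (vm_count, throughput in requests/sec, wall power in watts).
measurements = [
    (2,  4000, 250),
    (4,  7600, 290),
    (6, 10200, 330),
    (8, 11000, 390),  # throughput flattens (e.g. I/O contention), power keeps rising
]

def perf_per_watt(throughput, watts):
    """Requests per second delivered per watt consumed."""
    return throughput / watts

for vms, rps, w in measurements:
    print(f"{vms} VMs: {perf_per_watt(rps, w):.1f} req/s per watt")

# The sweet spot is the highest perf/watt, not the highest VM count.
best = max(measurements, key=lambda m: perf_per_watt(m[1], m[2]))
print(f"Best performance per watt at {best[0]} VMs")
```

With these made-up numbers, performance per watt peaks at six VMs and falls at eight, which is exactly the trap of packing hosts by CPU headroom alone.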
And one last-minute item I was able to get into the article covers the EEC's efforts using real-time temperature sensors.
Keeping Cool in the Datacenter
Datacenter cooling offers a huge potential for reducing energy consumption. It is astounding how much heat can be generated in a datacenter and how much energy is used to keep hardware cooled. But if you want to manage your cooling successfully, fix problems, and develop more efficient cooling solutions, you'll need a temperature-monitoring solution. Consider the solution the Microsoft datacenters use.
Microsoft Research built a temperature sensor network for the datacenters that allows for improved temperature control and also enables evaluation of various cooling improvements. For instance, one Microsoft datacenter was evaluating end-of-aisle air curtains to improve hot and cold air separation. After the curtains were installed, some servers started to send overheat alarms. Naturally, the operations engineers increased the air flow from the cooling system to provide more cool air. To their surprise, however, more servers sent overheat alarms. And all of these servers were at the bottom of the rack, which is usually the coolest area in a raised-floor cooling system.
Using the sensor network, the engineers confirmed that the racks were cooler up higher, with the bottom of the rack the hottest. And they soon figured out that hot air was being drawn from the hot aisle between the bottom of the rack and the flooring—a result of Bernoulli's principle. They easily fixed the overheating by sealing the bottom of the rack and reducing the air flow speed.
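A sensor network like this also lends itself to simple automated checks. As one hypothetical example (the rack names, sensor heights, and temperatures are invented for illustration), you could flag racks whose temperature gradient is inverted, the same symptom the engineers diagnosed here:

```python
# Hypothetical readings: rack -> list of (height_in_U, temp_celsius).
# With raised-floor cooling, temperature normally rises with height;
# a rack whose bottom runs hotter than its top suggests hot-air
# recirculation, such as air drawn under the rack from the hot aisle.
rack_temps = {
    "rack-01": [(2, 21.0), (20, 24.5), (40, 27.0)],  # normal gradient
    "rack-02": [(2, 31.5), (20, 26.0), (40, 24.0)],  # inverted: bottom hottest
}

def inverted_gradient(samples):
    """True if the lowest sensor reads warmer than the highest sensor."""
    ordered = sorted(samples)          # sort by height in U
    return ordered[0][1] > ordered[-1][1]

suspect = [rack for rack, samples in rack_temps.items()
           if inverted_gradient(samples)]
print("Racks to inspect for recirculation:", suspect)
```

A check this simple obviously won't replace an engineer walking the aisles, but it turns the sensor data into an alert you can act on before servers start alarming.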
This is just the sort of data the Microsoft Enterprise Engineering Center gathers and analyzes when doing performance testing. So the EEC recently notified Microsoft Research that they were ready for a deployment test. Within a day the system was deployed to 10 racks, and the installation took just one hour to complete. The EEC is now able to study and better understand cooling issues and their relationship to hardware performance.
Of course, simply monitoring isn't a solution in itself. The real gain comes from finding problem areas you can fix, making changes, and evaluating solutions to see whether they produce the results you expect. After all, you don't want to be caught off guard when your new cooling solution unexpectedly causes your racks to overheat.