Greenpeace targets cloud data centers' environmental impact and use of coal power

I blogged back in July 2009 asking what Greenpeace's target for the environmental impact of data centers would be, speculating that Apple, Google, or Microsoft could be a target.  Well, Greenpeace has used Apple's brand recognition and the iPad announcement to create awareness.

The announcement of Apple’s iPad has been much anticipated by a world with an ever-increasing appetite for mobile computing devices as a way to connect, interact, learn and work. As rumours circulated – first about its existence and then about its capabilities – the iPad received more media attention than any other gadget in recent memory. Apple Chief Executive Officer Steve Jobs finally showcased his company’s latest creation before a rapt audience in San Francisco. From their smart phones and netbooks, the crowd feverishly blogged and tweeted real time updates out to a curious world.

Greenpeace report cover: Cloud Computing and Climate Change
Whether you actually want an iPad or not, there is no doubt that it is a harbinger of things to come. The iPad relies upon cloud-based computing to stream video, download music and books, and fetch email. Already, millions access the ‘cloud’ to make use of online social networks, watch streaming video, check email and create documents, and store thousands of digital photos online on popular web-hosted sites like Flickr and Picasa.


The term cloud, or cloud computing, used as a metaphor for the internet, is based on an infrastructure and business model whereby – rather than being stored on your own device – data, entertainment, news and other products and services are delivered to your device, in real time, from the internet. The creation of the cloud has been a boon both to the companies hosting it and to consumers who now need nothing but a personal computer and internet access to fulfill most of their computing needs.


Greenpeace has been making noise about Facebook's data center, and has now started building broader public awareness with this PDF.


I know of some companies that are breathing a sigh of relief that they are not on the Greenpeace list.


Some of you have noticed that I made a change to the blog title last week; it now reads Green (low carbon) data center.


Green is such an overloaded term that it made sense to clarify the focus on low carbon as a goal of a green data center.  Note the following in the Greenpeace PDF.


More cloud-computing companies are pursuing design and siting strategies that can reduce the energy consumption of their data centres, primarily as a cost containment measure. For most companies, the environmental benefits of green data design are generally of secondary concern.

Facebook’s decision to build its own highly-efficient data centre in Oregon that will be substantially powered by coal-fired electricity clearly underscores the relative priority for many cloud companies. Increasing the energy efficiency of its servers and reducing the energy footprint of the infrastructure of data centres are clearly to be commended, but efficiency by itself is not green if you are simply working to maximise output from the cheapest and dirtiest energy source available. The US EPA will soon be expanding its EnergyStar rating system to apply to data centres, but similarly does not factor in the fuel source being used to power the data centre in its rating criteria. Unfortunately, as our collective demand for computing resources increases, even the most efficiently built data centres with the highest utilisation rates serve only to mitigate, rather than eliminate, harmful emissions.

Some people thought the hype about Facebook's coal power was a fad.  No, it is a trend, and the start of evaluating the carbon impact of data centers.


Here is a sampling of other media coverage.

Coal Fuels Much Of Internet "Cloud", Says Greenpeace

New York Times - Peter Henderson - ‎5 hours ago‎

By REUTERS SAN FRANCISCO (Reuters) - The 'cloud' of data which is becoming the heart of the Internet is creating an all too real cloud of pollution as ...

Greenpeace issues warning about data centre power

BBC News - ‎7 hours ago‎

Greenpeace is calling on technology giants like Apple, Microsoft, Yahoo and Facebook to power their data centres with renewable energy sources. ...

Data clouds called out for dirty energy

Marketplace (blog) - ‎5 hours ago‎

Environmental activists are concerned about server farms' use of dirty energy to keep sites like Google and Facebook running. ...

Greenpeace: Cloud Contributes to Climate Change

Data Center Knowledge - Rich Miller - ‎5 hours ago‎

The environmental group Greenpeace says data center builders must become part of the solution to the climate change challenge, rather than part of the ...

Cloud computing 'fuels climate change'

PCR-online.biz - Nicky Trup - ‎8 hours ago‎

The growth of cloud computing could cause a huge increase in greenhouse gas emissions, Greenpeace has warned. ...

2020: Cloud Computing GHG Emissions To Triple

Basil & Spice - ‎9 hours ago‎

San Francisco, United States — As IT industry analysts label 2010 the “Year of the Cloud”, a new report by Greenpeace shows how the launch of quintessential ...

Greenpeace criticises coal-fuelled internet cloud

TechRadar UK - Adam Hartley - ‎10 hours ago‎

Eco-campaigners at Greenpeace have criticised the idea of an internet 'cloud' - with data centres built by the likes of Facebook, Apple, ...

The iPad, internet, and climate change links in the spotlight

Greenpeace USA - ‎13 hours ago‎

International — On the eve of the launch of the iPad, our latest report warns that the growth of internet computing could come with a huge jump in ...

Read more

Christian Belady migrates to Cloud Computing, joins Microsoft Research eXtreme Computing Group

I’ve started discussing cloud computing more because cloud infrastructure does almost all the things a green data center does, and it has the industry’s attention.  I can’t think of anyone who wants to go to the cloud in order to over-provision hardware, create silos that don’t work together, and ignore their energy use.

Mike Manos announced his move to the Mobile Cloud.

I am extremely happy to announce that I have taken a role at Nokia as their VP of Service Operations.  In this role I will have global responsibility for the strategy, operation and run of infrastructure aspects for Nokia’s new cloud and mobile services platforms.

Christian Belady announced his move to Microsoft Research’s eXtreme Computing Group.

But even with all of this change, I see there is even more opportunity now then there was when I started at Microsoft almost three years ago. Cloud computing has made mining and developing the “right” opportunities even that much more important. We need to think about how we tie together the complete ecosystem of the software stack, the IT, the data center and the grid today and what efficiencies we can drive from our research and development for the future. For those of you that know me – this is the kind of opportunity that makes me salivate. There aren’t many people around tasked with this kind of challenge and this is the opportunity I have been given in the evolution of my career at Microsoft. This week I begin tackling these projects within the Microsoft Research group in team called the Extreme Computing Group.

How big is the Cloud for Microsoft Research?  The top three demos listed on their TechFest demo page are all about cloud computing.

Client + Cloud Computing for Research

Scientific applications have diverse data and computation needs that scale from desktop to supercomputers. Besides the nature of the application and the domain, the resource needs for the applications also vary over time—as the collaboration and the data collections expand, or when seasonal campaigns are undertaken. Cloud computing offers a scalable, economic, on-demand model well-matched to evolving eScience needs. We will present a suite of science applications that leverage the capabilities of Microsoft's Azure cloud-computing platform. We will show tools and patterns we have developed to use the cloud effectively for solving problems in genomics, environmental science, and oceanography, covering both data and compute-intensive applications.

Cloud Faster

To make cloud computing work, we must make applications run substantially faster, both over the Internet and within data centers. Our measurements of real applications show that today's protocols fall short, leading to slow page-load times across the Internet and congestion collapses inside the data center. We have developed a new suite of architectures and protocols that boost performance and the robustness of communications to overcome these problems. The results are backed by real measurements and a new theory describing protocol dynamics that enables us to remedy fundamental problems in the Transmission Control Protocol. We will demo the experience users will have with Bing Web sites, both with and without our improvements. The difference is stunning. We also will show visualizations of intra-data-center communication problems and our changes that fix them. This work stems from collaborations with Bing and Windows Core Operating System Networking.

Energy-Aware VMs and Cloud Computing

Virtual machines (VMs) become key platform components for data centers and Microsoft products such as Win8, System Center, and Azure. But existing power-management schemes designed at the server level, such as power capping and CPU throttling, do not work with VMs. VMmeter can estimate per-VM power consumption from Hyper-V performance counters, with the assistance of WinServer2008 R2 machine-level power metering, thus enabling power management at VM granularity. For example, we can selectively throttle VMs with the least performance hit for power capping. This demo compares VMmeter-based with hardware-based power-management solutions. We run multiple VMs, one of them being a high-priority video playback on a server. When a user requests power capping with our solution, the video playback will maintain high performance, while with hardware-capping solutions, we see reduced performance. We also will show how VMmeter can be part of System Center management packs.
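The capping idea in that demo, estimate power per VM and then throttle the VMs that hurt least, can be sketched roughly as follows. This is only an illustration in Python with made-up VM names, priorities, and wattages; it is not VMmeter itself, and a real implementation would read Hyper-V performance counters rather than hard-coded values.

```python
# Rough sketch of power capping at VM granularity (illustrative only, not VMmeter).
# Each VM has an estimated power draw (e.g., derived from per-VM performance
# counters) and a priority; when the server exceeds its power cap, throttle the
# lowest-priority VMs first so the high-priority workload keeps its performance.

vms = [
    {"name": "video-playback", "priority": 10, "watts": 60.0},  # hypothetical values
    {"name": "batch-job-1",    "priority": 2,  "watts": 80.0},
    {"name": "batch-job-2",    "priority": 1,  "watts": 70.0},
]

def vms_to_throttle(vms, server_watts_cap):
    """Return VM names to throttle, lowest priority first, until under the cap."""
    total = sum(vm["watts"] for vm in vms)
    throttled = []
    for vm in sorted(vms, key=lambda v: v["priority"]):
        if total <= server_watts_cap:
            break
        throttled.append(vm["name"])
        total -= vm["watts"] * 0.5  # assume throttling roughly halves a VM's draw
    return throttled

print(vms_to_throttle(vms, server_watts_cap=150.0))  # ['batch-job-2', 'batch-job-1']
```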

Microsoft Research is lucky to find someone who is influential in the industry and has spent three years in Microsoft’s own data center operations. There are few who could make the jump from data center operations to Microsoft Research, and I am sure Christian will constantly be working on knowledge transfers across the teams.

For the rest of the industry, I’ve got a feeling we are going to see even more of Christian’s ideas out there now that he is in Microsoft Research.

This is exciting in itself, but what really gets me “charged” is the opportunity I have now to work between my former group Global Foundation Services (that drives the current Microsoft cloud infrastructure) and my new group Microsoft Research (MSR). Taking the best practices from what we have learned with our current and future Gen 4 data centers and combining them with the resources of one of the best research organizations in the world (MSR), I am convinced that many new and exciting things will come. And best of all, I am lucky to be right smack in the middle of it and will still be working closely with the teams driving the hardware architecture for the cloud today and in the future. So actually, I am really not leaving GFS but rather extending the reach of GFS into the future. Who can ask for a better opportunity….man I love this company!

Read more

Microsoft Research Paper, Measuring Energy use of a Virtual Machine

Microsoft TechFest is going on now.

About TechFest

TechFest is an annual event that brings researchers from Microsoft Research’s locations around the world to Redmond to share their latest work with fellow Microsoft employees. Attendees experience some of the freshest, most innovative technologies emerging from Microsoft’s research efforts. The event provides a forum in which product teams and researchers can discuss the novel work occurring in the labs, thereby encouraging effective technology transfer into Microsoft products.

I used to go when I was a full-time employee, but there are some good things you can learn by visiting the demo site.

One that caught my eye is the Network Embedded Computing group, which has consistently worked on data center energy sensor systems.

Their latest project is Joulemeter, which can measure the energy usage of VMs, servers, clients, and software.

Joulemeter: VM, Server, Client, and Software Energy Usage

Joulemeter is a software based mechanism to measure the energy usage of virtual machines (VMs), servers, desktops, laptops, and even individual softwares running on a computer.

Joulemeter estimates the energy usage of a VM, computer, or software by measuring the hardware resources (CPU, disk, memory, screen etc) being used and converting the resource usage to actual power usage based on automatically learned realistic power models.

Joulemeter overview

Joulemeter can be used for gaining visibility into energy use and for making several power management and provisioning decisions in data centers, client computing, and software design.
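The estimation approach described above boils down to a learned model that maps resource counters to watts, with each VM charged for its share of those counters. Here is a minimal sketch of that idea assuming a simple linear model; the coefficients and counter names are invented for illustration and this is not Joulemeter’s actual model.

```python
# Minimal sketch of resource-usage-to-power estimation (illustrative, not Joulemeter).
# Assume a linear power model learned offline against a hardware power meter:
#   P = P_idle + a_cpu * cpu_util + a_disk * disk_io + a_mem * mem_bandwidth

P_IDLE_W = 120.0   # hypothetical watts drawn by the server at idle
A_CPU    = 1.1     # hypothetical watts per % CPU utilization
A_DISK   = 0.004   # hypothetical watts per disk IO/s
A_MEM    = 0.02    # hypothetical watts per MB/s of memory bandwidth

def estimate_server_power(cpu_util_pct, disk_io_per_s, mem_mb_per_s):
    """Estimate whole-server power from resource counters via the linear model."""
    return P_IDLE_W + A_CPU * cpu_util_pct + A_DISK * disk_io_per_s + A_MEM * mem_mb_per_s

def estimate_vm_power(vm_cpu_pct, vm_disk_io_per_s, vm_mem_mb_per_s):
    """Attribute the dynamic (non-idle) part of the model to one VM's counters."""
    return A_CPU * vm_cpu_pct + A_DISK * vm_disk_io_per_s + A_MEM * vm_mem_mb_per_s

print(estimate_server_power(55.0, 2000.0, 800.0))  # ~204.5 W for the whole server
print(estimate_vm_power(20.0, 500.0, 100.0))       # ~26 W attributed to one VM
```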

For more technical details on the system, here is their paper.

Virtual Machine Power Metering and Provisioning

Aman Kansal, Feng Zhao, Jie Liu (Microsoft Research); Nupur Kothari (University of Southern California); Arka Bhattacharya (IIT Kharagpur)
Abstract

Virtualization is often used in cloud computing platforms for its several advantages in efficient management of the physical resources. However, virtualization raises certain additional challenges, and one of them is lack of power metering for virtual machines (VMs). Power management requirements in modern data centers have led to most new servers providing power usage measurement in hardware and alternate solutions exist for older servers using circuit and outlet level measurements. However, VM power cannot be measured purely in hardware. We present a solution for VM power metering. We build power models to infer power consumption from resource usage at runtime and identify the challenges that arise when applying such models for VM power metering. We show how existing instrumentation in server hardware and hypervisors can be used to build the required power models on real platforms with low error. The entire metering approach is designed to operate with extremely low runtime overhead while providing practically useful accuracy. We illustrate the use of the proposed metering capability for VM power capping, leading to significant savings in power provisioning costs that constitute a large fraction of data center power costs. Experiments are performed on server traces from several thousand production servers, hosting Microsoft’s real-world applications such as Windows Live Messenger. The results show that not only does VM power metering allow reclaiming the savings that were earlier achieved using physical server power capping, but also that it enables further savings in provisioning costs with virtualization.

Note there will be a desktop and laptop version available soon.

Download: A freely downloadable version of the Joulemeter software that measures laptop and desktop energy usage will be available in a few weeks. Watch this space!

But I want to get access to the VM, server, and software versions for the data center.  Maybe I can get this group involved with GreenM3 as it transitions into an NPO.  The University of Missouri is another connection, as a Mizzou professor worked with the Microsoft researchers on sensor networks in a prior job.

Read more

Microsoft’s Flexible Data Center System, Kevin Timmons presents at DCD NY

I couldn’t make it to DataCenterDynamics NY, but I have plenty of friends there, so I can get a virtual report.

Kevin Timmons gave the keynote and Rich Miller wrote up a nice entry.

Microsoft’s Timmons: ‘Challenge Everything’

March 3rd, 2010 : Rich Miller

The building blocks for Microsoft’s data center of the future can be assembled in four days, by one person. The two data center containers, known as IT PACs (short for pre-assembled components) proof of concept, are built entirely from aluminum. The first two proof of concept units use residential garden hoses for their water hookups.

“Challenge everything you know about a traditional data center,” said Kevin Timmons, who heads Microsoft’s Global Foundation Services, in describing the company’s approach to building new data centers. “From the walls to the roof to where it needs to be built, challenge everything.”

So much of what is wrong with data centers, and prevents them from being green, is that people do what they have done in the past.  This includes the engineering companies and the customers who specify the data centers. You don't hear customers saying, "bring me a data center design no one has done before."

A low PUE (sub-1.2) is now a given for data center efficiency, but with cloud computing and mobile as the top drivers of data center growth, how quickly you can add capacity is a higher requirement for executive decision makers.
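As a quick reference, PUE (Power Usage Effectiveness) is total facility power divided by the power delivered to the IT equipment, so a sub-1.2 facility spends less than 20% extra on cooling, power distribution, and other overhead. A tiny sketch with made-up numbers:

```python
# PUE = total facility power / IT equipment power (illustrative numbers only).
it_power_kw = 1000.0              # servers, storage, and network gear
total_facility_power_kw = 1200.0  # everything at the utility meter, incl. cooling

pue = total_facility_power_kw / it_power_kw
overhead_kw = total_facility_power_kw - it_power_kw
print(f"PUE = {pue:.2f}, overhead = {overhead_kw:.0f} kW")  # PUE = 1.20, overhead = 200 kW
```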

Here is a video showing some of the concepts Microsoft has been willing to share.


And here is the blog post from Microsoft’s Daniel Costello.

Then we had to take these single lines and schematics and break them into logical modules for the components to reside in. This may seem easy but represents a shift in thinking from a building where, for instance, we would have a UPS room and associated equipment and switchgear manufactured by multiple vendors and put it physically in sometimes separate modules. The challenge became how to shift from a traditional construction mindset to the new, modularized manufacturing mindset. Maintainability is a large part of reliability in a facility, and became a key differentiator between the four classes. Our A Class infrastructure, which is not concurrently maintainable and is on basically street power and unconditioned air, will require scheduled downtime for maintenance. The cost, efficiency, and time-to-market targets for A Class are very aggressive and a fraction of what the industry has come to see as normal today. We realized that standardization and reuse of components from one class to the next was a key to improving cost and efficiency. Our premise was that the same kit of parts (or modules) should be usable from class to class. These modules (in this new mindset) can be added to other modules to transition within the data center from one class to the next.

I would call this a Flexible Data Center System.  This has been done in manufacturing with flexible manufacturing systems for decades and is just now coming to data center design.

A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in the case of changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, which both contain numerous subcategories.

The advantages of this system are:

Advantages

Faster, lower cost per unit, greater labor productivity, greater machine efficiency, improved quality, increased system reliability, reduced parts inventories, and adaptability to CAD/CAM operations.

With one disadvantage:

Disadvantages

Cost to implement.

But in data centers, the cost to implement can be lower than for traditional data centers once enough people adopt the approach.  And whereas the flexibility in manufacturing typically applies to the product produced, here the flexibility concepts are being applied to the data center infrastructure.

And what else is changing is the hardware that goes into these data centers.  Microsoft’s Dileep Bhandarkar discussed this here.

IT departments are strapped for resources these days, and server rightsizing is something every team can do to stretch their budgets. The point of my presentations and the white paper our team is publishing today is two-fold:

1. To quantify some of the opportunities and potential pitfalls as you look for savings, and

2. To present best practices from our experiences at Microsoft, where the group I lead manages server purchases for the large production data centers behind Microsoft’s wide array of online, live and cloud services.

Read more