Can you Green the Data Center? Maybe if you think in terms of an Information Factory

I have been writing on the green data center topic for over two years, with nearly 1,000 blog posts. One of the things I have found is that the name “data center” does not accurately describe to the layman what data centers do. Are data centers the “center of data”? In the past there was one corporate building where data was housed for the corporation. The standard for Fortune 500 companies now is to have multiple data centers around the world to provide information availability, disaster recovery, and reliability. How can there be multiple centers of data? If you green the data center, what are you supposed to green? These multiple centers? How?

What I propose is a more accurate description of what data centers are in this economy. The data center is an information factory: a building that manufactures information using information machinery – servers, storage, and networking hardware. Information is the raw material fed into the factory. Software running on the hardware processes the information, increasing its value. As in any other manufacturing process, electricity is used to power and cool the machinery. How much power do these information factories use? In 2006, data centers consumed 1.5% of US electricity production, double their 2000 consumption, and usage is growing at roughly a 12% annual rate.
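As a rough sanity check on those figures, doubling over the six years from 2000 to 2006 works out to about a 12% compound annual growth rate. A minimal sketch (my own arithmetic on the numbers above, not a published projection):

```python
# Data center share of US electricity roughly doubled from 2000 to 2006.
share_2006 = 1.5              # percent of US electricity production (2006)
share_2000 = share_2006 / 2   # "doubling 2000 consumption"

years = 6
annual_growth = (share_2006 / share_2000) ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # ~12.2%, matching the ~12% rate cited

# Illustration only: if the same rate held for another five years,
# the share would approach 2.7% of US electricity.
share_2011 = share_2006 * (1 + annual_growth) ** 5
print(f"Hypothetical 2011 share: {share_2011:.1f}%")
```

The point of the exercise is simply that the "doubled since 2000" and "12% annual rate" figures are consistent with each other.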

image

The above is an image Google uses to illustrate its green information factory (aka data center).

So the question of how to green the data center becomes: how do you green your information factory? Making factories energy efficient is a concept many are familiar with. Applied to the information factory, how do you consume less energy, and/or greener energy, while increasing the value of information? Making power delivery more efficient applies to all parts of the data center. Cooling is a whole topic of its own, with specialists who can figure out the most efficient way to remove heat from the IT equipment. More efficient servers are another choice. And of course there is virtualization. Not too long ago, for every watt of power supplied to a server, another watt was used by the power and cooling systems. Now companies like Google consume only 0.21 watts for power and cooling for every watt used by their information factory hardware – which, by the way, consumes less power than what is commonly used in the industry.
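To make the overhead comparison concrete, here is a minimal sketch (my own illustration, not Google's methodology) of watts of power-and-cooling overhead per IT watt. The industry expresses the related total-to-IT ratio as PUE, so 0.21 watts of overhead corresponds to a PUE of roughly 1.21:

```python
def overhead_ratio(total_facility_watts, it_watts):
    """Watts of power and cooling overhead per watt of IT load."""
    return (total_facility_watts - it_watts) / it_watts

# Not too long ago: one watt of overhead for every IT watt (PUE ~2.0).
old = overhead_ratio(total_facility_watts=2000, it_watts=1000)
print(old)  # 1.0

# Google's reported figure: 0.21 watts of overhead per IT watt (PUE ~1.21).
# The 1210/1000 split here is a hypothetical facility chosen to match that ratio.
google = overhead_ratio(total_facility_watts=1210, it_watts=1000)
print(round(google, 2))  # 0.21
```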


Where do you start? Most companies start where they have budget to spend. Sounds silly? Well, that is what happens in most companies, because the IT organizations within a company operate in separate silos. Imagine if you wanted to improve a car’s MPG and approached the problem based on which department had the budget available to make changes to the car. What is needed is an information engineer whose job is to figure out how to improve performance per watt across the whole system and prioritize the areas to address.

Companies like Google, Microsoft, Yahoo, Amazon, and eBay have addressed this problem by creating groups whose job is to engineer their information factories.

Is your company running centers of data or information factories? The ones that think in terms of information factories are driving to new levels of performance per watt, breaking down silos to get groups to work together. You can green the data center by looking at how much energy your information factories consume to create higher-value information. Another choice is where data centers get their power from, and the carbon impact. Consuming 1.5% of US electricity, data centers have the opportunity to locate near sources of renewable energy, an option commonly discussed by Google, Microsoft, and Yahoo.


Top 100 Countries Reading Green Data Center Blog

November marks my two-year anniversary running www.greenm3.com on the green data center topic. I started the site with the encouragement of an old friend, Bob Visse, who Google-searched “green data center” and told me it was an ideal area to start a blog on. After two years, the site has 950+ RSS readers and numerous top-10 Google search results.

The growth of the readership is amazingly linear.

image

There is a lot I’ve learned over the past two years, and this blog is part of my system to document what is going on in the industry. It is now so ingrained in how I work that blogging is a natural part of the day, which explains why I am now up to my 988th post on the blog. Hitting 1,000 posts by Nov 26, 2009 (my exact two-year anniversary) is not going to be a problem.

And the international reach is beyond my expectations. Below are the top 100 countries, ranked by traffic to the blog over the last month.

Thanks for visiting my blog, and thanks to my new and old friends who continue to supply me with new ideas to write about.

-Dave Ohara

dave at greenm3 dot com

1. United States – 54.89%
2. United Kingdom – 6.41%
3. Canada – 4.09%
4. India – 3.59%
5. France – 2.53%
6. Japan – 2.06%
7. Australia – 2.02%
8. Germany – 1.86%
9. Netherlands – 1.75%
10. Singapore – 1.18%
11. Hong Kong – 0.95%
12. Spain – 0.85%
13. Switzerland – 0.80%
14. Philippines – 0.79%
15. Brazil – 0.77%
16. Italy – 0.72%
17. Ireland – 0.71%
18. Taiwan – 0.67%
19. Sweden – 0.67%
20. Malaysia – 0.66%
21. South Korea – 0.63%
22. Belgium – 0.55%
23. Denmark – 0.55%
24. Indonesia – 0.51%
25. Finland – 0.49%
26. Thailand – 0.44%
27. Poland – 0.44%
28. Iran – 0.41%
29. Portugal – 0.40%
30. South Africa – 0.39%
31. Mexico – 0.35%
32. Romania – 0.34%
33. Turkey – 0.32%
34. New Zealand – 0.32%
35. Norway – 0.31%
36. Russia – 0.29%
37. Greece – 0.27%
38. (not set) – 0.27%
39. Austria – 0.25%
40. Hungary – 0.25%
41. Israel – 0.25%
42. Pakistan – 0.23%
43. Vietnam – 0.22%
44. Czech Republic – 0.17%
45. Saudi Arabia – 0.16%
46. Argentina – 0.15%
47. Egypt – 0.15%
48. Colombia – 0.14%
49. Slovakia – 0.14%
50. Chile – 0.13%
51. Lithuania – 0.12%
52. United Arab Emirates – 0.12%
53. Peru – 0.11%
54. Iceland – 0.11%
55. Ukraine – 0.11%
56. Croatia – 0.10%
57. Bulgaria – 0.09%
58. Jordan – 0.09%
59. Serbia – 0.08%
60. Estonia – 0.08%
61. Venezuela – 0.07%
62. Bangladesh – 0.07%
63. Puerto Rico – 0.07%
64. Costa Rica – 0.07%
65. Slovenia – 0.07%
66. Bahrain – 0.05%
67. Armenia – 0.05%
68. Luxembourg – 0.05%
69. Kuwait – 0.05%
70. Sri Lanka – 0.05%
71. Dominican Republic – 0.05%
72. Yemen – 0.04%
73. Latvia – 0.04%
74. Brunei – 0.04%
75. Morocco – 0.04%
76. Lebanon – 0.04%
77. Tunisia – 0.04%
78. Kenya – 0.04%
79. Uruguay – 0.03%
80. Algeria – 0.03%
81. Malta – 0.03%
82. Nepal – 0.03%
83. China – 0.03%
84. Macau – 0.03%
85. Trinidad and Tobago – 0.03%
86. Cayman Islands – 0.02%
87. Mauritius – 0.02%
88. Oman – 0.02%
89. Ghana – 0.02%
90. Ivory Coast – 0.02%
91. Belarus – 0.02%
92. Faroe Islands – 0.02%
93. Qatar – 0.02%
94. Guatemala – 0.02%
95. Kazakhstan – 0.02%
96. Iraq – 0.02%
97. Jamaica – 0.02%
98. Ecuador – 0.02%
99. Macedonia [FYROM] – 0.02%
100. Nigeria – 0.02%

TechHermit Blog Deleted

Unfortunately, http://techhermit.wordpress.com/ now shows the following page.

WordPress.com

The authors have deleted this blog. The content is no longer available.

You can create your own free blog on WordPress.com.

There were hopes the TechHermit blog would continue.

TechHermit Returns with New Authors, Speculates the end of Microsoft’s Data Center Program

DataCenterKnowledge spreads the word TechHermit’s blog continues.

Tech Hermit Blog Returns
September 22nd, 2009 : Rich Miller

Back in July I noted the passing of Shane McGew, who wrote about the data center industry at his Tech Hermit blog. So I was surprised to find new posts at the Tech Hermit blog this week.

Here’s the story: “Today we are announcing that through detailed negotiations with the McGew family a group of avid readers have purchased the rights to the Tech Hermit brand and will continue to post under this heading and keep the same edgy feedback that we came to love with Shane. We hope to earn the same level of trust and respect in time.”

Shane was always pretty plugged into goings-on in data center operations at Microsoft, a trend that continues with the new team (whose members remain anonymous). A post today notes the departure of another Microsoft data center executive, Joel Stone, who is headed to Global Switch. Stone’s departure follows the exit of Global Foundation Services corporate VP Debra Chrapaty, who is off to Cisco.


Microsoft’s Daniel Costello and Christian Belady Container Data Centers Video

cnet news has a video interview with Daniel Costello and Christian Belady.

Many of you recognize Christian. Daniel is not as well known; he is the brains behind the 4th-generation Microsoft data center.

But Microsoft has indicated how the next generation of data center will improve upon the Chicago design.

Moving to containers allows Microsoft to bring in computing capacity as needed, but still requires the company to build the physical building, power and cooling systems well ahead of time. The company's next generation of data center will allow those things to be built in a modular fashion as well.

Daniel had an interview with PCWorld that gives you some idea of his thinking.

"The idea of modular, portable data centers is key to the industry's future," said Daniel Costello, Microsoft director of data center research, in a presentation at GigaOM's Structure 08 conference in San Francisco. "That's why I'm here to talk about data centers, not just for Microsoft but for our customers as well."

Buying these boxes from server vendors can be more energy-efficient and cost-effective than building a new, traditional data center, he said, and Microsoft sees them as more than just a way to add extra computing capacity at short notice. "We see them as a primary packaging unit," he said.

Using shipping containers is part of an effort by Microsoft to radically rethink its data centers, as it tries to add more computing capacity in a way that is cost effective and power efficient. "At Microsoft, we're questioning every component in the data center, up to and including the roof," Costello said. That includes "eliminating concrete from our data center bills."

"The definition of a datacenter has changed. It's not just bricks and mortar any more, and moving forward, we think it can be a lot more energy efficient," he said.

But vendors building portable data centers today are filling them with equipment that was designed for traditional data centers. "Moving forward, we need to design systems specifically for this form factor. If we look at the containers, that form factor will change over time as well."

Microsoft has approached every major server vendor about providing it with equipment, Costello said. He said he thinks "all major vendors" will offer portable data centers within the next two years. Vendors offering them today include Sun Microsystems, Verari Systems, Rackable Systems and American Power Conversion.

The cost benefits come partly from economies of scale. Shipping 2,000 servers in a container is more cost-effective than shipping and installing separate racks, and portable data centers don't require raised floors or as much wiring.

They can offer a better "power usage effectiveness" (PUE) ratio than traditional data centers, he said. PUE is a measure of a data center's power efficiency. If a server demands 500 watts and the PUE of a data center is 3.0, the power from the grid needed to run the server will be 1,500 watts, according to a definition from the Green Grid industry consortium.

"We've seen PUE at a peak of 1.3" in modular data centers, Costello said, compared with between 1.6 and 2.0 for a traditional data center.

The containers can accommodate 1,000 watts per square foot, allowing companies to power a lot more servers in a given area, he said. Many companies are unable to add more equipment to their data centers because power supplies and cooling equipment are at maximum capacity. The portable data centers are an alternative to building new facilities or extending old ones.
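The PUE arithmetic in the quoted article is simple enough to sketch directly. The 500-watt server and the PUE figures below come from the quote; the function itself is my own illustration:

```python
def grid_power(it_watts, pue):
    """Total power drawn from the grid for a given IT load and PUE."""
    return it_watts * pue

server = 500  # watts, as in the Green Grid example quoted above

print(grid_power(server, 3.0))  # 1500.0 -> the PUE 3.0 example from the quote
print(grid_power(server, 1.3))  # 650.0  -> Microsoft's best observed modular PUE
print(grid_power(server, 2.0))  # 1000.0 -> high end of the traditional 1.6-2.0 range
```

The comparison makes the modular pitch obvious: at PUE 1.3 versus 2.0, the same 500-watt server costs 350 fewer watts at the grid.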

Daniel mentions some of the downsides of containers.

There are some drawbacks and plenty of questions to be answered, he said. Some of the cons include a higher cost of failure if the power to a container is cut off, as well as new risks in terms of regulatory compliance. In addition, portable data centers offered today can't accommodate servers from multiple vendors, he said.


Mike Manos’s Perspective on Containers in the Data Center – Part 1

Mike has a new post on a practical guide to data center containers. 

Bottom line: because containers encapsulate a large amount of information capability along with power and cooling infrastructure, the decision to use them requires a multi-discipline view of their benefits and impacts. Google and Microsoft have done this exercise, and anyone who is thinking about containers should consider that both of these companies have figured out where containers make sense.

It is one of Mike’s long posts, so I’ll give you excerpts: here is Part 1 to get started, and I will work on Part 2 later.

I will do my best to try and balance this view across four different axes: the Technology, Real Estate, Financial and Operational Considerations.  A sort of ‘Executives View’ of this technology. I do this because containers as a technology cannot and should not be looked at from a technology perspective alone.  To do so is complete folly and you are asking for some very costly problems down the road if you ignore the other factors.  Many love to focus on the interesting technology characteristics or the benefits in efficiency that this technology can bring to bear for an organization but to implement this technology (like any technology really) you need to have a holistic view of the problem you are really trying to solve.

Mike contrasts Moore’s law vs. the glacial pace of innovation in the data center.

Isn’t it interesting then that places where all of this wondrous growth and technological wizardry has manifested itself, the data center or computer room, or data hall has been moving along at a near pseudo-evolutionary standstill.  In fact if one truly looks at the technologies present in most modern data center design they would ultimately find small differences from the very first special purpose data room built by IBM over 40 years ago.

Mike goes on to discuss modularization in the data center. The NSA is listening to advice like this and has made modularization a requirement of its $1.5 billion data center.

Data Centers themselves have a corollary to the beginning of the industrial revolution.  In fact I am positive that Moore’s observations would hold true as civilization transitioned from an agricultural based economy to that of an industrialized one.  In fact one might say that the current modularization approach to data centers is really just the industrialization of the data center itself.

In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans.  Each is a one of a kind tool built to solve a set of problems.  Think of the eco-system that has developed around building these modern day castles.  Architects, Engineering firms, construction firms, specialized mechanical industries, and a host of others that all come together to create each and every masterpiece.    So to, did those who built plows, and hammers, clocks and sextants, and the tools of the previous era specialize in making each item, one by one.   That is, of course, until the industrial revolution.

Data Centers are not buildings they are information factories.

The data center modularization movement is not limited to containers and there is some incredibly ingenious stuff happening in this space out there today outside of containers, but one can easily see the industrial benefits of mass producing such technology.  This approach simply creates more value, reduces cost and complexity, makes technology cheaper and simplifies the whole.  No longer are companies limited to working with the arcane forces of data center design and construction, many of these components are being pre-packaged, pre-manufactured and becoming more aggregated.  Reducing the complexity of the past. 

And why shouldn’t it?   Data Centers live at the intersection of Information and Real Estate.   They are more like machines than buildings but share common elements of both buildings and technology.   All one has to do is look at it from a financial perspective to see how true this is.   In terms of construction, the cost of data centers break down to the following simple format.  Roughly 85% of the total costs to build the facility is made up of the components, labor, and technology to deal with the distribution or cooling of the electrical consumption.

Data Centers are mostly built out of sync with the pace of technology change.

From the perspective of the technology drivers behind this change, the key fact is that most existing data centers are not designed or instrumented to handle the demands of the changing technology requirements occurring within the data center today.

Data Center managers are being faced with increasingly varied redundancy and resiliency requirements within the footprints that they manage.  They continue to support environments that rely heavily upon the infrastructure to provide robust reliability to ensure that key applications do not fail.  But applications are changing.  Increasingly there are applications that do not require the same level of infrastructure to be deployed, because the application is built in such a way that it is either geo-diverse or server-diverse. Perhaps the internal business units have deployed some test servers or lab / R&D environments that do not need this level of infrastructure. With the amount of RFPs out there demanding more diversity from software and application developers to solve the redundancy issue in software rather than through large capital spend requirements on behalf of the enterprise, this is a trend likely to continue for some time.  Regardless of the reason, the variability challenges that data center managers are facing are greater than ever before.

Mike brings up the problem that occurs as people build costly custom buildings.

From a business / finance perspective companies are faced with some interesting challenges as well.  The first is that the global inventory for data center space (from a leasing or purchase perspective) is sparse at best.    This is resulting from a glut of capacity after the dotcom era and the resulting land grab that occurred after 9/11 and the Finance industry chewing up much of the good inventory.    Additive to this is the fact that there is a real reluctance to build these costly facilities speculatively.   This is a combination of how the market was burned in the dotcom days, and the general lack of availability and access to large sums of capital.  Both of these factors are driving data center space to be a tight resource.

In my opinion the biggest problem across every company I have encountered is that of capacity planning.  Most organizations cannot accurately reflect how much data center capacity they will need in the next year, let alone 3 or 5 years from now.  It’s a challenge that I have invested a lot of time trying to solve and it’s just not that easy.  But this lack of predictability exacerbates the problems for most companies.  By the time they realize they are running out of capacity or need additional capacity it becomes a time to market problem.  Given the inventory challenge I mentioned above this can position a company in a very uncomfortable place.  Especially if you take the all-in industry average of building a traditional data center yourself in a timeline somewhere between 106 and 152 weeks.

to be continued…
