Which do you like? Google's 5 data center best practices or Microsoft's list of 10

How you present information can make a big difference in how it is perceived.  Here are two different ways of presenting what are fundamentally the same ideas, from Google and Microsoft.

It is fairly obvious which one is more user-friendly.

Microsoft released its updated Top 10 Business Practices for Environmentally Sustainable Data Centers and posted the document here.

Google presents its Data Center Best Practices as a list of 5.

1. Measure PUE

You can’t manage what you don’t measure, so characterize your data center’s efficiency performance by measuring energy use. We use a ratio called PUE - Power Usage Effectiveness - to help us reduce energy used for non-computing, like cooling and power distribution. To effectively use PUE it’s important to measure often - we sample at least once per second. It’s even more important to capture energy data over the entire year - seasonal weather variations have a notable effect on PUE.
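As a rough illustration (not Google's actual tooling, and the numbers are made up), PUE is simply total facility energy divided by IT equipment energy. The point about measuring over a full year matters because an annual PUE should be computed from summed energy, not by averaging instantaneous ratios:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy.

    1.0 is the theoretical ideal (every watt goes to computing);
    cooling and power distribution overhead push it higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative monthly samples: (total facility kWh, IT equipment kWh).
# Summer cooling load pushes the instantaneous ratio up, which is why
# a single winter measurement would flatter the facility.
samples = [
    (1200, 1000),  # winter month
    (1500, 1000),  # summer month
]

# Annual PUE from summed energy, not an average of ratios.
total = sum(t for t, _ in samples)
it = sum(i for _, i in samples)
annual_pue = pue(total, it)
print(round(annual_pue, 2))  # 1.35
```

The same division works at any sampling interval; Google's once-per-second sampling just means the energy totals are built from very fine-grained measurements.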

2. Manage airflow

Good air flow management is fundamental to efficient data center operation. Start with minimizing hot and cold air mixing by using well-designed containment. Eliminate hot spots and be sure to use blanking plates for any unpopulated slots in your rack. We’ve found a little analysis can pay big dividends. For example, thermal modeling using computational fluid dynamics (CFD) can help you quickly characterize and optimize air flow for your facility without many disruptive reorganizations of your computing room. Also be sure to size your cooling load to your expected IT equipment, and if you are building extra capacity, be sure your cooling approach is energy proportional.

3. Adjust the thermostat

Raising the cold aisle temperature will reduce facility energy use. Don’t try to run your cold aisle at 70F; set the temperature at 80F or higher — virtually all equipment manufacturers allow this. For facilities using economizers (we strongly recommend it), running elevated cold aisle temperatures is critical as it enables more days of “free cooling” and more energy savings.

4. Use free cooling

“Free cooling” is removing heat from your facility without using the chiller. This is done by using low-temperature ambient air, evaporating water, or using a large thermal reservoir. Chillers are the dominant energy-using component of the cooling infrastructure; minimizing their use is typically the largest opportunity for savings. There is no one ‘right’ way to free cool - but water or air-side economizers are proven and readily available.
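To see why economizer hours dominate the savings picture, here is a back-of-the-envelope sketch. Every figure below is an assumption chosen for illustration (cooling load, chiller and economizer power draw per kW of cooling, and annual free-cooling hours), not measured data from any facility:

```python
# Back-of-the-envelope: energy avoided by economizer ("free cooling") hours.
COOLING_LOAD_KW = 500        # heat to reject from the IT floor (assumed)
CHILLER_KW_PER_KW = 0.5      # chiller input power per kW of cooling (assumed)
ECONOMIZER_KW_PER_KW = 0.05  # fans/pumps still run during free cooling (assumed)
FREE_COOLING_HOURS = 4000    # hours/year the climate permits economizer use (assumed)

# Each free-cooling hour replaces chiller draw with much smaller fan/pump draw.
saved_kwh = (COOLING_LOAD_KW
             * (CHILLER_KW_PER_KW - ECONOMIZER_KW_PER_KW)
             * FREE_COOLING_HOURS)
print(saved_kwh)  # ~900,000 kWh/year avoided by bypassing the chiller
```

This is also why practice #3 (raising the cold aisle temperature) compounds with free cooling: a warmer setpoint raises `FREE_COOLING_HOURS`, since more of the year's ambient conditions qualify.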

5. Optimize power distribution

Minimize power distribution losses by eliminating as many power conversion steps as possible. For the conversion steps you must have, be sure to specify efficient equipment: transformers and power distribution units (PDUs). One of the largest losses in data center power distribution is from the uninterruptible power supply (UPS), so be sure to specify a high-efficiency model. Also keep as high a voltage as close to the load as feasible to reduce line losses.
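Conversion losses compound multiplicatively through the power chain, which is why removing a step helps more than it first appears. The stage efficiencies below are assumed, ballpark values for illustration only:

```python
# Each conversion stage multiplies in its efficiency; the product is what
# actually reaches the load. All efficiencies here are assumptions.
stages = {
    "utility transformer": 0.98,
    "UPS": 0.94,                 # often the single largest loss
    "PDU transformer": 0.98,
    "server power supply": 0.90,
}

overall = 1.0
for name, efficiency in stages.items():
    overall *= efficiency

print(f"{overall:.3f}")  # ~0.812: nearly 19% of input power lost before the load
```

Dropping the PDU transformer stage entirely (one fewer conversion) would lift the chain from roughly 81% to roughly 83% under these assumptions, which at megawatt scale is a substantial, continuous saving.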

 
The Microsoft blog post is not as simple as Google’s.

Microsoft’s Top 10 Business Practices for Environmentally Sustainable Data Centers

Posted by Global Foundation Services in Data Centers, Efficiency and Sustainability

Dileep Bhandarkar, Ph.D., Distinguished Engineer, Global Foundation Services

Microsoft recognizes the tough challenges that data center and IT managers face today as they struggle to support their businesses in the face of rising costs and uncertainty about the future. But the fact is - being “lean and green” is good for both the business and the environment.

It isn’t always easy to know where to begin in moving to greener and more efficient operations. With that in mind, we are sharing our updated Top Ten Best Business Practices for Environmentally Sustainable Data Centers white paper. In this rapidly changing environment it is important that we all continually reassess and share our best practices with each other. For this reason, senior members of Microsoft’s Global Foundation Services (GFS) team have pooled their key learnings in this white paper.

As you’ll read in the list of best practices we’ve compiled, companies can make major gains by providing incentives to their teams to reduce energy consumption and drive greater efficiencies across the entire data center, and by employing a wide range of practices that collectively add up to significant gains. Microsoft has been using these practices for several years now and has found that in addition to helping to improve environmental sustainability, they make best use of our resources and help us stay tightly aligned with our core strategies and business goals.

Microsoft’s top ten best practices for creating sustainable data centers are based on some basic principles: 

Effective resource utilization matters. Energy efficiency is an important element in Microsoft business practices, but equally important is the effective use of resources deployed. We eliminate features that are not essential for operating the services. This principle drives our efforts to right-size our servers based on application requirements. Virtualization also improves server utilization by consolidating multiple instances of an application on the same hardware. Our data center designs offer various levels of redundancy to meet the resiliency needs of the different applications.


Standardization reduces variability and improves agility and costs, while reducing errors. A major initiative in Microsoft data centers involves standardizing the platform. A high degree of variability in the infrastructure can increase costs. Standardizing on a small set of servers, network equipment, and data center technologies can drive economies of scale and reduce support costs. Custom deployments are more error prone and expensive.

A holistic approach to total cost of ownership is essential. It is tempting to make purchase decisions based on acquisition costs, but support and operating costs are often the dominant factor over the life of the equipment. The total cost of ownership should be evaluated against the value proposition of the equipment purchased. For example, consider the cost/performance of your servers instead of just performance. Make sure that reducing costs in one aspect of the operation does not increase costs somewhere else. Spending more on a higher-efficiency power supply can reduce the total cost of ownership!


… 

Wouldn't you like to keep the bad guests out of your data center?

Unfortunately, in the data center you need to host all your business users: the guests that use your space, power, cooling, and network.  Wouldn't it be great if you could pick who you hosted and tell the rest to go look for space elsewhere?  There is a really good place down the street, the Cloud Hotel. :-)

The NY Times has a post on the unwelcome guests of summer.  It has little to do with data centers, but it will probably get some of you thinking of those bad guests, aka the business users you wish you could tell that you are all full.

If Summer Goes, Why Won’t the Guests?


MEGAN MURPHY SCHWAB, a marketing executive in suburban New Jersey, was seven months pregnant with her second child when a friend asked if another friend, who had just arrived in New York, might spend a night at her home to escape the summer heat. Ms. Schwab had met the woman, who seemed nice enough, so she and her husband, Jeff, an accountant, agreed to put her up.

This will probably never happen, but wouldn't it be nice if you could refuse to host some business users in your data center because they are bad guests.

Restoration Disaster draws attention to unknown artist, Data Center outages (disasters) do the same

The Guardian makes the point that the news about the failed restoration actually draws attention to the art.

It's all over the internet, it's trending, tweeting, the funniest art joke of all time. You must know it by now. "Masterpiece of Jesus is destroyed after old lady's attempt to restore damage is a less-than-divine intervention", "Worst painting restoration work in history", "Elderly woman destroys 19th century fresco with DIY restoration".

The author makes the point that this woman should be turned loose on more art.

Similarly, the well-meaning restorer of this obscure Spanish painting should be turned loose on a couple of works that actually matter. Many true masterpieces are starved of the global attention this second-rate Ecce Homo has now got. She could be sent to Italy to see what she can do with the frescoes in the Palazzo Schifanoia in Ferrara. Revered by art historians, these paintings of the months of the year have never quite made it into popular culture. There are 12 paintings, one for every month, so one could be sacrificed for the good of the whole. A hideously repainted face on one of the lesser months might make their creator the 15th-century genius Francesco del Cossa as famous as the 19th century mediocrity Elias Garcia Martinez has now become.

Most of the time data centers aren't big deals.  The exception is Apple's data centers: Apple could build a 100kW data center shed and the news would cover it.

What will draw attention to a data center, though, is an outage. And just like the woman who somehow thought she was making things better, how many data centers fail when someone was trying to fix something?

How did it happen? What was the well-meaning vandal thinking? Reports differ on the meaning of the middle picture in the before-and-after triptych: was this the result of water damage or the self-appointed artist's early effort to prepare the picture for restoration? Picturing how it happened is even funnier than seeing the contrasting versions themselves. Did she, like the Marx Brothers trimming a moustache in Monkey Business, try to fix one bit and then have to do another bit and then another until the whole thing was gone? Was it like Father Ted in the episode of the much-loved clerical comedy where he attempts to mend a car's bodywork with a hammer?

Creative environments work better with climate control vs. a command-and-control approach

There is a post about Sir Ken Robinson on Google's Think Quarterly.  Here is one of the key quotes highlighted.

“You want to free up the abilities of everybody to contribute ideas, because everybody has ideas, and you need to create a climate in which that will happen. The role of a creative leader is not ‘command and control’, it’s more like ‘climate control’.”

The quote comes from this paragraph:

But on whose shoulders does it fall to get the balance right? Is creativity fostered from the top down? “There are some things we know about leadership which tend to inhibit creative thinking,” says Robinson. “Leaders can perpetuate problems when they try and control everything and remove the discretion of people in their organisation. What you want to do is free up the abilities of everybody to contribute ideas, because everybody has ideas, and you need to create a climate in which that will happen. The role of a creative leader is not ‘command and control’, it’s more like ‘climate control’. You create a culture.”

The coolest data centers are ones that are run by people who think this way.  I am way more impressed by the people than the size of a data center or its energy efficiency.  Why?  Because it is easy to make a data center look pretty.  And, you find out what is really going on by talking to the people.

I am trying to find the time to read Sir Ken Robinson's book.

Out of Our Minds: Learning to be Creative


Facebook shares its data analysis; one way to get fired is to look where you aren't supposed to

Techcrunch has a post on Facebook's data analysis.

How Big Is Facebook’s Data? 2.5 Billion Pieces Of Content And 500+ Terabytes Ingested Every Day

JOSH CONSTINE


Facebook revealed some big, big stats on big data to a few reporters at its HQ today, including that its system processes 2.5 billion pieces of content and 500+ terabytes of data each day. It’s pulling in 2.7 billion Like actions and 300 million photos per day, and it scans roughly 105 terabytes of data each half hour. Plus it gave the first details on its new “Project Prism”.
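To put those headline figures in perspective, it helps to convert them to per-second rates. This is just simple arithmetic on the numbers quoted in the article, using 1 TB = 1024 GB:

```python
# Convert the quoted daily Facebook figures into per-second rates.
SECONDS_PER_DAY = 86_400

ingest_tb_per_day = 500
likes_per_day = 2_700_000_000
photos_per_day = 300_000_000
scan_tb_per_half_hour = 105

print(ingest_tb_per_day * 1024 / SECONDS_PER_DAY)  # ~5.9 GB ingested per second
print(likes_per_day / SECONDS_PER_DAY)             # 31,250 Likes per second
print(photos_per_day / SECONDS_PER_DAY)            # ~3,472 photos per second
print(scan_tb_per_half_hour * 48)                  # 5,040 TB scanned per day
```

Note the last line: the system scans roughly ten times more data per day (about 5 PB) than it ingests, which is what "making an impact on your business" with the data looks like in volume terms.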

VP of Engineering Jay Parikh explained why this is so important to Facebook: “Big data really is about having insights and making an impact on your business. If you aren’t taking advantage of the data you’re collecting, then you just have a pile of data, you don’t have big data.” By processing data within minutes, Facebook can roll out new products, understand user reactions, and modify designs in near real-time.

Another stat Facebook revealed was that over 100 petabytes of data are stored in a single Hadoop disk cluster, and Parikh noted “We think we operate the single largest Hadoop system in the world.” In a hilarious moment, when asked “Is your Hadoop cluster bigger than Yahoo’s?”, Parikh proudly stated “Yes” with a wink.

If you are concerned about who looks at the data, consider that one way to get fired is to look where you are not supposed to.

Users might be a little bit uneasy about the idea that Facebook employees could look so deep into their activity, but Facebook assured me there are numerous protections against abuse. All data access is logged so Facebook can track which workers are looking at what. Only those working on building products that require data access get it, and there’s an intensive training process around acceptable use. And if an employee pries where they’re not supposed to, they’re fired. Parikh stated strongly “We have a zero-tolerance policy.”