GM Greens Its Data Center

I’ve been staring at these browser tabs and have been meaning to post on GM’s green data center efforts.

Here is the official GM press release.  They focused on LEED.

GM’s LEED Gold Data Center Drives IT Efficiency

Fri, Sep 13 2013

WARREN, Mich. – A flywheel for battery-free backup power and in-row cooling that reduces the need for electricity contribute to a 70 percent reduction in energy use at General Motors’ world-class Enterprise Data Center, which has earned Gold certification by the U.S. Green Building Council’s LEED, or Leadership in Energy and Environmental Design, program.

Fewer than 5 percent of data centers in the U.S. achieve LEED certification, according to the building council. GM’s data hub on its Technical Center campus in this Detroit suburb is the company’s fifth LEED-certified facility and second brownfield project.

Ars Technica does a much better job of telling the story.

Waterfalls and flywheels: General Motors’ new hyper-green data center

Ars gets a look inside at the first GM-owned data center in nearly 20 years.

A look down the completed "data hall" of GM's Warren Enterprise Data Center. With 2,500 virtual servers up and running, the center is at a tiny fraction of its full capacity. (Photo: General Motors)

WARREN, Michigan—General Motors has gone through a major transformation since emerging from bankruptcy three years ago. Now cash-flow positive, the company is in the midst of a different transformation—a three-year effort to reclaim its own IT after 20 years of outsourcing.

Here are a few more details.

Building a cloud, under one roof

GM's IT Operations and Command Center, where all of GM's IT infrastructure—including its partner network, OnStar systems, and design and engineering systems—is monitored and controlled.

The first step in that transformation, Liedel said, was converting everyone running its IT operations to GM employees. Next came centralizing control over the company's widely-scattered IT assets.

So far, three of the company's 23 legacy data centers have been rolled into the new Warren data center. That's eliminated a significant chunk of the company's wide-area network costs. "We have 8,000 engineers at (Vehicle Engineering Center) here," Liedel said. And those engineers are pushing around big chunks of data—the "math" for computer-aided design, computer-aided manufacturing, and a wide range of high-performance computing simulations.

And GM chose flywheels.

Almost no batteries required

One of the Warren Enterprise Data Center's two diesel generators.

Aside from its energy efficiency, GM's Warren Data Center picks up green cred in the way it handles its emergency power. Instead of using an array of lead-acid batteries to provide current in the event of an interruption of power, the data center is equipped with uninterruptible power supplies from Piller that use 15,000 pound flywheels spinning at 3,300 revolutions per minute.
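The article gives the flywheel's mass (15,000 pounds) and speed (3,300 rpm) but not its geometry, so here is a rough sketch of the energy one of these units could store. The rotor radius, the solid-cylinder approximation, and the 1 MW critical load are my assumptions, not figures from GM or Piller:

```python
import math

# Back-of-the-envelope energy stored in one flywheel UPS.
# Mass and rpm come from the article; radius and load are assumed.
mass_kg = 15000 * 0.453592        # 15,000 lb -> kg
radius_m = 0.5                    # assumed rotor radius
rpm = 3300

inertia = 0.5 * mass_kg * radius_m ** 2   # solid-cylinder moment of inertia
omega = rpm * 2 * math.pi / 60            # angular speed in rad/s
energy_j = 0.5 * inertia * omega ** 2     # stored kinetic energy, joules

energy_kwh = energy_j / 3.6e6
print(f"Stored energy: ~{energy_kwh:.1f} kWh")

# Ride-through at an assumed 1 MW critical load, usably drawing down
# half the stored energy before the diesel generators pick up:
load_w = 1_000_000
ride_through_s = (0.5 * energy_j) / load_w
print(f"Ride-through at 1 MW: ~{ride_through_s:.0f} s")
```

Under those assumptions you get on the order of 14 kWh per flywheel and tens of seconds of ride-through, which is the point of the design: flywheels only need to bridge the gap until the diesel generators start, so no battery room is required.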


Are you designing around The Fiction of Memory? Your memory is fragile

I spend way too much time thinking about how to think. It is actually something that has a word for it: metacognition.

Metacognition refers to one’s knowledge concerning one's own cognitive processes and products or anything related to them, e.g., the learning-relevant properties of information or data. For example, I am engaging in metacognition if I notice that I am having more trouble learning A than B; [or] if it strikes me that I should double check C before accepting it as fact.

—J. H. Flavell (1976, p. 232).

My wife says it much more simply: “There you go, thinking about thinking.”

Here is something to get you thinking.  Our memory is fragile.

[Image: slide from Elizabeth Loftus’s TED Talk]

The above is from this TED Talk by Elizabeth Loftus on The Fiction of Memory.

Why go through all this? Because if you can design systems that account for people’s inability to recognize when they are telling the fiction of their memory, you can see things others can’t.


Greening the Data Center in the HPC Scenario

I have been writing on green data centers for six years. This month marks my two-year anniversary working for GigaOm Research, and thanks to the folks at Verne Global, who sponsored a white paper on the green data center topic, there is a just-released paper for GigaOm Research subscribers on The Value of Green HPC.

Jonathan Koomey, Tate Cantrell, RMS, and BMW were interviewed for the paper and provided valuable perspectives on the current state of greening a data center in a specific use case: HPC (high-performance computing).

Table of Contents

The value of green HPC

Oct. 30, 2013
This report underwritten by: Verne Global

1. Executive Summary

Forward-thinking CIOs are anticipating increased regulation of carbon emissions and want lower and more-predictable energy costs over the long term. As part of that process, they are looking at ways to go green. They know that data centers are under scrutiny for how sustainable they are, and they know that demand for data center services is growing while the costs of fossil fuels are already high, getting higher, and becoming difficult to predict.

Green data centers present one solution because they use renewable energy sources, have efficient data center facilities, and use efficient IT equipment. The savings these data centers offer can be transformed into more processing power, which gives new opportunities for increased business revenue. Many of these data centers are located where they can take advantage of an area’s natural resources (cool climates, for example) and sources of power such as wind, geothermal, and hydroelectric.
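The "efficient data center facilities" part of that equation is usually measured as PUE (power usage effectiveness): total facility energy divided by IT energy. A quick sketch shows why it matters; the PUE figures and IT load below are illustrative assumptions, not numbers from the report:

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.9 means 0.9 W of cooling/power-path overhead per watt of IT.
# These figures are illustrative, not from the GigaOm report.
it_load_kw = 1000                      # hypothetical IT load

scenarios = {"legacy facility": 1.9, "green facility": 1.2}

totals = {name: it_load_kw * pue for name, pue in scenarios.items()}
for name, total_kw in totals.items():
    overhead_kw = total_kw - it_load_kw
    print(f"{name}: PUE {scenarios[name]}, "
          f"total draw {total_kw:.0f} kW, overhead {overhead_kw:.0f} kW")
```

Under these assumed numbers, the same IT load draws 700 kW less at the green facility — savings that, as the summary notes, can be redirected into more processing power.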

However, not all applications are suitable for offloading to a data center, whether it’s green or not. Deciding which applications can be placed in a green data center while still satisfying business and performance specifications is critical to success. Among the candidates to consider are high-performance computing (HPC) applications. HPC was once limited to scientific research, but many businesses now use it to analyze large amounts of data and to create simulations and models. HPC applications are compute-intensive and, when applied at scale, require large amounts of energy. However, because users of these applications don’t require real-time responses, you have flexibility in where you place these applications. This means that you can take advantage of the lower energy costs a green data center offers, no matter where it’s located. This report analyzes these topics as well as the following areas:

  • Three factors to consider in choosing a green data center for HPC are the source of the data center’s power, the efficiency of its IT equipment, and the data center’s efficiency.
  • Today’s CIOs have the options of building a new data center, refurbishing an existing data center, using co-location, and using the cloud. Each option needs to be balanced against the following criteria: the requirements of increased data center traffic, government regulations, volatile energy costs, and sustainable practices.
  • Latency is the single most important criterion for choosing the appropriate applications for cloud or co-location. Following latency, other considerations are whether the application must peer with another company, the business requirements, the application architecture, current and predicted application workload, and the application’s resource consumption rate.
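The latency criterion in the list above is mostly physics: light in fiber travels at roughly c/1.47, so distance sets a floor on round-trip time no matter how good the network is. A rough sketch, using approximate great-circle distances I've assumed for illustration (real routes add switching and queuing delay on top):

```python
# Minimum round-trip latency over fiber, set by propagation speed.
# Distances are rough great-circle figures chosen for illustration.
SPEED_IN_FIBER_KM_S = 299_792 / 1.47   # ~204,000 km/s

routes_km = {
    "New York -> Reykjavik": 4200,
    "New York -> Chicago": 1150,
}

rtts_ms = {route: 2 * km / SPEED_IN_FIBER_KM_S * 1000
           for route, km in routes_km.items()}

for route, rtt_ms in rtts_ms.items():
    print(f"{route}: ~{rtt_ms:.0f} ms minimum RTT")
```

An extra ~40 ms round trip rules out latency-sensitive transactional work, but it is irrelevant to a batch HPC simulation that runs for hours — which is exactly why HPC workloads can chase cheap green power to places like Iceland.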

A good perspective on Google's Data Center efforts, myth vs. reality

Mike Manos writes a great post on Google’s latest rumors. To give you Mike’s perspective, let’s start with the end.

Editor’s Note: I have many close friends in the Google Infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially solving problems at scale. What I find simply amazing is how often, in the search for innovation, our industry creates things that may or may not be there, and we convince ourselves so firmly that they exist.

Mike then starts at the beginning, speculating on what Google would do with deep earth mining equipment.

Google Purchase of Deep Earth Mining Equipment in Support of ‘Project Rabbit Ears’ and Worldwide WIFI Availability

Posted on October 31, 2013


(10/31/2013 – Mountain View, California) – Close examination of Google’s data center construction related purchases has revealed the procurement of large scale deep earth mining equipment.   While the actual need for the deep mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light known only as Project: Rabbit Ears. 

And Mike references an excellent bit about a floating data center.

While the technical intricacies of the project fascinate many, the initiative does have its critics like Compass Data Center CEO, Chris Crosby, who laments the potential social aspects of this approach, “Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port.”  Additionally, Crosby mentions the potential for a backslide of human rights violations, “I think we can all agree that the prospect of being flogged or keel hauled really narrows down the possibility for those outage causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny.”

Read all of Mike’s post. It tells a story of myth vs. reality, and of how much myth gets told as truth.

All is fair in the fight for media traffic.

Sad that $300 mil spent on Obamacare creates no innovation, the benefits of a risk-less approach

One of the sad things about the $300 million spent on Obamacare is that no innovation comes from the effort. NASA’s mission to put a man on the moon had huge risks and has many innovations it can claim. Here is a NASA PDF you can check out.


From a technical standpoint there isn’t anything Obamacare is doing that is innovative. In fact, you can think of the flaw of Obamacare as living in the ’60s, with a procurement process designed for buying commodities applied to IT.

That cancer is called “procurement,” and it’s primarily a culture-driven cancer, one that tries to mitigate so much risk that it all but ensures it. It’s one that allowed for only a handful of companies like CGI Federal to not only build disasters like this, but to keep building more and more failures without any accountability to the ultimate client: us. Take a look at CGI’s website, and the industries they serve: financial services, oil and gas, public utilities, insurance. Have you had a positive user experience in any of those industries?

The cancer starts with fear. Contracting officers — people inside of the government in charge of selecting who gets to do what work — are afraid of their buys being contested by people who didn’t get selected. They’re also afraid of things going wrong down the line inside of a procurement, so they select vendors with a lot of “federal experience” to do the work. Over time, those vendors have been consolidated into pre-approved lists like GSA’s Alliant schedule. Then, for risk mitigation’s sake, they end up being the only ones allowed to compete for bids.

This results in a culturally accepted idea that cost implies quality. To ensure no disasters happen, throw lots of money at it. And when things go terribly wrong, throw more money at the same people who caused the problem to fix the problem. While this assumption may work well with commodities (want to ensure that you get lots of high-quality gravel? Buy a lot more gravel than you need, then throw out the bad gravel) the evidence points to the contrary with large IT purchases: they usually fail.