Greening The Data Center in the HPC scenario

I have been writing on Green Data Centers for six years.  This month I hit my two-year anniversary working for GigaOm Research, and thanks to the folks at Verne Global, who sponsored a white paper on the Green Data Center topic, here is a just-released paper for GigaOm Research subscribers: The Value of Green HPC.

Jonathan Koomey, Tate Cantrell, RMS, and BMW were interviewed for the paper and provided valuable perspectives on the current state of greening a data center in a specific use case: HPC (high-performance computing).


The value of green HPC

Oct. 30, 2013
This report underwritten by: Verne Global
Executive Summary

Forward-thinking CIOs are anticipating increased regulation of carbon emissions and want lower and more-predictable energy costs over the long term. As part of that process, they are looking at ways to go green. They know that data centers are under scrutiny for how sustainable they are, and they know that demand for data center services is growing while the costs of fossil fuels are already high, getting higher, and becoming difficult to predict.

Green data centers present one solution because they use renewable energy sources, have efficient data center facilities, and use efficient IT equipment. The savings these data centers offer can be transformed into more processing power, which gives new opportunities for increased business revenue. Many of these data centers are located where they can take advantage of an area’s natural resources (cool climates, for example) and sources of power such as wind, geothermal, and hydroelectric.

However, not all applications are suitable for offloading to a data center, whether it’s green or not. Deciding which applications can be placed in a green data center while still satisfying business and performance specifications is critical to success. Among the candidates to consider are high-performance computing (HPC) applications. HPC was once limited to scientific research, but many businesses now use it to analyze large amounts of data and to create simulations and models. HPC applications are compute-intensive and, when applied at scale, require large amounts of energy. However, because users of these applications don’t require real-time responses, you have flexibility in where you place these applications. This means that you can take advantage of the lower energy costs a green data center offers, no matter where it’s located. This report analyzes these topics as well as the following areas:

  • Three factors to consider in choosing a green data center for HPC are the source of the data center’s power, the efficiency of its IT equipment, and the data center’s efficiency.
  • Today’s CIOs have the options of building a new data center, refurbishing an existing data center, using co-location, and using the cloud. Each option needs to be balanced against the following criteria: the requirements of increased data center traffic, government regulations, volatile energy costs, and sustainable practices.
  • Latency is the single most important criterion for choosing the appropriate applications for cloud or co-location. Following latency, other considerations are whether the application must peer with another company, the business requirements, the application architecture, current and predicted application workload, and the application’s resource consumption rate.
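To make the report's first bullet concrete, here is a minimal back-of-the-envelope sketch (mine, not from the report) of how you might weigh the three factors when comparing candidate sites for an HPC workload. The site names, efficiency numbers, flat electricity prices, and carbon intensities below are hypothetical assumptions for illustration only.

```python
# Hypothetical comparison of candidate sites for an HPC workload, using the
# three factors above: power source (price and carbon intensity), IT equipment
# efficiency, and facility efficiency (PUE).
# All names and numbers are illustrative assumptions, not data from the report.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    pue: float             # facility efficiency: total power / IT power
    it_efficiency: float   # relative work per watt vs. a baseline (1.0 = baseline)
    price_per_kwh: float   # assumed flat electricity price, USD/kWh
    kg_co2_per_kwh: float  # carbon intensity of the site's power mix

def yearly_cost_and_carbon(site: Site, baseline_it_kw: float, hours: float = 8760.0):
    """Estimate annual energy cost (USD) and emissions (kg CO2) for a fixed amount of HPC work."""
    it_kw = baseline_it_kw / site.it_efficiency  # better IT gear needs less power for the same work
    total_kwh = it_kw * site.pue * hours         # facility overhead on top of the IT load
    return total_kwh * site.price_per_kwh, total_kwh * site.kg_co2_per_kwh

sites = [
    Site("legacy-colo", pue=1.8, it_efficiency=1.0, price_per_kwh=0.10, kg_co2_per_kwh=0.45),
    Site("green-hpc",   pue=1.2, it_efficiency=1.3, price_per_kwh=0.05, kg_co2_per_kwh=0.00),
]
for s in sites:
    cost, co2 = yearly_cost_and_carbon(s, baseline_it_kw=500)
    print(f"{s.name}: ${cost:,.0f}/yr, {co2:,.0f} kg CO2/yr")
```

Because HPC batch workloads tolerate latency, a simple model like this captures much of the siting decision; the remaining bullets above cover the application-level questions it deliberately ignores.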

A good perspective on Google's Data Center efforts, myth vs. reality

Mike Manos writes a great post on Google’s latest rumors.  To give you Mike’s perspective, let’s start with the end.

Editor’s Note: I have many close friends in the Google Infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially solving problems at scale.   What I find simply amazing is, in the search for innovation, how often our industry creates things that may or may not be there and convinces itself so firmly that they exist. 

Mike then starts at the beginning, speculating on what Google would do with deep earth mining equipment.

Google Purchase of Deep Earth Mining Equipment in Support of ‘Project Rabbit Ears’ and Worldwide WIFI availability…
Posted on October 31, 2013


(10/31/2013 – Mountain View, California) – Close examination of Google’s data center construction related purchases has revealed the procurement of large scale deep earth mining equipment.   While the actual need for the deep mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light known only as Project: Rabbit Ears. 

Mike also references an excellent point about a floating data center.

While the technical intricacies of the project fascinate many, the initiative does have its critics like Compass Data Center CEO, Chris Crosby, who laments the potential social aspects of this approach, “Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port.”  Additionally, Crosby mentions the potential for a backslide of human rights violations, “I think we can all agree that the prospect of being flogged or keel hauled really narrows down the possibility for those outage causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny.”

Read all of Mike’s post.  It tells a story of what is myth vs. reality and how much myth gets told as the truth.  

All is fair in the fight for media traffic.

Sad that $300 mil spent on Obamacare creates no innovation, the benefits of a risk-less approach

One of the sad things about the $300 million spent on Obamacare is that no innovation comes from the effort.  NASA’s mission to put a man on the moon had huge risks and has many innovations it can claim.  Here is a NASA PDF you can check out.


From a technical standpoint there isn’t anything Obamacare is doing that is innovative.  In fact, you can think of the flaw of Obamacare as living in the ’60s, with a procurement process designed for buying commodities applied to IT.

That cancer is called “procurement” and it’s primarily a culture driven cancer one that tries to mitigate so much risk that it all but ensures it. It’s one that allowed for only a handful of companies like CGI Federal to not only build disasters like this, but to keep building more and more failures without any accountability to the ultimate client: us. Take a look at CGI’s website, and the industries they serve: financial services, oil and gas, public utilities, insurance. Have you had a positive user experience in any of those industries?

The cancer starts with fear. Contracting officers — people inside of the government in charge of selecting who gets to do what work — are afraid of their buys being contested by people who didn’t get selected. They’re also afraid of things going wrong down the line inside of a procurement, so they select vendors with a lot of “federal experience” to do the work. Over time, those vendors have been consolidated into pre-approved lists like GSA’s Alliant schedule. Then, for risk mitigation’s sake, they end up being the only ones allowed to compete for bids.

This results in a culturally accepted idea that cost implies quality. To ensure no disasters happen, throw lots of money at it. And when things go terribly wrong, throw more money at the same people who caused the problem to fix the problem. While this assumption may work well with commodities (want to ensure that you get lots of high-quality gravel? Buy a lot more gravel than you need, then throw out the bad gravel) the evidence points to the contrary with large IT purchases: they usually fail.

Lego rendering of Data Center History

Data centers are what almost all of you care about.  And some of you may enjoy Legos.  Here is a blog post that combines both.

Datacenter History: Through the Ages in Lego

The data center has changed dramatically through the ages, as our Lego minifigures can testify!

As a rule, I don’t participate in contests: There’s usually little reward, considering chances of winning. But when Juniper Networks asked me to build a datacenter from Lego bricks, I took a second look. And, seeing that the winner can support a charity of their choice, I felt that this was an excellent opportunity for me to have some fun while doing some good!

The above post goes through history.  For those of you who won’t click through to the post, here is the modern Lego data center.

The Modern Datacenter

We now turn to today. Our modern datacenter evolved from the history shown here: We retain the same 19-inch rack mount system used for Colossus way back during World War II. All of our machines are “Turing Complete” like the ENIAC. We run UNIX and Windows Server on CPUs spawned from the PDP-11, and our Windowed GUIs reflect the Xerox Alto. Today’s multi-core servers and multi-threaded operating systems carry the lessons learned by Cray and Thinking Machines.

A modern datacenter, complete with an EMC VMAX, Juniper router, and rackmount servers

My Lego datacenter tour ends here, with two racks of modern equipment. At the rear is an EMC Symmetrix VMAX which, like the CM-5, calls attention to its black monolith shape with a light bar. At front is a Juniper T-Series router (white vertical cards with a blue top) rack-mounted with a number of gold servers. Our technician holds an iPad while walking across a smooth raised floor. I even used a stress-reducing blue color for the walls!

Although the Symmetrix model only has three Lego axes, the router rack features four: The servers sit on forward-facing studs while the router is vertical. Both use black side panels, reflecting today’s “refrigerator” design.

Reporter uses Facebook's Rooftop to check out Apple's data center in Prineville

Facebook has been getting some news with the opening of its cold storage facility.

http://sustainablebusinessoregon.com/articles/2013/10/exclusive-a-look-at-facebooks.html?s=image_gallery

http://www.bendbulletin.com/article/20131016/news0107/310160339/

http://readwrite.com/2013/10/16/facebook-prineville-cold-storage-photos#awesm=~oluaddHy6afA5O

What is funny is that one reporter used the Facebook rooftop to check out Apple’s data center.

As it turns out, Apple's complex, code-named "Pillar"—and completely devoid of any markings identifying it as an outpost of the Cupertino company—is a literal stone's throw from Facebook's Prineville, Ore. hub. Tracking down the location of Apple's stealth site was just as easy as peering southeast from Facebook's roof, which ironically offered what was probably the best view in town. The Facebook employees pointed it out to me while cracking jokes about its apparently not-so-secret alias.

Construction began on the Apple data center last October, and now the first phase's main building (the large black one) appears to be complete, to the untrained, telephoto-lens equipped eye, anyway. Eventually the project will encompass two full 338,000-square foot data centers sprawling across Apple's 160-acre Prineville plot. And because everything is spookier and more fascinating when it's built out in the desert, we bring you the photographic fruits of our Veronica Mars-style investigation of Apple's Area 51.
