What is the future of the data center? Looking for patterns in software and hardware

I’ve been heads down this past week working on some research on the future of data centers.  Here are a few thoughts after a week of pondering.

Part of the problem in data centers is the lack of awareness among the various parties developing them.  I stopped being surprised when a facility operations team had no idea what hardware and software were running in their data centers.  Part of the problem is that people don't take the time to communicate how things are changing.

Imagine you are running a Disney theme park.


And the facility operations team's mindset was, "as long as the park is running, our job is done."  Siloed thinking is pervasive in data center IT because the overall system is so complex, and quite frankly the other parts are often boring.  Does a facility ops team get excited about a new cloud environment or a Ruby on Rails deployment?


I think the responsibility for fixing this problem resides with management, as it is difficult, if not impossible, for individual contributors to change the mindset.  FYI, when I reviewed this post, it reminded me that Mike Manos worked for Disney Interactive; maybe he picked up some ideas there on how a team needs to work together for the common goal.

Verizon publishes Carbon per Terabyte metric, 15% improvement 2009 - 2010

Verizon has a press release on its new carbon metric.

Verizon Develops New Metric to Accurately Measure Company's Carbon Efficiency as Broadband, IP, Wireless and Video Services Grow

Metric Will Help Company to Continue Its Energy-Conservation Improvements Amid Increasing Demands on Its Network

NEW YORK – April 28, 2011 –

Verizon has developed a new metric for measuring carbon efficiency, enabling the company, for the first time, to accurately quantify the impact of all of its green initiatives.

The metric will help Verizon continue to make improvements in energy conservation and efficiency, as the rapid increase in demand for broadband, IP network services, wireless data and video increases the demands on the company's network - and the amount of energy needed to operate the network.

Called the "carbon intensity metric," the new measurement was developed by Verizon's Sustainability Office and tested over the past 12 months. The tests showed an improvement of approximately 15 percent in the company's carbon efficiency, from 2009 to 2010.

The metric is derived by first combining Verizon's total carbon emissions (in metric tons) from the electricity, building fuels and vehicle fuels used to run the company's business. Then, that total is divided by the number of terabytes of data that the company transports across its network. (One terabyte equals about 300 feature-length movies.) Verizon transported 78.6 million terabytes across its global network in 2010 - an increase of about 16 percent, compared with 2009.
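The arithmetic behind the metric is worth making concrete. Here is a minimal sketch in Python; the traffic figures come from the press release, but the emissions totals are hypothetical, chosen only to illustrate the ~15% improvement the release reports, since Verizon does not disclose its total tonnage here.

```python
# Verizon's "carbon intensity metric": total CO2 emissions (metric tons)
# from electricity, building fuels, and vehicle fuels, divided by the
# terabytes of data transported across the network.

def carbon_intensity(total_emissions_tonnes, terabytes_transported):
    """Metric tons of CO2 per terabyte of network traffic."""
    return total_emissions_tonnes / terabytes_transported

tb_2010 = 78.6e6            # 78.6 million TB in 2010, per the press release
tb_2009 = tb_2010 / 1.16    # 2010 traffic grew ~16% over 2009

# Emissions totals below are made up for illustration.
intensity_2009 = carbon_intensity(7.0e6, tb_2009)
intensity_2010 = carbon_intensity(6.9e6, tb_2010)

improvement = 1 - intensity_2010 / intensity_2009
print(f"carbon intensity improved {improvement:.0%}")  # ~15% with these inputs
```

Note that the metric can improve even while absolute emissions stay flat, as long as traffic grows faster than emissions.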

We’ll see if others adopt this method.

One thing I found deceptive is this graph on the Verizon environmental site.

[image: Verizon carbon emissions graph, 2007 - 2009]

Note how the scale is not provided.  From 2007 to 2009 there is a 10% reduction, but the graphing deception makes the reduction look like more than 33%.  It's too bad you can't make money by finding deceptive graphing techniques; it would keep the marketing folks a lot more honest.
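The distortion is easy to quantify: when a chart's y-axis starts above zero, the visual drop between bars is the real drop divided by a smaller range. A quick sketch, where the axis minimum of 70 is a guess chosen to reproduce the roughly-33% visual effect:

```python
# How a truncated y-axis exaggerates a decline.
# Real values: 100 falling to 90, i.e. a 10% reduction.
# The axis_min values are hypothetical.

def visual_drop(v_start, v_end, axis_min):
    """Fraction of bar height lost when the axis starts at axis_min."""
    return (v_start - v_end) / (v_start - axis_min)

actual = 1 - 90 / 100                          # 10% real reduction
honest = visual_drop(100, 90, axis_min=0)      # bars shrink by 10%
deceptive = visual_drop(100, 90, axis_min=70)  # bars shrink by ~33%

print(f"actual {actual:.0%}, honest axis {honest:.0%}, "
      f"truncated axis {deceptive:.0%}")
```

The closer the axis minimum creeps toward the smallest value, the more dramatic the same 10% reduction appears.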

Hot Ticket where EU gets priority, Google’s Data Center Summit 2011 in Zurich May 24

Google will officially announce its Data Center Summit 2011 next week to the world of Google data center followers.  Attendance is limited, so not everyone can go.  What will get you to the top of the list of potential attendees is being based in the EU with a passion for energy efficiency in the data center.

Information about the Summit itself is straightforward - innovative thinkers and industry leaders are coming together to discuss energy efficiency best practices for data centres. For Google's part, we will share the total cost of ownership analysis of a computing and network room (CRN) retrofit and explain how seawater cooling makes sense for our facility in Finland. Other presenters will give similar accounts of the results from adhering to efficiency best practices as well as taking advantage of local cooling solutions. We are hoping to see the European data centre community continue to focus on improving operational efficiency.

If you don’t qualify, pass this post on to a Green Data Center friend in the EU.


With Google’s permission I am sharing the event web site before they post the event on their blog next week.


Here is the event schedule with speakers.

Event Schedule

8:30
Keynotes on data centre sustainability and best practices
Zahl Limbuwala, Chartered Institute for IT; Harkeeret Singh, The Green Grid

10:00
Best practice implementation case studies
Joe Kava, Google; Dean Nelson, eBay

11:20
Panel discussion
James Hamilton, Amazon; Robert Coupland, Telecity Group; Brian Waddell, Norman Disney and Young; Mark Eichenberger, UBS

12:00
Lunch

13:30
Local cooling solutions and geo-independent approaches to efficiency
Joe Kava, Google; Dileep Bhandarkar, Microsoft; Jochen Berger, PlusServer; Chris Malone, Google

15:50
Panel discussion
Mark Monroe, The Green Grid; Andre Oppermann, DeepGreen; Bruno Michel, IBM Research Lab; Jeff Monroe, Verne Global

17:00
Reception

I’ll be at the event, will most likely attend DataCenterDynamics Zurich the day after, and will try to add some data center tours while I am there.


Attending Structure Cloud Conference June 22-23 2011

A friend recently asked me which cloud conference I would recommend he attend.  I suggested GigaOm’s Structure.


Making Sense of the Real Cloud

After years of questions about what cloud computing is and how it will affect IT, we’re finally starting to get answers. With major acquisitions having gone down, hybrid clouds now a reality and the federal government eyeing cloud-inspired legislation, it is becoming more clear how the cloud landscape will shape up. At Structure 2011, we’ll address these issues and more to help attendees make sense of where cloud services are headed and how they’ll affect everything from application development to data center design.

One of the main things that got my attention for the event is the list of speakers.  Here are some people I know, and it will be good to see their latest presentations and chat in person again.

Don Basile

CEO, Violin Memory

Don Basile joined Violin Memory in 2009 and grew the company from under $10m in funding and 15 employees to a $110 million backed entity with a staff of over 120. The company’s Memory Arrays are changing the datacenter for companies like AOL, Brand.net, Tagged.com, Oracle, Juniper, and HP through its patent-pending flash vxMemory and vRAID technology. Prior to his role at Violin Memory, Don was Chairman and CEO of Fusion-io. Earlier, during the rise of the Cable Industry, Don pioneered digital video insertion and Internet advertising as an executive of Lenfest. Don holds M.S. and Ph.D. degrees from Stanford University.

Barry Evans

CEO, Calxeda

Barry Evans is an experienced semiconductor executive, most recently as VP and GM of Marvell’s Application Processing BU in the Cellular and Handheld Group. He was responsible for the Xscale (ARM-based) product line, the world’s highest performance handheld processors with revenues exceeding $300 million. He served as Intel’s Director of Marketing for Application Processors for Xscale and Low Power x86 customer engagements and product strategies to address the wireless handheld market. Mr. Evans is an 18 year veteran of the semiconductor industry having held roles in field sales and marketing management across wireless handheld, telecommunications, embedded servers, and embedded computing applications. He holds a BSEE from University of Texas at Austin and an MBA from Boston University.

Luke Kanies

CEO, Puppet Labs

Luke is the founder and CEO of Puppet Labs and the founder of the Puppet project. He helped kickstart the devops movement by preaching Infrastructure as Code, and he believes that computers should be used, not managed.

Paul Maritz

CEO, VMware

Paul Maritz joined VMware in 2008 as President and CEO. He was previously President of Cloud Infrastructure and Services at EMC after the company's acquisition of Pi, where he was founder and CEO. Before that, he spent 14 years at Microsoft. He was a member of the five-person Executive Committee that managed the overall company and he oversaw the development and marketing of Windows 95, Windows NT, and Windows 2000, Visual Studio and SQL Server, and the complete Office and Exchange Product Lines. He also spent five years at Intel. Born and raised in Rhodesia (now Zimbabwe), Paul is a graduate in Mathematics and Computer Science of the Universities of Cape Town and Natal in South Africa. He is Chairman of the Grameen Foundation.

Satya Nadella

Microsoft

Sam Ramji

VP, Strategy, Apigee

Sam is Vice President of Strategy at Apigee, the leading API products and services company. He brings over 15 years of industry experience in enterprise software, product development, and open source strategy. Prior to Apigee, Ramji led open source strategy across Microsoft. He was a founding member of the AquaLogic product team and has built large-scale enterprise and Web-scale applications, leading the Ofoto engineering team through its acquisition by Kodak. Other experience includes hands-on development of client, client-server and distributed applications on Unix, Windows and Macintosh at companies ranging from Broderbund to Fair Isaac.

Google’s Data Center Security Video

Google’s Security and Data Protection in a Google Data Center video has gone viral with 152,000 views.  Yesterday when I checked, it was at 110,000.  The video was posted on Apr 13, 2011, but the comments only started three days ago, so most likely it has been on YouTube since April 13 but wasn't promoted until April 23.  Google could not have asked for better timing, coming right after the AWS outage.

What I found interesting was one article saying the video is related to Facebook’s Open Compute Project.

The media has juxtaposed the data center video with Facebook's Open Compute project, in which the company open sourced its data center hardware and schematics earlier this month.

Facebook's move was an open-source olive branch to the computing community at large, but it was also a calculated play to urge the creation of less expensive, commodity servers.

Google's video tour is an educational play designed to assure enterprises and federal agencies considering a Google Apps collaboration software contract of its stringent data security.

I know the Google guys have been thinking about this video for over six months as a way to promote the security of data in Google data centers.  Has Facebook been thinking about the Open Compute Project for over six months?

The Register highlights the hard drive security:

Nonetheless, when a hard drive fails or no longer exhibits prime performance and must be disposed of, Google uses multiple techniques to ensure that the data can't be read at all. It overwrites the data, and then it uses a complete disk read to verify that all data has been removed. When disk reaches the end of its life, Google will then destroy it. This involves pushing a steel piston through the center of the drive and then shredding it into relatively small pieces. The remains of the drives are then sent to recycling centers.

Google hard drive crusher

The Crusher: Google gives hard drives the piston treatment
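The overwrite-and-verify step The Register describes can be sketched in a few lines. This is only an illustration of the idea, not Google's actual tooling; a real wipe would target the block device itself, make multiple overwrite passes, and still not be trusted on its own, which is why the drives get the piston afterward.

```python
import os
import tempfile

def wipe_and_verify(path, chunk=1 << 20):
    """Overwrite a file with zeros, then do a complete read to confirm."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining:
            n = min(chunk, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())   # force the zeros out to storage
    # Full read-back verification, as described in the quote.
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                return True
            if block.count(0) != len(block):
                return False

# Demo on a throwaway file standing in for a drive.
fd, path = tempfile.mkstemp()
os.write(fd, b"sensitive data " * 1000)
os.close(fd)
print(wipe_and_verify(path))  # True: every byte reads back as zero
os.remove(path)
```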

What doesn’t get mentioned, and what I think is cooler than the low-tech ways of handling hard drive security, is Google’s sharding methods to protect data and achieve scalability.  But this is way too geeky for most users to be interested in.

We can hope that, with the popularity of the video and the news coverage, Google and others will create more data center videos.

Could you imagine a documentary-style video of the AWS outage?  Would it be a comedy or a drama?  A video of Sony’s PlayStation outage would be a tragedy, or a comedy if you are on the Xbox Live team.