Equinix CTO's 10-year perspective: data center changes in highly connected Internet services

I had the pleasure of interviewing Equinix's CTO, David Pickut, on the occasion of Equinix opening its 50th data center.

Equinix Opens 50th Premier IBX Data Center

Equinix’s 50th IBX Data Center Opens in London; Will Help Company Service Global Demand From Financial Services Firms and Cloud Service Providers

FOSTER CITY, CA and LONDON, UNITED KINGDOM – March 30, 2010 – Equinix, Inc. (NASDAQ: EQIX), a provider of global data center services, today announced a major company milestone: the opening of its 50th premier International Business Exchange™ (IBX®) data center. This announcement demonstrates Equinix’s ability to help its customers fully leverage all of the benefits of an interconnected world.

If you are not familiar with David here is some background.

David Pickut

Chief Technology Officer

Dave Pickut joined Equinix in 2004 and served in several roles before settling into his current role as Chief Technology Officer. Prior to joining Equinix, Mr. Pickut held Vice President positions with a Tier 1 ISP and an IT products/services company, with responsibilities for data center operations and business management. His engineering experience encompasses both consulting services and product design related to mission-critical data center power, cooling, security, controls, and fire protection systems.

Mr. Pickut received a B.S. in Electrical Engineering from Ohio State University and is a registered professional engineer and a member of the IEEE and NFPA.

The perspective I was looking for is David's view of the past 10 years and what the future looks like. Over the past 10 years, Dave has seen three big changes in data centers.

  1. Energy density in racks has gone up.
  2. Energy efficiency awareness has increased.
  3. The mindset has shifted from stand-alone data centers to highly connected data centers.

This is best illustrated by drawings David provided.  Here is what data centers looked like 10 years ago.

[image: data center design, 10 years ago]

And this is what data center design looks like in 2010.

[image: data center design in 2010]

Note in the upper left of each of these slides the external forces affecting data centers. This is the proof I was looking for that Equinix is on the right path to green (low-carbon) data centers.

The mass media will cover Google, Facebook, Microsoft, and Apple data centers, but here is a simple way to understand the future of data centers. David and I chatted about many more things regarding the next 10 years, and he said it is easy to build energy-efficient, cost-effective data centers. The hard part is accounting for the accelerating rate of external factors that now affect data center design. Those who put their heads in the sand and geek out are setting themselves up for unexpected reactions, like Greenpeace's focus on Facebook's coal-powered data center.

Facebook Responds on Coal Power in Data Center

February 17th, 2010 : Rich Miller

An architectural rendering of the new Facebook data center planned for Prineville, Oregon.

Facebook has responded to growing criticism of its power choices for its new data center in Prineville, Oregon. This is one of the first cases in which a data center’s energy sourcing has attracted this kind of public attention, but it won’t be the last. 

I am looking forward to more posts on what Equinix is doing, their willingness to share ideas, and what the future of data centers looks like. In 2010, the external forces are SaaS, cloud providers, Ethernet exchanges, and mobile carriers. Can you imagine what 2020 will look like?

Read more

Microsoft writes humorous blog post to educate SW developers about the power costs to run their code

Microsoft has a blog post, "How Much Does Your Code Cost?", that injects humor into a typically dry topic.

The big difference is that with cloud computing, you’re renting computing power in a data center somewhere. As far as you’re concerned, it could be on Saturn. Except that the latency figures might be a bit excessive. If you’ve accidentally opened one of those magazines your network administrator takes with him to the bathroom, you might know that these data centers contain racks and racks of servers, all with lots of twinkling lights. If you’ve ever been to a data center, you’ll know that they can be very hot near the server fans, much colder around the cooling vents, and noisy everywhere. All this activity results from removing the heat that the servers produce. But that heat doesn’t get there all by itself – the servers create it from the electricity they use. What’s more, it requires even more electricity to remove that heat.

Consider sending this post on to those involved in SW decisions to get them thinking about the impact of their code.

When you’re up against deadlines to turn in a software project, you probably are focused on ensuring that you meet the functionality requirements set out in the design specification. If you have enough time, you might consider trying to maximize performance. You might also try to document your code thoroughly so that anyone taking over the project doesn’t need to run Windows for Telepaths to work out what your subroutines actually do. But there is probably one area that you don’t consider: the cost of your code.

You mean what it costs to write the code, right? No.

Er, how about what it costs to compile? You’re getting warmer...

What it costs to support? No, colder again.

OK, you win. What costs do you mean?

I mean what it costs to run your code. In the bad old days, when clouds were just white fluffy things in the sky and all applications ran on real hardware in a server room somewhere or on users’ PCs, then cost simply wasn’t a factor. Sure, you might generate more costs if your application needed beefier hardware to run, but that came out of the cable-pluggers’ capital budget, and we all know that computer hardware needs changing every other year, so the bean-counters didn’t twig. A survey by Avanade showed that 50% of IT departments don’t even budget for the cost of electricity to run their IT systems. For more information, see this Avanade News Release, at http://www.avanade.com/_uploaded/pdf/pressrelease/globalenergyfindingsrelease541750.pdf

Life would be so much easier in the data center if SW developers and others thought about the direct relationship between data center infrastructure costs and the code they write and how it is architected.

The good thing is that cloud computing is helping to get SW developers to think about the cost to run their code.

If you deploy applications into the cloud, it is highly likely that your service provider will be charging you based on the energy that you use. Although you don’t see electricity itemized as kw/hr, you are billed for CPU, RAM, storage and network resources, all of which consume electricity. The more powerful processor with more memory costs more, not just because the cost of the components, but because they consume more electricity. In many ways, this is an excellent business model, as you don’t have to buy the hardware, maintain it, depreciate it, and finally, replace it. You simply pay for what you use. Or putting it another way, you pay for the resources you use. And this is the point at which you need to ask yourself: How much does my code cost? When power usage directly affects the cost of running your applications, a power-efficient program is more likely to be a profitable one.
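To make the quoted billing model concrete, here is a minimal sketch of how a provider might derive a monthly charge from metered resources. All rates, resource names, and usage figures below are hypothetical, purely for illustration; real providers publish their own pricing tables.

```python
# Hypothetical per-unit monthly rates; real cloud pricing differs.
RATES = {
    "cpu_hours": 0.05,     # $ per CPU-hour
    "ram_gb_hours": 0.01,  # $ per GB-hour of RAM
    "storage_gb": 0.10,    # $ per GB-month of storage
    "network_gb": 0.08,    # $ per GB transferred
}

def monthly_bill(usage):
    """Sum each metered resource's usage times its rate.

    Each resource (CPU, RAM, storage, network) ultimately consumes
    electricity, so code that uses fewer resources costs less to run.
    """
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# The same app, before and after a power-aware tuning pass.
wasteful = {"cpu_hours": 720, "ram_gb_hours": 2880, "storage_gb": 50, "network_gb": 100}
tuned    = {"cpu_hours": 180, "ram_gb_hours": 720,  "storage_gb": 50, "network_gb": 100}
```

The point of the sketch is simply that the bill is a linear function of resource consumption, so every CPU-hour or gigabyte the code avoids shows up directly as savings.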

The blog post references Visual Studio and Intel resources to help SW developers.

It is possible that future versions of Visual Studio will include options for checking your code for power usage. Until that time, following these recommendations should help minimize the running costs of your applications within a cloud-based environment.

  1. Reduce or eliminate accesses to the hard disk. Use buffering or batch up I/O requests.
  2. Do not use timers and polling to check for process completion. Each time the application polls, it wakes up the processor. Use event triggering to notify completion of a process instead.
  3. Make intelligent use of multiple threads to reduce computation times, but do not generate threads that the application cannot use effectively.
  4. With multiple threads, ensure the threads are balanced and one is not taking all the resources.
  5. Monitor carefully for memory leaks and free up unused memory.
  6. Use additional tools to identify and profile power usage.
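The second recommendation (events instead of polling) is the one developers most often get wrong. The sketch below contrasts the two approaches in Python; the wakeup count is illustrative, not a power measurement, and all function names are made up for this example.

```python
import threading
import time

def wait_by_polling(state, interval=0.005):
    """Anti-pattern: wake the processor repeatedly to check a flag."""
    wakeups = 0
    while not state["done"]:
        wakeups += 1
        time.sleep(interval)  # every iteration is a needless wakeup
    return wakeups

def wait_by_event(event):
    """Preferred: block until signaled. The thread sleeps until the OS
    wakes it exactly once, letting the CPU idle in a low-power state."""
    event.wait()

def start_worker(on_done, duration=0.05):
    """Simulate a background task that signals its own completion."""
    def worker():
        time.sleep(duration)
        on_done()
    t = threading.Thread(target=worker)
    t.start()
    return t
```

With polling, the waiting thread wakes roughly `duration / interval` times for nothing; with an event, it wakes once, when there is actually work to do.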

For more ideas on how to reduce power usage, check out the following resources and tools:

Energy-Efficient Software Checklist, at http://software.intel.com/en-us/articles/energy-efficient-software-checklist/.

Creating Energy-Efficient Software, at http://software.intel.com/en-us/articles/creating-energy-efficient-software-part-1/.

Intel PowerInformer, at http://software.intel.com/en-us/articles/intel-powerinformer/.

Application Energy Toolkit, at http://software.intel.com/en-us/articles/application-energy-toolkit/.

Read more

ASHRAE Standard 90.1 Data Center Documents

After writing about Google's post on ASHRAE Standard 90.1 and its economizer requirement, and after talking to Google's Chris Malone, I decided to find the documents.

Here is the proposed Addendum bu.

6.5.1 Economizers. Each cooling system that has a fan shall include either an air or water economizer meeting the requirements of Sections 6.5.1.1 through 6.5.1.4.

There are items a through k in Section 6.5.1 alone.

j. Systems primarily serving computer rooms where
1) the total design cooling load of all computer rooms in the building is less than 3,000,000 Btu/h (880 kW) and the building in which they are located is not served by a centralized chilled water plant, or
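As a sanity check on the threshold quoted above: 1 Btu/h is about 0.293 W, so 3,000,000 Btu/h works out to roughly 880 kW. The conversion is simple arithmetic:

```python
BTU_PER_HOUR_TO_WATTS = 0.29307107  # standard Btu/h-to-watt conversion factor

def btu_per_hour_to_kw(btu_h):
    """Convert a cooling load in Btu/h to kilowatts."""
    return btu_h * BTU_PER_HOUR_TO_WATTS / 1000.0
```

So the exemption applies only to fairly small computer rooms by wholesale data center standards.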

And Addendum cy

What is hard to figure out is why ASHRAE is making economizers a requirement for cooling systems instead of taking a performance-based approach; that said, if you meet the efficiency improvement in Table 6.3.2, you are allowed to eliminate economizers.

[image: Table 6.3.2 efficiency improvements]

Does addendum cy supersede addendum bu? Addendum cy doesn't have the above section j.

DataCenterKnowledge references addendum bu.

ASHRAE, for its part, says it welcomes the feedback on the proposed changes. “ASHRAE is committed to excellence in the consensus standard development process and encourages anyone with comments regarding the proposed addendum regarding data centers (addendum bu) to participate in the public review process,” it said.

Are you confused?  I am.

Read more

Google, Microsoft, Amazon, Nokia, Digital Realty Trust, Dupont Fabros vs. ASHRAE: Standard 90.1's economizer requirement limits innovation - comment to be heard

Google's Public Policy Blog has a post co-signed by some of the most innovative data center operators.

Chris Crosby, Senior Vice President, Digital Realty Trust
Hossein Fateh, President and Chief Executive Officer, Dupont Fabros Technology
James Hamilton, Vice President and Distinguished Engineer, Amazon
Urs Hoelzle, Senior Vice President, Operations and Google Fellow, Google
Mike Manos, Vice President, Service Operations, Nokia
Kevin Timmons, General Manager, Datacenter Services, Microsoft

This group, and probably many others, is appealing to ASHRAE to change the economizer requirement.

Unfortunately, the proposed ASHRAE standard is far too prescriptive. Instead of setting a required level of efficiency for the cooling system as a whole, the standard dictates which types of cooling methods must be used. For example, the standard requires data centers to use economizers — systems that use ambient air for cooling. In many cases, economizers are a great way to cool a data center (in fact, many of our companies' data centers use them extensively), but simply requiring their use doesn’t guarantee an efficient system, and they may not be the best choice. Future cooling methods may achieve the same or better results without the use of economizers altogether. An efficiency standard should not prohibit such innovation.

I know many of the people above, and thanks to a friend who forwarded me the link to Google's blog post, I speculated on what drove the economizer requirement:

  1. Without talking to anyone, one assumption is that this group, who are active in ASHRAE, brought up the energy efficiency issue early on, and ASHRAE stakeholders, most likely the vendors who make economizers, saw an opportunity to make specific equipment a requirement for energy-efficient data centers.  I could be wrong, but it would explain why an organization that sets standards would choose to specify equipment instead of performance.
  2. In many established data center organizations, like financials, economizers are (or at least were, a few years back) unacceptable in data centers.  So, is the move to mandate economizers a reaction to those who refused to use them for energy-efficient cooling?
  3. The ASHRAE consulting community sees a need for its services in meeting ASHRAE's economizer requirement.  For example, if a given area has X hours a year available for running economizers, does the economizer need to run for a specific percentage of them?  Hire an ASHRAE consultant to interpret the standard.  I sure can't.

The data center group above proposes the following as a better update to the ASHRAE standard.

Thus, we believe that an overall data center-level cooling system efficiency standard needs to replace the proposed prescriptive approach to allow data center innovation to continue. The standard should set an aggressive target for the maximum amount of energy used by a data center for overhead functions like cooling. In fact, a similar approach is already being adopted in the industry. In a recent statement, data center industry leaders agreed that Power Usage Effectiveness (PUE) is the preferred metric for measuring data center efficiency. And the EPA Energy Star program already uses this method for data centers. As leaders in the data center industry, we are committed to aggressive energy efficiency improvements, but we need standards that let us continue to innovate while meeting (and, hopefully, exceeding) a baseline efficiency requirement set by the ASHRAE standard.
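PUE, the metric the letter above endorses, is a facility-level ratio: total facility energy divided by the energy delivered to IT equipment. A quick sketch, with made-up numbers purely for illustration:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt reaches IT gear);
    typical enterprise facilities of this era ran around 2.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW total to power 1,000 kW of IT load has a
# PUE of 1.5: half a watt of overhead (cooling, power distribution
# losses) for every watt of useful IT work.
```

This is exactly why the signatories prefer it as a standard: PUE measures the outcome (total overhead) without caring whether you got there with economizers, seawater cooling, or something not yet invented.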

It doesn't make any sense that all data centers built to ASHRAE's standard have to use economizers. If you build a waterfront data center and could use the body of water as a heat sink for your cooling, ASHRAE apparently wouldn't allow it. Or would they?

The public comment period is open until April 19. If you disagree with ASHRAE's economizer requirement, comment on this blog or on Google's blog post.

I was able to talk to Google's Chris Malone on this topic after I wrote the above. Google's main concern is that if you are trying to be innovative in energy efficiency, the last thing you want is a barrier saying you have to use a particular technology.

In other words, the standard should set the required efficiency without prescribing the specific technologies to accomplish that goal. That’s how many efficiency standards work; for example, fuel efficiency standards for cars specify how much gas a car can consume per mile of driving but not what engine to use.

Imagine if MPG targets had to be met with a mandated hybrid, diesel, or turbocharged engine. It is obvious that the most innovative MPG numbers are going to come from those who have the freedom to use any technology.

You should soon see other data center bloggers write on this issue. If you think the requirement is wrong, comment on the Google blog post or one of the others.

Read more

50% of IT budgets treat electricity as a free resource, Avanade survey discovers

Avanade has a news release on a survey revealing the disconnect between IT and electricity use.

GLOBAL STUDY: MORE THAN HALF OF COMPANIES FAIL TO ACCOUNT FOR ENERGY COSTS IN IT BUDGETS
Executives and IT decision-makers cite energy as a top cost in IT operations; Survey reveals disconnect in budgeting
SEATTLE – March 31, 2010 – According to a recent survey commissioned by Avanade, a business technology services provider, there is a clear gap in energy policies within IT departments. Companies recognize energy as a top cost, but ultimately, more than half of respondents fail to account for energy costs when developing IT budgets.

[image: Avanade survey chart]

Also, Avanade has a press release on customers' interest in Microsoft cloud computing.

In 2009 Avanade engaged Kelton Research to conduct two global surveys on cloud computing – one in February 2009 and the other in September 2009. Between the first survey and the second, there was a 320 percent increase in executives and IT decision-makers reporting they are testing or planning to implement cloud computing technologies.

Makes me wonder how many enterprises are forgetting to count the electricity bill as a cost savings when evaluating cloud computing.

Read more