Happy Holidays 2012

It's getting close to the Holidays, and I'll be taking a break from blogging until the new year.  Even this week has been busy with my daughter's annual Xmas cookie decorating party, which my wife has turned into a well-run production.  Here are some pictures.  Why use Instagram to share them when I can share them here? :-)

Have a Happy Holidays

-Dave Ohara

 

[Photos: the annual Xmas cookie decorating party]

Calling Data Center Hardware Hackers: Open Compute Project event on Jan 16-17

The Open Compute Project is hosting a hardware hack event for the first time.

OCP Hardware Hack!

Posted Wednesday, December 5, 2012 at 16:07

OCP is hosting its first hardware hackathon at the upcoming Open Compute Summit, January 16-17, 2013 in Santa Clara, California. Starting today, you can register for the hack. We are limiting attendance to 100 people. Registering for the hack also registers you for the entire OCP summit, so you can register for both events at once. The summit and the hack are both being held at the Santa Clara Convention Center.

We ask that once you register for the hack, you participate in the entire hack, which will last 6-10 hours over the course of the two-day summit.

Here is an example hack project:

Use low-power sensors to gather temperature information across a data center, using the Zigbee wireless protocol to aggregate the heat data. This has the benefit of not requiring any additional wiring or interfaces.
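As a sketch of what the software side of such a hack might look like, here is a minimal Python example of the aggregation step, assuming each sensor reports a (sensor_id, zone, temp_c) reading that a Zigbee coordinator relays to a central collector. The sensor names, zones, and readings are hypothetical, and the radio layer itself is out of scope.

```python
# Minimal sketch of the aggregation side of the example hack:
# each low-power sensor periodically reports (sensor_id, zone, temp_c),
# e.g. relayed over a Zigbee mesh to a single coordinator node (assumed).
from collections import defaultdict
from statistics import mean

def aggregate_by_zone(readings):
    """Group temperature readings by data center zone and summarize them."""
    zones = defaultdict(list)
    for sensor_id, zone, temp_c in readings:
        zones[zone].append(temp_c)
    return {
        zone: {"min": min(temps), "mean": round(mean(temps), 1), "max": max(temps)}
        for zone, temps in zones.items()
    }

if __name__ == "__main__":
    sample = [
        ("s01", "cold-aisle-1", 21.5),
        ("s02", "cold-aisle-1", 22.0),
        ("s03", "hot-aisle-1", 35.2),
        ("s04", "hot-aisle-1", 37.8),
    ]
    print(aggregate_by_zone(sample))
```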

Here is what you can expect.

Goals, Tools and an Example

Goal: Design a set of “Lego” blocks that can be applied to the scale compute data center space with a focus on improving energy efficiency, operational efficiency and cost reduction.

Design tools:

  • ECAD, electrical and holistic collaboration: Upverter
  • Software collaboration: GitHub
  • Mechanical collaboration: GrabCAD

Skill set: electrical engineer, mechanical engineer, software engineer, designer. (Ideally each team has a combination of these skill sets.)

Starting point: We recommend starting with a really simple circuit that can be modified, or even deleted wholesale, but provides a great base to scaffold onto. It could be a connector, a micro-controller or a power circuit.

Kfir Godrich discusses Data Center Commissioning's role in delivering availability

I've had the pleasure of some great conversations with Kfir Godrich.  Kfir has a guest post on the Compass Data Centers blog that discusses Data Center Commissioning.

Kfir starts with a subject that reminds me of my first summer jobs at HP, working in Quality Engineering on warranty and reliability issues.

The data center commissioning (or Cx) journey starts with understanding the basics of reliability engineering contained in the IEEE Gold Book. First, we need to define the difference between reliability and availability. Availability is the probability that a system will work as required during the period of the mission, while Reliability is the probability that the system will in fact maintain operations throughout the mission. The related terminology that helps us introduce Cx is the data center predicted performance model. This model follows a failure mode typical of electronic equipment, also known as the “bathtub curve” (see Fig. 1).

Bathtub Curve

In Phase 1, also called the Infant Mortality Period, data centers go through a decreasing failure rate, a period that should be kept as short as possible. This can be achieved by performing a full commissioning as described later. It is the author’s humble opinion that the level of commissioning must be proportional to the level of criticality and design Tier (per Uptime Institute) of the data center.

In Phase 2, referred to as the Random Failure Period, the failure rate is constant; the MTBF (Mean Time Between Failures) is calculated during this phase. The desire here is to push that flat curve as low as possible. In Phase 3, the Wear-out Period, components begin to reach the end of their usable life. Replacing components proactively helps delay the ultimate upturn in the graph.
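As a quick aside, here is a rough Python sketch of the standard relationships behind this phase model: during the constant-failure-rate period, reliability over a mission of length t is commonly modeled as R(t) = exp(-t/MTBF), and steady-state availability as MTBF / (MTBF + MTTR). The MTBF and MTTR figures below are hypothetical and are not taken from Kfir's post.

```python
# Rough sketch of the Phase 2 (constant failure rate) relationships.
# Numbers are illustrative only.
import math

def reliability(mission_hours, mtbf_hours):
    """R(t) = exp(-t / MTBF): probability of no failure over the mission."""
    return math.exp(-mission_hours / mtbf_hours)

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    mtbf, mttr = 100_000.0, 4.0   # hypothetical figures for one piece of equipment
    print(f"R(1 year)    = {reliability(8760, mtbf):.4f}")
    print(f"Availability = {availability(mtbf, mttr):.6f}")
```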

This post is the first in a series, so if you are interested in this topic there will be more.

Therefore, data center commissioning is about enabling the business through performance validation and functional testing of integrated platforms. This should typically be performed by an independent agent who is part of the customer's trusted advisory team, and as a core part of the overall project schedule. The cost for a commissioning agent can be in the range of 0.8-2% of the total budget. Since commissioning is essential for government facilities, the US Department of Energy publishes guidelines for commissioning scope and cost. Geographically, commissioning is more popular and comprehensive in North America and parts of Western Europe, while the rest of the world is becoming more familiar with these concepts. Our next blog will go a bit deeper into integrated testing, so stay tuned. Till next time, Kfir

Kfir's new company is here.

7 things that are wrong with many Enterprise IT systems

The Enterprise IT organization is an interesting entity.  The following are some observations I have made; they are interesting problems to try to solve.

  1. The main priority of many people in Enterprise IT is to protect their jobs, due to the crappy way that many companies have treated their IT organizations.  It is a thankless job in many companies. In the past, efforts have been made to outsource the work to other companies and to India.  So the people who are still around have developed a harsh survival instinct to do whatever it takes to protect their jobs.
  2. Too many nice people who try to do the right thing are the victims not the heroes.
  3. 80% or more of enterprise IT is made up of people who are not technical by education. I have been spoiled working in product development at HP, Apple, and Microsoft, where you hire the best technical people to develop products.  These are what I consider technical staff: they really know how things work and are so valuable that people will pay money for them.  Other than Amazon Web Services, which enterprise IT organization has built an IT system so well that people would pay money for it?
  4. So, for the 20% of enterprise IT staff who are technical, can they make the really tough decisions?  Many times no, because the decisions in most enterprise IT organizations are not made by the most technical people; they are made by the people who have the strongest survival instinct.
  5. The Cloud is a threat to the monopoly of enterprise IT.  Until the Cloud, users had to use the enterprise e-mail system, CRM, file servers, web hosting, etc.  Now the business units have choice.
  6. The private cloud's #1 goal in many companies is to shut down the choice of going outside the enterprise IT monopoly.
  7. The private cloud will be much more expensive than the public cloud, because the private cloud's goal is to protect jobs while the public cloud's goal is to reduce costs, which means higher utilization of all resources, including people.  Cloud environments have one admin per 1,000+ servers.  Many enterprises have one admin per 10-20 servers.  Some have moved to 100.  Few have achieved 1,000.  (A quick back-of-the-envelope comparison of these ratios follows this list.)
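Here is a tiny sketch of what those servers-per-admin ratios mean in headcount terms for a hypothetical 10,000-server fleet; the fleet size and the ratio labels are my own illustrative assumptions, not figures from any specific company.

```python
# Back-of-the-envelope comparison of the servers-per-admin ratios above,
# for a hypothetical 10,000-server fleet (figures are illustrative only).
servers = 10_000
for label, servers_per_admin in [
    ("typical enterprise (1:15)", 15),
    ("improved enterprise (1:100)", 100),
    ("cloud-scale operator (1:1000)", 1_000),
]:
    admins = -(-servers // servers_per_admin)   # ceiling division
    print(f"{label:30s} -> {admins:4d} admins for {servers} servers")
```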

Huh, these things sound like they could be in a Dilbert cartoon.  They probably have been.

Here is today's Dilbert cartoon.

[Image: today's Dilbert cartoon]

Analyst Roundtable: ARM in the data center

On Dec 19, 2012 at 10 a.m. PT there will be a webinar discussing the ARM chip in the data center.  The webinar is here.

Power matters: using ARM to reduce data center costs

A consumer using a computer to shop, email, search, or perform any of the other myriad tasks now possible usually measures power consumption by the dollar figure on the monthly electrical bill. But as two recent, highly controversial articles in the New York Times reiterated, U.S. data centers backing up those tasks consume as much as two percent of the nation’s power. Long before the articles appeared, data centers were aware of the problem and had begun employing various strategies to lower cooling costs, eliminate redundancies, and improve power usage effectiveness (PUE).

Another solution is deploying ultra-low power servers that reduce data center power needs – in a sense, reinventing the server. How does the efficiency of this solution stack up against the alternatives? What are some specific use cases? And, what’s the future for ARM-based server solutions? For answers to these and many other questions, join GigaOM Pro and our sponsor Calxeda for “Power matters: using ARM to reduce data center costs,” a free analyst roundtable webinar on Wednesday, December 19, 2012, at 10 a.m. PT.
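As a quick aside on the PUE metric mentioned above: PUE is defined as total facility power divided by the power delivered to IT equipment, so 1.0 is the theoretical ideal. A minimal sketch, with made-up numbers purely for illustration:

```python
# PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal.
# The numbers below are made up purely for illustration.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(f"Legacy facility:    PUE = {pue(1500.0, 1000.0):.2f}")
print(f"Efficient facility: PUE = {pue(1120.0, 1000.0):.2f}")
```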

I'll be on the webinar, along with Barry Evans from Calxeda.

Our panel of experts includes: