A 3D modeler of your infrastructure: do you really want to see what your IT systems look like?

GigaOM has a press release on its Structure 2011 LaunchPad finalists.

GigaOM, a leading business media company, today announced the finalists of Structure 2011 LaunchPad, a high-profile competition that recognizes the most promising cloud computing and infrastructure startups. From publicly submitted entries, 11 early-stage companies were chosen by a panel of expert judges and GigaOM editorial staff based on their product innovation and visionary business models.

One company on the list is Real-Status, maker of a 3D enterprise infrastructure modeling tool.

[Image: HyperGlance 3D visualization]

I'll get a chance to see more when I am at Structure, but one problem I see is that their product, HyperGlance, is at the mercy of whatever information exists in the enterprise's IT systems.

HyperGlance is the world's first real-time IT modelling and visualisation software in 3D.  It allows you to view your entire IT infrastructure on one pane of glass - bridging the gap between physical and virtual worlds. This enables you to make faster and better informed decisions to reduce server degradation, improve capacity planning and utilisation, communicate more effectively to non-IT staff, ensure compliance and improve security.

HyperGlance creates a model of your complete infrastructure in 3D showing the relationships between physical and virtual worlds.  It automatically incorporates topology changes using a physics engine, and it can aggregate data from your existing IT management tools to visualise performance metrics and attributes relating to applications, networks, security, virtual machines and more.
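To make concrete what "at the mercy of whatever information exists" means, here is a minimal sketch of the kind of topology data a tool in this space would have to aggregate from CMDBs and hypervisor APIs. All the class names, fields, and values are my own illustrative assumptions, not Real-Status's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                    # e.g. "physical_host", "vm", "switch"
    metrics: dict = field(default_factory=dict)  # e.g. {"watts": 350} from monitoring tools

@dataclass
class Topology:
    nodes: dict = field(default_factory=dict)    # name -> Node
    edges: list = field(default_factory=list)    # (parent, child) relationships

    def add(self, node, parent=None):
        self.nodes[node.name] = node
        if parent is not None:
            self.edges.append((parent, node.name))

# A visualization is only as good as this inventory: if the CMDB or
# hypervisor APIs are missing hosts, or are stale on VM placement,
# the 3D model inherits those gaps.
topo = Topology()
topo.add(Node("rack1-host01", "physical_host", {"watts": 350}))
topo.add(Node("web-vm-17", "vm", {"cpu_pct": 42}), parent="rack1-host01")
```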

If you haven't designed your systems to work with a 3D modeling system, do you think you are going to end up with pretty pictures like this?

Or will you end up with images that are hard to comprehend?

I've had some good friends at Apple and Adobe who worked on 3D systems, and getting user interface design to work for 3D systems is really, really hard.


Data Center Conversation with FieldView Solutions' David Schirmacher on getting IT and Facilities together, and a galvanic corrosion metaphor

Over the past 9 months I have had many conversations with David Schirmacher, Chief Strategy Officer of FieldView Solutions.

[Photo: David Schirmacher, Chief Strategy Officer]

David Schirmacher is Chief Strategy Officer for FieldView Solutions. He has close to 30 years of experience managing the design and operation of millions of square feet of mission critical facilities, representing billions of dollars in corporate investment.

Mr. Schirmacher was most recently Vice President and Global Head of Engineering and Critical Systems at Goldman Sachs & Company, where he was responsible for the design, operation and overall strategy of the firm's data centers, trading and critical business environments throughout the US, Europe and Asia. Previously, David served as Vice President, Director of Operations at Jones Lang LaSalle, and Compass.

David is on the technical advisory committee of Mission Critical Magazine, VP of 7x24 Exchange International and a member of a task force organized by the EPA and other industry influencers to develop an agreed method for measuring and reporting data center infrastructure efficiency.

Dave and I were chatting at Uptime, and he pointed out in jest that I hadn't written a blog post about him yet.  I have been meaning to write about David and FieldView Solutions, but I have had writer's block, as I know so much that I don't know where to start.  My first conversation with David was a two-hour phone call, and we rarely chat for less than an hour as we bounce around many different topics.

Examples of the areas we discuss are what is going on in the industry, who is doing interesting work and who isn't, and what we thought of a conference.  David and I have run into each other at DatacenterDynamics, AFCOM's Data Center World, the Gartner Data Center Conference, the Uptime Institute Symposium, and The Green Grid over the last 6 months.  We'll see each other next when I make my first trip to 7x24 Exchange in Orlando.


To break the writer's block, I gave David a call to discuss some ideas. One topic we covered is the recommendation, which comes from a variety of people, that the data center electricity bill should be moved out of facilities and into IT, so that IT has an incentive to save electricity.

David and I discussed the fallacy that this recommendation will fix the energy-efficient IT problem.  Getting facilities and IT to work together is brought up often, but it is not easy, and many times it does not last, as the connection and communication disintegrate after the initial discussions.

Then we hit upon the metaphor of galvanic corrosion, where two dissimilar metals (IT and facilities) are in contact and one corrodes as electrons flow between the materials.

An infamous example of galvanic corrosion is the Statue of Liberty's copper skin and iron supports.

[Image: Statue of Liberty]

The galvanic reaction between iron and copper was originally mitigated by insulating copper from the iron framework using an asbestos cloth soaked in shellac. However, the integrity and sealing property of this improvised insulator broke down over the many years of exposure to high levels of humidity normal in a marine environment. The insulating barrier became a sponge that kept the salt water present as a conductive electrolyte, forming a crude electrochemical cell as Galvani and Volta had discovered a century earlier. The formation of expanded material that followed was typical of confined situations found in crevice corrosion.

When two metals are far apart on the galvanic series, your corrosion problem gets worse.  The same idea applies to IT and facilities: the further apart the groups, the higher the risk of unintended consequences (the corrosion).  You can mitigate the risks, but you should be aware of the differences up front.
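For a feel of what "far apart" means in the metal half of the metaphor, here is a quick sketch using textbook standard reduction potentials. These are standard chemistry-handbook numbers, but note that real galvanic-series rankings are measured in seawater, so treat this as a rough illustration, and the function is my own.

```python
# Rough illustration of "distance on the galvanic series" using
# textbook standard reduction potentials (volts vs. the standard
# hydrogen electrode).
potentials = {"zinc": -0.76, "iron": -0.44, "copper": +0.34}

def driving_force(metal_a, metal_b):
    """Potential difference that drives galvanic corrosion; the more
    negative (anodic) metal of the pair is the one that corrodes."""
    return abs(potentials[metal_a] - potentials[metal_b])

print(driving_force("iron", "copper"))  # ~0.78 V: the Statue of Liberty
                                        # pairing; the iron corrodes
print(driving_force("iron", "zinc"))    # ~0.32 V: a closer pair, milder
                                        # reaction, and zinc is sacrificed
```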

Here are a few words of wisdom from David Schirmacher.

Although it is difficult to create a lasting connection between the two groups, in essentially every case you will find that the best practice operations have succeeded at doing it.

If you don’t have the right stakeholders accountable for performance, you run a big risk of only appearing to be proactive.

A lesson from Boeing's aggressive outsourcing strategy: supervision of work increases with outsourcing

Boeing's 787 has been plagued by multiple issues that many attribute to an overaggressive outsourcing strategy driven by zealous executives who saw a path to lower costs.  The Seattle Times reports on Boeing's annual investor conference.

Like Albaugh, McNerney reiterated Boeing will do more work in-house on the new jet than on the Dreamliner and try to protect its expertise in composites technology.

McNerney said Boeing's attitude with the 787 had been "outsource it ... get rid of the cost of supervising it."

"You end up realizing you need more cost to supervise outside factories," he said. "Unfortunately we paid billions upon billions in the learning process."

Boeing has kept wing production in house, and has now decided to bring the horizontal tail in house for the 787-9.

At lunch with Wall Street analysts later, Albaugh said Boeing has decided the horizontal tails on the next version won't be made in Italy, as they are for the initial 787-8 model.

Engineers will perfect the method for producing the horizontal tail at the Development Center beside Boeing Field, and will do early production runs there to mature the process.

How many IT projects do you wish you'd had the support to do in house instead of outsourcing?

4 years of writing on the green data center topic

I went up to Google Trends to search the "green data center" topic.

[Google Trends chart: "green data center"]

The Green Data Center topic has ridden the wave of overall data center coverage by the media.

[Google Trends chart: "data center"]

I started researching the green data center topic in 2006, and then in October 2007 I published an article on green data centers in Microsoft's TechNet Magazine.

Building a Green Datacenter

TechNetArchive

17 Sep 2007 5:05 PM

The upcoming October 2007 issue of TechNet Magazine begins what we hope will be an ongoing dialog on the topic of Green Computing. This is an exciting topic for us, given not only how much there is to cover, but how much potential this topic holds - to improve IT, save a great deal of money, and have a really substantial and lasting impact on the environment.

Dave Ohara kicks off the conversation in his article "Build a Green Datacenter". In it, Dave discusses the concepts that define the topic of Green computing, and also gives some specific, practical advice on how you can get started.

A few months later, another Microsoft friend, Bob Visse, encouraged me to blog on the green data center topic.  I worked with Bob on Windows 2000, where one of my duties was program manager for Windows 2000's power management features.  Back in 1999, I was talking to OEMs about power savings, and I was crazy enough then to try to have conversations with the server OEMs, who thought discussing power management features in servers was silly.

Twelve years later, performance per watt is a top issue for servers, with Intel Atom and ARM servers challenging the established players.
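To see why performance per watt favors the challengers, here is a toy comparison. The server names and numbers are made up for illustration; they are not benchmark results.

```python
# Toy performance-per-watt comparison with made-up numbers.
servers = {
    "big Xeon box":  {"requests_per_sec": 10_000, "watts": 400},
    "Atom/ARM node": {"requests_per_sec": 3_000,  "watts": 60},
}

for name, s in servers.items():
    perf_per_watt = s["requests_per_sec"] / s["watts"]
    print(f"{name}: {perf_per_watt:.0f} requests/sec per watt")

# big Xeon box: 25 requests/sec per watt
# Atom/ARM node: 50 requests/sec per watt
# The low-power part wins on this metric even though it is slower in
# absolute terms, which is why scale-out workloads make Atom and ARM
# servers a credible challenge to the incumbents.
```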

I am in the early stages of researching some other data center ideas.  It will be interesting to see where the data center industry will be in 4 years.

Thanks for continuing to visit this blog.

-Dave Ohara

Urs Hoelzle's keynote at Google European Data Center Summit 2011

James Hamilton has posted his notes on Urs Hoelzle's keynote speech at Google's European Data Center Summit 2011. 

2011 European Data Center Summit

The European Data Center Summit 2011 was held yesterday at SihlCity CinCenter in Zurich. Google Senior VP Urs Hoelzle kicked off the event talking about why data center efficiency was important both economically and socially. He went on to point out that the oft-quoted number that US data centers represent 2% of total energy consumption is usually misunderstood. The actual data point is that 2% of the US energy budget is spent on IT, of which the vast majority is client-side systems. This is unsurprising but a super important clarification. The full breakdown of this data:

- 2% of US power
  - Datacenters: 14%
  - Telecom: 37%
  - Client Device: 50%

What will get little press is this statement by Urs:

Summarizing: Datacenters consume 0.28% of the annual US energy budget. 72% of these centers are small and medium sized centers that tend towards the lower efficiency levels.
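The 0.28% figure is just the arithmetic of the breakdown above: datacenters are 14% of the 2% IT share. A quick check:

```python
# Urs's breakdown: 2% of the US energy budget goes to IT overall,
# and datacenters account for 14% of that IT share.
it_share_of_us_energy = 0.02
datacenter_share_of_it = 0.14

datacenter_share_of_us_energy = it_share_of_us_energy * datacenter_share_of_it
print(f"{datacenter_share_of_us_energy:.2%}")  # -> 0.28%
```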