How many funny things are done in data centers because risk aversion is a best practice?

Outages are career killers in data centers and IT.  This leads to risk-averse behavior that hardens into a best practice.

Risk aversion

From Wikipedia, the free encyclopedia
 
 

Risk aversion is a concept in psychology, economics, and finance, based on the behavior of humans (especially consumers and investors) while exposed to uncertainty to attempt to reduce that uncertainty.

But risk aversion can lead to an obsession with avoiding things.  A funny example is Ben Stiller in Along Came Polly.

Ben Stiller's risk aversion is resolved when he enters data about his safe ex-wife and the risky Polly.

Reuben is torn between the free-spirited Polly and the safe and familiar Lisa. To solve this issue he enters information about Polly and Lisa into a computer insurance program which measures risk. The computer tells him that, despite his numerous blunders with her, Polly is the best choice for him.

It is too common a best practice to manage risk by avoiding any path that may carry risk.  Risk leads to an outage.  An outage leads to job loss.  So don't do the things that increase risk.

Almost all data center innovators learn to live with risk.  Risk is everywhere.  But risk aversion can still exist in pockets of an organization when one individual finds comfort in steering clear of every risk they identify.  It's too bad you can't enter information about their situation and give them the best choice.

Aren't you glad you haven't been hit with a 40% energy surcharge like Western Texas industries?

The WSJ has an article on how the power transmission system in Western Texas is being strained by increased energy drilling and pumping, which is raising costs for many industries in the area.

While the largely rural region has enough power plants to supply the growing demand for electricity, the high-voltage transmission network hasn't kept pace. Beginning last summer, a shortage of transmission lines in some areas meant that grid operators couldn't automatically send the cheapest power to customers, but had to turn to more expensive power plants elsewhere in the state, where there was enough transmission capacity. Those higher costs were passed on as surcharges to many large customers.

Here are descriptions of some of the pain.

That isn't good news for executives at Tower Extrusions Ltd., which makes aluminum products like stadium seats and storm gutters in Olney, Texas, about 100 miles west of Fort Worth. The company says its power bills climbed 40% last year.

"The congestion charges are putting me at a huge disadvantage, compared to my competitors near Dallas or in other parts of the state," said Mark McClelland, Tower's general manager.

Even oil and gas companies are being hit by the charges. Kinder Morgan Inc., which produces oil in the Permian Basin, said it had to pay as much as $400,000 in congestion costs on a single day in 2012.

Apache Corp., one of the Permian Basin's top oil producers, said its average costs in the area this year are running about 15% higher, largely due to the power-line congestion costs.

Ouch.  Could you imagine if you ran a data center in this area?  There are probably some data center operators being hit by these costs.
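To get a feel for what a surcharge like that could mean for a data center's power bill, here is a back-of-the-envelope sketch. All of the figures (IT load, PUE, base rate) are hypothetical, not from the WSJ article:

```python
# Hypothetical sketch: annual impact of a 40% energy surcharge
# on a mid-sized data center. All numbers are illustrative only.

it_load_kw = 1000          # assumed 1 MW of IT load
pue = 1.5                  # assumed power usage effectiveness
rate_per_kwh = 0.06        # assumed base rate, $/kWh
hours_per_year = 8760

facility_kwh = it_load_kw * pue * hours_per_year
base_bill = facility_kwh * rate_per_kwh
surcharged_bill = base_bill * 1.40

print(f"Base annual bill:   ${base_bill:,.0f}")
print(f"With 40% surcharge: ${surcharged_bill:,.0f}")
print(f"Extra cost:         ${surcharged_bill - base_bill:,.0f}")
```

Under these assumptions the surcharge adds over $300,000 a year, which is the kind of number that makes site selection matter.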

Digital Realty expresses need for Hierarchical Design in DCIM to organize the data

Before David Schirmacher joined Digital Realty Trust, we would regularly have chats that could easily last hours.  I don't get a chance to chat with David as much now that he is Sr VP of Data Center Operations, but we still connect on many ideas.  Digital Realty has released a paper on DCIM, and a couple of things in it reminded me how alike we think.


And David has a hierarchy, which I would have drawn from the bottom up, but David has drawn it from the top down.


Here is more information from the press release.

Digital Realty Addresses Challenges of Establishing Successful DCIM Platform
 
White paper explores the challenges of DCIM platforms and provides a comparative outlook on Digital Realty’s newly launched DCIM solution 
 
San Francisco, CA – July 23, 2013 – Digital Realty Trust, Inc. (NYSE: DLR), a leading global provider of data center solutions, today announced the release of a white paper titled, “Real-Time Monitoring for Data Centers: Comprehensive DCIM Solution Creates Connectivity-Rich Environment.” The paper, authored by David Schirmacher, Senior Vice President of Portfolio Operations for Digital Realty, explores the challenges inherent in establishing a successful data center infrastructure management (DCIM) platform and introduces EnVision, Digital Realty’s recently launched DCIM solution.
Historically, vendors have approached DCIM as a hardware problem by implementing specialized equipment in an attempt to achieve a real-time monitoring and management platform for the interdependent systems across IT and infrastructure. However, the challenge with today’s DCIM platforms isn’t with the hardware, but rather it’s with the data that’s being managed, monitored and analyzed. In short, today’s DCIM solutions need to provide a significantly more comprehensive view of all of the resources within a data center – from the mechanical, electrical and plumbing systems that form the backbone of a facility’s infrastructure to the servers and racks that compose the heart of the IT setup.
The paper explains how a data-driven, connected view of a data center enables data center operators to realize the capacity they need in order to help their firms effectively grow their businesses and to ensure their data centers are yielding optimal results. In particular, with EnVision, Digital Realty’s DCIM solution, the company’s customers gain increased visibility into data center operations, the ability to analyze data in a manner that is digestible and actionable, a user interface and data displays/reports that are tailored to data center operators, and access to historical as well as predictive data.
Interested in sharing some insights from the white paper with your Twitter followers? Check out our list of interesting facts pulled from the paper:
  • #DCIM is an emerging form of #datacenter mngt that bridges gap between traditional facilities systems and #IT systems http://ow.ly/neWSD
  • #DCIM solutions must allow users to view info about the #datacenter in a truly connected sense http://ow.ly/neWSD
  • The issue at the core of the #DCIM puzzle is stranded data, per @drdatacenters http://ow.ly/neWSD
  • The focus shifts from displaying data to managing data when looking at #DCIM from an operator’s perspective http://ow.ly/neWSD
  • “#DCIM is not a hardware problem, it is a data problem” per @drdatacenters http://ow.ly/neWSD
  • A typical #datacenter might have 10,000 data points which can turn into billions of #datapoints a year http://ow.ly/neWSD #DCIM
  • A typical #datacenter, if it is fully instrumented, might have 5,000 or 10,000 #datapoints http://ow.ly/neWSD #DCIM
  • @drdatacenters builds a comprehensive #DCIM solution with real-time monitoring for #datacenters http://ow.ly/neWSD
  • Key to a comprehensive #DCIM solution is recognizing the breadth & diversity of the available information http://ow.ly/neWSD
  • “Real-Time Monitoring for #DataCenters: Comp #DCIM Solution Creates Connectivity-Rich Enviro” http://ow.ly/neWSD via @drdatacenters
 
To view the full paper, click here: http://ow.ly/neWSD
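The paper's claim that 10,000 instrumented data points can turn into billions of readings a year is simple arithmetic. Assuming one reading per point per minute (a hypothetical polling interval, not one stated in the paper):

```python
# Sketch of the paper's "billions of data points a year" claim,
# assuming a one-minute polling interval (an illustrative assumption).

points = 10_000
minutes_per_year = 60 * 24 * 365   # 525,600

readings_per_year = points * minutes_per_year
print(readings_per_year)  # 5256000000 — over five billion readings a year
```

Even at slower polling intervals the volume is enormous, which is why the paper frames DCIM as a data problem rather than a hardware problem.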

Ah, figured out how DCIM asset deployment should work - do you see the Information Architecture?

I am having fun expressing some of my views of DCIM.  One area I don't think I've heard anyone discuss is the Information Architecture of their DCIM solution.

What is Information Architecture?

Information architecture is a specialized skill set that interprets information and expresses distinctions between signs and systems of signs. More concretely, it involves the categorization of information into a coherent structure, preferably one that the intended audience can understand quickly, if not inherently, and then easily retrieve the information for which they are searching[2][page needed]. The organization structure is usually hierarchical, but can have other structures, such as concentric or even chaotic[2][page needed]. Typically this is required in activities such as library systems, content management systems, web development, user interactions, database development, computer programming, technical writing, enterprise architecture, and critical system software design. Information architecture originates, to some degree, in the library sciences. Many schools with library and information science departments teach information architecture.[6]

In the context of information systems design, information architecture refers to the analysis and design of the data stored by information systems, concentrating on entities, their attributes, and their interrelationships. It refers to the modeling of data for an individual database and to the corporate data models that an enterprise uses to coordinate the definition of data in several (perhaps scores or hundreds) distinct databases. The "canonical data model" is applied to integration technologies as a definition for specific data passed between the systems of an enterprise. At a higher level of abstraction, it may also refer to the definition of data stores.
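The hierarchical organization described above maps naturally onto a data center's assets. Here is a minimal sketch of what such a hierarchy could look like in a DCIM data model - site, room, rack, device - where leaf-level readings roll up the tree. All names and values are illustrative, not from any actual DCIM product:

```python
# Minimal sketch of a hierarchical DCIM information architecture:
# assets organized site -> room -> rack -> server, so a data point
# can be rolled up or drilled into. Names and figures are made up.

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                      # e.g. "site", "room", "rack", "server"
    power_kw: float = 0.0          # leaf-level measured draw
    children: list = field(default_factory=list)

    def total_power(self):
        """Roll leaf readings up the hierarchy."""
        return self.power_kw + sum(c.total_power() for c in self.children)

site = Asset("DAL-1", "site", children=[
    Asset("Room A", "room", children=[
        Asset("Rack 12", "rack", children=[
            Asset("srv-01", "server", power_kw=0.4),
            Asset("srv-02", "server", power_kw=0.5),
        ]),
    ]),
])

print(site.total_power())  # 0.9 kW for the whole site
```

The point of the hierarchy is exactly this kind of roll-up: the same data answers questions at the server, rack, room, or site level without restructuring anything.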

Why do you need an information architecture? For the same reason you need architects to design a building: to make sure the end user's purpose is met.  But there is more to good architecture than whether it works.  A good architecture has three qualities.

Three Principles of Good Architecture

The Roman architect Vitruvius in his treatise on architecture, De Architectura, asserted that there were three principles of good architecture:

  • Firmatis (Durability) - It should stand up robustly and remain in good condition.
  • Utilitas (Utility) - It should be useful and function well for the people using it.
  • Venustatis (Beauty) - It should delight people and raise their spirits.

Most would blow off the third principle.  How many users of DCIM delight in the time they spend in the system and feel their spirits raised?

If you don't think beauty is important, then you don't get the way Apple products are designed.  Steve Jobs is an example of a person who believed in all three principles of good architecture.  How many executives buying DCIM want or believe in the three principles?  Not many.  On the other hand, beauty is the experience people get from almost all luxury brands - BMW, Porsche, etc.  So there is value in beauty.

A great DCIM information architecture will be beautiful one of these days.

The peak of dual processor servers is coming, Intel sets new path towards the mainframe

24 years ago, in 1989, Compaq released the first dual-processor, RAID-equipped Intel 486 server.

At its initial release in November 1989, the SystemPro supported up to two 33 MHz 386 processors, but early in 1990 33 MHz 486 processors became an option (the processors were housed on proprietary daughterboards).

The SystemPro, along with the simultaneously released Compaq Deskpro 486, was one of the first two commercially available computer systems containing the new EISA bus. The SystemPro was also one of the first PC-style systems specifically designed as a network server, and as such was built from the ground up to take full advantage of the EISA bus. It included such features as multiprocessing (the original systems were asymmetric-only), hardware RAID, and bus-mastering network cards. All models of SystemPro used a full-height tower configuration, with eight internal hard drive bays.

Over the past 24 years the data center has seen a steady growth of dual processor servers.

Yesterday Intel announced a re-architecting of the datacenter.


And the future is not dual-processor servers delivering compute, I/O, and memory.  The pooled compute, pooled memory, and pooled I/O architecture looks like a mainframe.
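To make the pooled idea concrete, here is a toy sketch: instead of fixed dual-processor boxes, compute, memory, and I/O sit in rack-level pools and get composed into logical machines on demand. This is purely illustrative - the pool sizes and the composition mechanism are assumptions, not Intel's design:

```python
# Toy sketch of rack-scale resource pooling: logical servers are
# carved out of shared pools of CPU, memory, and I/O on demand.
# Pool sizes and the API are illustrative assumptions only.

pools = {"cpu_cores": 512, "memory_gb": 8192, "io_gbps": 400}

def compose_node(pools, cpu_cores, memory_gb, io_gbps):
    """Carve a logical server out of the pools, if capacity allows."""
    req = {"cpu_cores": cpu_cores, "memory_gb": memory_gb, "io_gbps": io_gbps}
    if any(pools[k] < v for k, v in req.items()):
        return None  # not enough pooled capacity left
    for k, v in req.items():
        pools[k] -= v
    return req

node = compose_node(pools, cpu_cores=32, memory_gb=256, io_gbps=10)
print(node)                # the composed logical server
print(pools["cpu_cores"])  # 480 cores remain in the pool
```

The contrast with a fixed dual-processor server is that compute, memory, and I/O can be sized independently per workload, which is what made mainframes good at mixed workloads.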


Most media are focusing on the new processors announced.  That is the old way of thinking.


Intel makes the point of going from proprietary to standards with supercomputers.


And a diversity of workloads.  High CPU, memory, and I/O demands were once the realm of supercomputers and mainframes.


Intel is also driving innovation at the low end, but those are not the systems that run the high-resource workloads.

Traditional servers are also evolving. To meet the diverse needs of datacenter operators who deploy everything from compute intensive database applications to consumer facing Web services that benefit from smaller, more energy-efficient processing, Intel outlined its plan to optimize workloads, including customized CPU and SoC configurations.