HP Butterfly Flexible Data Center, Part 2 - 20 year NPV 37% lower than traditional

I just posted about HP's Butterfly Flexible Data Center.  Now that HP has officially announced the solution, there is more on the HP Press Room.

Economical, Efficient, Environmental is a theme for HP's video presentation.

image

 

Here are the numbers HP uses to demonstrate 50% lower CAPEX.

image

And lower OPEX.

image

And HP discusses yearly water usage. Yippee!!!

image 

Typically, data centers use 1.90 liters of water
per kWh of total electricity. A one-megawatt data
center with a PUE of 1.50 running at full load for
one year is expected to consume 13 million kWh
and will consume 6.5 million U.S. gallons of water
annually. FlexDC uses no water in some climates and
dramatically reduces the consumption of water in
others. Actual amounts can vary depending on system
selection and local weather patterns.
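The quoted figures are easy to sanity-check. Here is my own back-of-the-envelope arithmetic in Python (not HP's model), using the 1.90 L/kWh figure and the 1 MW, PUE 1.50, full-load assumptions from the quote:

```python
# Back-of-the-envelope check of the water-usage figures quoted above.
# Assumptions from the quote: 1 MW of IT load, PUE 1.50, full load all year.
WATER_L_PER_KWH = 1.90        # liters of water per kWh of total electricity
LITERS_PER_US_GALLON = 3.785

it_load_kw = 1000
pue = 1.50
hours_per_year = 8760

total_kwh = it_load_kw * pue * hours_per_year         # ~13.1 million kWh
water_liters = total_kwh * WATER_L_PER_KWH
water_gallons = water_liters / LITERS_PER_US_GALLON   # ~6.6 million US gallons

print(f"{total_kwh / 1e6:.1f} million kWh, {water_gallons / 1e6:.1f} million gallons")
```

The result lands right on HP's "13 million kWh" and close to the 6.5 million gallons quoted.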

HP has a white paper that is a must-read for anyone designing a data center.

Introduction
If we briefly go back to the late 1960s and the advent
of the transistor, efficiencies and cycles of innovation in
the world of electronics have increased according to
Moore’s law. However, data center facilities, which
are an offshoot of this explosion in technology, have
not kept pace with this legacy. With the magnitude of
capital required and costs involved with the physical
day-to-day operations of data centers, this existing
paradigm could impede the growth of data center
expansions, unless a new group of innovative solutions
is introduced in the marketplace.


With sourcing capital becoming more difficult to
secure, linked with potential reductions in revenue
streams, an environment focused on cost reduction
has been generated. The pressure to reduce capital
expenditure (CAPEX) is one of the most critical issues
faced by data center developers today. This is helping
to finally drive innovation for data centers.


The key contributors which can reduce CAPEX
and operational expenditure (OPEX) are typically
modularity, scalability, flexibility, industrialization,
cloud computing, containerization of mechanical and
electrical solutions, climate control, expanding criteria
for IT space, and supply chain management. All these
factors come into play when planning a cost-effective
approach to data center deployment. Every company
that develops and operates data centers is attempting
to embrace these features. However, businesses
requiring new facilities usually do not explore all the
strategies available. Generally, this is either due to
lack of exposure to their availability or a perceived
risk associated with changes to their existing
paradigm. Emerging trends such as fabric computing
further exacerbate the silo approach to strategy and
design, where “what we know” is the best direction.

The four cooling system alternatives are:

This adaptation of an
industrial cooling approach includes the following
cooling technologies: air-to-air heat exchangers with
direct expansion (Dx) refrigeration systems; indirect
evaporation air-to-air heat exchangers with Dx assist;
and direct evaporation and heat transfer wheel with
Dx assist.

Reducing fan power.  Fan power is a hidden inefficiency in the data center whether in the mechanical systems or IT equipment.  HP discusses how it reduces fan power.

To obtain the maximum use of the environment, supply
air temperature set points need to be set at the highest
temperature possible and still remain within the
warranty requirement range of the IT equipment. The
next critical component is to control the temperature
difference between the supply and return air streams
to a minimum range of 25°F. This reduces the amount
of air needed to cool the data center, thus reducing
fan energy. The configuration of the data center in
general must follow certain criteria in order to receive
greater benefits available through the use of this
concept, as follows:
• Server racks are configured in a hot aisle
containment (HAC) configuration.
• There is no raised floor air distribution.
• The air handlers are distributed across a common
header on the exterior of the building for even air
distribution.
• Supply air diffusers are located in the exterior wall,
connected to the distribution duct. These diffusers
line up with the cold aisle rows.
• The room becomes a flooded cold aisle.
• The hot aisle is ducted to a plenum, normally
created through the use of a drop ceiling. The hot
air shall be returned via the drop ceiling plenum
back to the air handlers.
• Server racks are thoroughly sealed to reduce the
recirculation of waste heat back into the inlets of
nearby servers.
• Server layout is such that the rows of racks do not
exceed 18 feet in length.
The control for the air handlers shall maintain
maximum temperature difference between the supply
and return air distribution streams. The supply air
temperature is controlled to a determined set point
while the air amount is altered to maintain the desired
temperature difference by controlling the recirculation
rate in the servers.
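To see why that 25°F supply/return temperature difference matters for fan energy, here is an illustrative sketch of my own (the 800 kW module size and 15°F comparison point are my assumptions, not from the white paper), using the standard sensible-heat airflow equation and the fan affinity laws:

```python
# Why a larger supply/return delta-T cuts fan energy.
# Sensible heat for standard air: Q [BTU/hr] = 1.08 * CFM * delta_T [F]
# Fan affinity law: fan power scales with the cube of airflow.
BTU_PER_KW = 3412

def required_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to remove load_kw of heat at a given delta-T."""
    return load_kw * BTU_PER_KW / (1.08 * delta_t_f)

load_kw = 800                        # hypothetical 800 kW module
cfm_15 = required_cfm(load_kw, 15)   # a more typical legacy delta-T
cfm_25 = required_cfm(load_kw, 25)   # the delta-T targeted above

print(f"airflow ratio:   {cfm_25 / cfm_15:.2f}")        # 0.60 -> 40% less air
print(f"fan power ratio: {(cfm_25 / cfm_15) ** 3:.2f}")  # ~0.22 per affinity laws
```

Real-world savings are smaller than the idealized cube law suggests, since duct static pressure and minimum-flow constraints intervene, but the direction is clear: more delta-T, less air, much less fan power.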

Electrical distribution techniques are listed as well.

Traditional data centers have electrical distribution
systems based on double conversion UPS with battery
systems and standby generators. There are several
UPS technologies offered within FlexDC, which
expands the traditional options:
• Rotary UPS—94% to 95% energy efficient
• Flywheel UPS—95% energy efficient
• Delta Conversion UPS—97% energy efficient
• Double Conversion UPS—94.5% to 97% energy
efficient
• Offline UPS—Low-voltage version for the 800 kW
blocks, about 98% energy efficient
FlexDC not only specifies more efficient transformers
as mandated by energy standards, it also uses best
practices for energy efficiency. FlexDC receives power
at medium voltage and transforms it directly to a
server voltage of 415 V/240 V. This reduces losses
through the power distribution unit (PDU) transformer
and requires less electrical distribution equipment,
thus, saving energy as well as saving on construction
costs. An additional benefit is a higher degree of
availability because of fewer components between the
utility and the server.
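The efficiency spread in that UPS list translates into real energy. Here is a rough comparison of annual UPS losses at the FlexDC building's 3.2 MW critical load (my own arithmetic, assuming continuous full load and taking the midpoint where the list gives a range):

```python
# Rough annual UPS losses at 3.2 MW critical load, using the
# efficiencies listed above (midpoints where a range is given).
HOURS_PER_YEAR = 8760
load_kw = 3200

ups_efficiency = {
    "Rotary": 0.945,
    "Flywheel": 0.95,
    "Delta Conversion": 0.97,
    "Double Conversion": 0.9575,
    "Offline": 0.98,
}

for name, eff in sorted(ups_efficiency.items(), key=lambda kv: kv[1]):
    # Input power = load / efficiency, so the loss is load * (1/eff - 1).
    loss_kwh = load_kw * (1 / eff - 1) * HOURS_PER_YEAR
    print(f"{name:17s} ~{loss_kwh / 1e6:.2f} million kWh/year lost")
```

The gap between the least and most efficient options works out to roughly a million kWh per year at this load, which is why the UPS choice shows up in the OPEX comparison.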

And HP takes a modeling approach.

HP has developed a state-of-the-art energy evaluation
program, which includes certified software programs
and is staffed with trained engineers to perform a
comprehensive review of the preliminary system
selections made by the customer. This program
provides valuable insight into the potential performance
of the systems and is a valuable tool in the final system
selection process. The following illustrations are typical
outputs for the example site located in Charlotte,
North Carolina. This location was chosen due to its very
reliable utility infrastructure and its ability to attract
mission-critical businesses. The illustrations
compare a state-of-the-art data center designed using
current approaches and HP FlexDC for the given
location.

Figure 4: Shows state-of-the-art data center annual electricity consumption

Comparing two different scenarios.

Scenario A: Base case state-of-the-art
brick-and-mortar data center.
A state-of-the-art legacy data center’s shell is typically
built with concrete reinforced walls. All of the cooling
and electrical systems are located in the same shell.
Traditional legacy data center cooling systems entail
the use of large central chiller plants and vast piping
networks and pumps to deliver cooling to air handlers
located in the IT spaces. Electrical distribution systems
typically are dual-ended static UPS systems with good
reliability but low efficiencies due to part-loading
conditions. PUEs for traditional data centers with tier
ratings of III and above are between 1.5 and 1.9.


Scenario B: HP FlexDC
The reliability of the system configuration is equivalent
to an Uptime Institute Tier III, distributed redundant.
The total critical power available to the facility is
3.2 MW. The building is metal, using materials
standard within the metal buildings industry. The
electrical distribution system is a distributed redundant
scheme based on a flywheel UPS system located in
prefabricated self-contained housings. The standby
generators are located on the exterior of the facility
in prefabricated self-contained housing with belly tank
fuel storage.
The mechanical cooling systems are prefabricated
self-contained air handlers with air-to-air heat
exchangers using Dx refrigerant cooling to assist
during periods of the year when the local environment is
not capable of providing the total cooling for the data
center IT space.
The IT white space is a non-raised floor environment.
The IT equipment racks are arranged in a hot aisle
containment configuration. The hot return air is
directed into the drop ceiling above and returned to
the air handlers.
The following life-cycle cost analysis matrix quantifies
the CAPEX and OPEX costs and the resultant PV
dollars for the base case and the alternative scenario.
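For readers who want to reproduce that kind of matrix, the present-value mechanics are simple. Here is a generic sketch (all dollar figures are hypothetical placeholders of mine, not HP's actual CAPEX/OPEX inputs, which are in the white paper):

```python
# Generic life-cycle NPV-of-cost comparison, as in the matrix above.
# All dollar figures below are hypothetical placeholders, not HP's numbers.
def npv_of_costs(capex: float, annual_opex: float,
                 years: int = 20, discount_rate: float = 0.08) -> float:
    """CAPEX up front plus the present value of each year's OPEX."""
    pv_opex = sum(annual_opex / (1 + discount_rate) ** t
                  for t in range(1, years + 1))
    return capex + pv_opex

base = npv_of_costs(capex=60e6, annual_opex=6e6)    # Scenario A placeholder
flex = npv_of_costs(capex=30e6, annual_opex=4.5e6)  # Scenario B placeholder
print(f"NPV savings: {(base - flex) / base:.0%}")
```

Whatever the inputs, the structure is the same: a lower CAPEX and a lower annual OPEX both compound into the 20-year NPV figure the headline cites.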

Which feeds this summary.

image

And an NPV cost savings of 37%.

Besides HP sharing its Flexible Data Center design approach, it has published a set of documents that anyone building their own data center can use.

Kfir, thanks for taking a step to share more information with the industry and show them a better path to a green data center: economical, efficient, and environmental.


HP introduces Butterfly Flexible Data Center design, reducing CAPEX by 50%

OK, my Microsoft past haunts me.  HP introduces a "butterfly" data center design and I think of the MSN butterfly.

HP's butterfly looks different than MSN's, and I doubt we'll see an HP data center staff in a butterfly outfit, but the building does look like a butterfly.

HP Flexible DC “butterfly” design

HP Flexible DC is based on a “butterfly” design featuring four prefabricated quadrants, or modules, that stem off a central administrative section. The offering uses industrial components to improve cost efficiencies as well as a streamlined building process with a variety of options for power and cooling distribution.

image

Joking aside, I was able to talk to Kfir Godrich, CTO of HP critical infrastructure.

“Clients, such as financial service providers, government entities, and cloud and colocation hosts, will find the scalable and modular nature of HP Flexible DC a compelling option,” said Kfir Godrich, chief technology officer, Technology Services, HP. “HP can help clients innovate the way they build and operate a greenfield data center for greater savings over its life span.”

I am writing this blog post before the official release, and I will update this blog with the press release link.

HP Flexible Data Center Reduces Clients’ Upfront Capital Investment Requirements by Half, Optimizes Resource Use

Design delivers flexibility, lowers carbon footprint

PALO ALTO, Calif., July 27, 2010

HP today introduced a new way for clients to cut capital investment requirements for the design and build of data centers in half while significantly decreasing their carbon footprint.(1)

The patent-pending HP Flexible Data Center (HP Flexible DC) offers a standardized, modular approach to designing and building data centers that allows clients to replace traditional data center designs with a flexible solution that can be expanded as needed while conserving resources.

Some facts that data center folks will care about are:

  • 3.2 MW is the overall capability of the total butterfly building.
  • Each module is 800 kW, and you can deploy partially, supporting 800, 1600, 2400, and 3200 kW increments.
  • The central core is shared building support space for the four modules.
  • You can use multiple 3.2 MW deployments for a campus approach, as shown below.

image

  • PUE is in the 1.2 - 1.25 range.
  • The design is modular to support multiple power and cooling systems, using multiple vendors while maintaining a high degree of integration across the systems.
  • Total square footage for a 3.2 MW configuration is 25,000.
  • Removing complexity in the system increases availability, efficiency, and cost effectiveness.
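Those numbers can be combined for a quick sanity check (my own arithmetic, using only the figures in the list above):

```python
# Quick derived figures from the fact list above.
it_load_kw = 3200          # total critical power of the butterfly building
pue_range = (1.20, 1.25)   # quoted PUE range
floor_sqft = 25000         # total square footage for the 3.2 MW configuration

# Total facility power = IT load * PUE.
total_power_kw = [it_load_kw * pue for pue in pue_range]
print(f"total facility power: {total_power_kw[0]:.0f}-{total_power_kw[1]:.0f} kW")

# Critical power spread across the whole footprint.
watts_per_sqft = it_load_kw * 1000 / floor_sqft
print(f"critical power density: {watts_per_sqft:.0f} W/sq ft")  # 128 W/sq ft
```

So a fully built-out butterfly draws roughly 3.8-4.0 MW from the utility and runs at a dense 128 W of critical power per square foot.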

Here are images from the official presentation.

The one slide I would add if I were creating the presentation is "Why a new Data Center design?" where HP would explain the problems it sees customers having and then how Flexible DC creates a new approach to DC design.

image

Modularity of the system supports 800 kW increments.  BTW, Kfir said you could deploy a 400 kW configuration instead of 800 kW.

image

To support a low PUE, hot air is exhausted through the roof.  In the images HP provided you can't see the roof system, which led me to think HP has a patent in process for the roof.  Note Yahoo patented its Chicken Coop design.  Kfir also made it clear the design with 4 cooling methods can support deployments anywhere in the world, and that trade-offs can be made where water is an expensive resource.

image

Don't ask what Tier the design is.  It is designed for availability, energy and cost efficiency, not to meet a Tier standard.

image

I've been spending more time thinking about how you present data center issues to the CFO and so has HP.

image

There were six things that impressed me:

  1. The number of topics Kfir and I could cover in 25 minutes.
  2. The quality of the presentation and information that HP has in the Flexible Data Center Solution.  They can use this same presentation for CxOs and data center geeks.  Although I would add a "Why change data center design?" slide.  On the other hand, criticizing decisions people have made in the past could upset some of the audience and make them defensive, so to be on the safe side I can see HP's choice of not calling out what is wrong with data center design.
  3. A focus on the supply chain.  If you are going to use HP's approach, you could efficiently add data center capacity every year or more often.  Like a Just In Time manufacturing approach, this reduces the data center building inventory, since you can add additional space in as little as 3 months.  The current approach of building for 5 years of data center needs turns into asking what you need in the next 3 - 6 months.
  4. This is going to get a lot of people thinking about how they approach data center capacity.  How many were trying to save 10% in data center construction, and HP says hey you can save 50%?
  5. Building in smaller increments allows management to see data center building costs on a regular basis.
  6. The Butterfly Flexible DC design is a good alternative for Cloud Computing.

image

As HP's Flexible DC hits the market it will be interesting to watch the media coverage and customer interest.  I've already talked to my friends to tell them HP's Butterfly Flexible Data Center is something they should look at.

Five years ago who would have thought HP would have Data Center PODs and Flexible Data Centers?

image


Lee Tech's Top 10 Data Center Operating Mistakes, Part 2 updated for an architecture order

I wrote on Lee Tech's Top 10 Data Center Mistakes, and made the point of reordering the 10 mistakes for an architecture approach. 

I like the list, but I would change the ordering to create an architecture approach for looking at the issues.  8, 2, 10, 4, 1, 9, 3, 6, 7, 5 is a quick pass at an order I would choose, but I admit this is 3 minutes of thinking about it.

You can go to the updated post here.

Dave, you make two really great points:

  • I think the value of a team discussion is enormous, as each customer is different in requirements, constraints, and at what point in the process they currently find themselves.  This discussion would bring out multiple viewpoints and arrive at an even better place than they had started.
  • I had not thought in terms of architecture when sequencing the list, and your three minute analysis is great.  I would make one small change by moving 4 to follow 3 but could see the argument on both sides.

So if you think in terms of building a foundation on which each would progressively build, quality should be engineered into every system and process to ensure you can provide a sustainable solution.  Too many data centers have been subjected to enormous impacts that could have been avoided either during construction or operation.  Often, I hear about data centers no more than five years old having enormous infrastructure issues.  So if I wrote the paper over again, I think I would adopt your approach and sequence the list as follows:

Big Mistake #8:
Failure to develop and implement Quality Systems

Big Mistake #2:
Relying too much on data center design

Big Mistake #10:
Thinking you can build a best in breed program as quickly as a data center

<the remaining 7>

Click this link to get the rest of the order.

I could have asked to have this embedded as a comment, but I don't spend much time supporting comments on GreenM3.

Lee Kirby - thanks for updating your document.


Meditating, Higher Consciousness, Situation Awareness, Seeing Things in Data Centers most miss

I had one of my local high tech friends come by for an afternoon chat.  It was the first time he had come over, and we spent a couple of quiet hours discussing all kinds of topics including data centers.  My friend appreciated the Zen and quiet time I get working from home.  Whenever someone suggests travel, I try to figure out how I can skip the plane flight. :-)

The next day I got up in the morning, and started writing.

image

This is the view to the south at the same time the sun was rising above.

image

As evening approached and the moon was almost full, I used the quiet time to meditate and try some higher consciousness exercises.

The concept of higher consciousness rests on the belief that the average, ordinary human being is only partially conscious due to the character of the untrained mind and the influence of 'lower' impulses and preoccupations. As a result, most humans are considered to be asleep (to reality) even as they go about their daily business.

image

As the night progressed, I decided to have my own outdoor movie and brought out my laptop to stream a Netflix movie, "The Sensei."

A DIFFERENT KIND OF MARTIAL ARTS FILM

A Different Kind of Martial Arts Film: D. Lee Inosanto’s ‘The Sensei’ battles prejudice and homophobia in 1980s small town in Colorado

If there’s one thing D. Lee Inosanto is no stranger to, it’s martial arts. Her father is martial arts legend Dan Inosanto, her godfather was the late Bruce Lee (whom she refers to simply as “Uncle Bruce”), and Inosanto herself is a highly trained martial artist who has worked as a stunt person on projects from Buffy the Vampire Slayer to Face/Off.

image

Then the next morning it hit me: the connection between the method of higher consciousness and situation awareness.

Situation awareness, or SA, is the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. It is also a field of study concerned with perception of the environment critical to decision-makers in complex, dynamic areas from aviation, air traffic control, power plant operations, military command and control, and emergency services such as fire fighting and policing; to more ordinary but nevertheless complex tasks such as driving an automobile or motorcycle.

Situation awareness (SA) involves being aware of what is happening around you to understand how information, events, and your own actions will impact your goals and objectives, both now and in the near future. Lacking SA or having inadequate SA has been identified as one of the primary factors in accidents attributed to human error (e.g., Hartel, Smith, & Prince, 1991; Merket, Bergondy, & Cuevas-Mesa, 1997; Nullmeyer, Stella, Montijo, & Harden, 2005). Thus, SA is especially important in work domains where the information flow can be quite high and poor decisions may lead to serious consequences (e.g., piloting an airplane, functioning as a soldier, or treating critically ill or injured patients).

There are some people I am working with to apply the ideas of situation awareness (a higher level of consciousness) to the data center.  Here is a situation awareness demo using Geographic Information System (GIS) information from ESRI.

Part of the fun things I am working on is with people who have situation awareness, a higher level of consciousness of what is going on in data centers.  The challenge we have is discussing things we discover with people "who don't know what they don't know."

A Chinese proverb states:

He who knows not, and knows not that he knows not, is a fool...shun him.

He who knows not, and knows that he knows not, is willing...teach him.

He who knows, and knows not that he knows, is asleep...awaken him.

He who knows, and knows that he knows, is wise...follow him.

How many data center errors/mistakes could be prevented if systems and methods were set up to support situational awareness, a consciousness of what is going on?

Eric Gallant, a Lee Tech employee, commented on this concept.

Eric Gallant said...

Excellent thought Dave. Prior to getting into the data center industry, I operated nuclear power plants for the Navy. After countless hours standing watch in the engine room, my senses became tuned to the environment. I could recognize a change in the pitch of a steam turbine, detect the slightest hint of acrid odor from a switchgear section and identify suspicious vibrations in the deck plates through the soles of my boots. My analog senses were often more useful than the abundant digital meters and detectors that monitored plant conditions.
The same phenomenon can be found in experienced data center professionals. I’ve seen data center operators sprint from their offices before the first alarm sounds because of a barely perceptible change in the quality of the light. I’ve even seen an engineer diagnose a bad CRAH shaft bearing by pressing his forehead against the front of the running machine.
As managers and engineers we focus on metrics. As the old saw goes, "you can't manage what you can't measure." And a good deal of engineering is knowing how, what and why to measure. Perhaps we focus on the measurable and quantifiable to the detriment of our more visceral abilities.


Top 10 Mistakes/Opportunities in Data Center Operations, my #1: implement a quality system

Lee Technologies has put out a paper on the Top 10 mistakes in Data Center Operations.  They previously posted on the Top 9 Data Center Design Mistakes, and I posted a comment with my #1.

Having spent 30 years in high tech working at HP, Apple, Microsoft and consulting, I've seen my share of mistakes.  You can choose the "ignorance is bliss" strategy and, as long as no one else notices, things are fine.  Or you can look at a mistake as an opportunity.

Mistakes Merely Opportunities in Disguise
OfficePRO magazine, November/December 1998 issue

Accept your mistakes, accept yourself, and turn blunders and missteps into lessons learned

By Dr. Gene Sharratt and Eldene Wall, CEOE

Mistakes are a part of life. We all make mistakes, but the real mistake is not to learn from them. How can mistakes be turned into opportunities? Effective office professionals acknowledge that errors happen. Most importantly, they learn from their mistakes and move forward.

Each person reacts to mistakes differently, but it's natural to feel angry and disappointed when errors are made. While these are normal responses, your reaction to mistakes largely determines what you learn from them. Some people criticize and belittle themselves for their errors longer than necessary, which can be counterproductive to professional growth.

Why are mistakes so painful? Whether a huge and costly mistake, or a relatively minor one, individuals often feel a strong sense of personal failure. While criticism is usually painful and can even be traumatic, the personal disappointment a person feels can be devastating.

It can be hard to address mistakes: few want to discuss the topic when millions of dollars are spent on data centers, and too many people have seen others dismissed or unfairly punished for mistakes made.

image

image

image

One way to break through this barrier is to look at the Top 10 mistakes in data center operations as a guide to run an inventory of where you are.

Lee Kirby, the paper's author, starts with a piece of data center wisdom.

How can you avoid making major mistakes when operating
and maintaining your data center(s)? The key lies in the
methodology behind your operations and maintenance
program. All too often, companies put immense amounts
of capital and expertise into the design of their facilities.
However, when construction is complete, data center
operations are an afterthought. This whitepaper explores
the top ten mistakes in data center operations.

For those of you who want to know what the top 10 are, here is the summary.

Big Mistake #1:
Not including your operations team in facility design

Big Mistake #2:
Relying too much on data center design

Big Mistake #3:
Failure to correctly address the staffing
requirement

Big Mistake #4:
Failure to train and develop your talent

Big Mistake #5:
Failing to consistently drill and test skills

Big Mistake #6:
Failure to overlay your operations program with
documented processes and procedures

Big Mistake #7:
Failure to implement appropriate processes and
procedures

Big Mistake #8:
Failure to develop and implement Quality Systems

Big Mistake #9:
Failure to use software management tools

Big Mistake #10:
Thinking you can build a best in breed program as
quickly as a data center

I like the list, but I would change the ordering to create an architecture approach for looking at the issues.  8, 2, 10, 4, 1, 9, 3, 6, 7, 5 is a quick pass at an order I would choose, but I admit this is 3 minutes of thinking about it. 

You can use the Lee Tech paper in a staff meeting to discuss the Top 10 data center operations mistakes made by others, create your own order, assess where you are, and decide which areas warrant investment.

I would start by asking whether you have a quality system (item #8) in place, and whether the quality group is rewarded for finding mistakes and providing early feedback.

Many companies err in thinking that process, once proven, is infallible.
Continuous improvement is the only way to ensure your data center
operations are efficient, reliable and cost effective. A program for quality
systems consists of two principles:
• Quality Assurance (QA): processes to ensure that errors are not
introduced into the system
• Quality Control (QC): measures taken at various stages of the
process to proactively identify problems that could potentially lead
to system failure

Or you can go with the "ignorance is bliss" strategy.

image

image

BTW, eliminating mistakes is another way to reduce environmental impact for a greener data center.  Look at the environmental impact of BP's mistake.  The fewer mistakes made, the less the environmental impact.
