American Express’s “Project Green” Data Center

The Winston-Salem Journal reports on American Express's latest data center in NC, "Project Green," a project about which details are hard to come by.

American Express making plans to hide in plain sight

By Richard M. Barron

NEWS & RECORD

Published: September 7, 2010

GREENSBORO

American Express has shrouded its major data-center project in secrecy from the moment that economic developers said in May that Guilford County had landed the $600 million project.

The company has declined requests to comment about the center it plans to build on two sites near Interstate 40 at Rock Creek Dairy Road. Many people working with the company in business or government relationships are not returning calls or are taking great pains not to let much information slip out.

The reason becomes clear when you get inside the philosophy of the data center industry.

"Do not make public any information about your operations," according to a report by SANS Institute, a data-center security consultant, issued not long after the Sept. 11 attacks.

"This includes but is not limited to location, staff, design or security features, type of equipment, etc. The smallest pieces of information can be used to compromise security."

But given that the facility is a $600 million, 510,000-square-foot project, it is kind of ridiculous to think people won't know where the site is.

For all its secrecy, however, American Express is building its data center in a remarkably public place: Rock Creek Center, the Triad's largest business park with 1,400 acres.

The park is just south of Interstate 40/85, one of the state's busiest sections of road. The data center will sit south of Franz Warner Parkway, which runs through the park, and just north of a housing development with 375 lots.

The company plans to hide in plain sight.

The site will be 75% vacant.

After it builds its data center and two large power substations, American Express will leave 75 percent of its land essentially vacant.

As is typical, the cost of the site is about 1% of the cost of the building.

American Express paid $450,000 for that site and $5.62 million for the 107.8 acres it bought in Rock Creek Center, roughly $6.1 million combined, or about 1 percent of the $600 million project.

American Express has two parcels, and one may become a backup data center.

Earlier this month, the Greensboro City Council unanimously approved the annexation and zoning for a site on the north side of Interstate 85 that is expected to be used as a backup data center. The annexation will allow city water and sewer to be extended to the 145 acres. The zoning change will allow a business park to be built on agriculture land.


IDS launching sea green data center space

Rich Miller of DataCenterKnowledge reports on IDS's ocean-port-based data center ship.

IDS Readies Data Centers on Ships

August 9th, 2010 : Rich Miller

In early 2008, startup International Data Security revealed plans to build a fleet of data centers on cargo ships docked at ports in the San Francisco Bay. After an initial flurry of publicity, the company receded from the spotlight amid industry chatter of funding challenges.

Now IDS is back, and the company says it has lined up funding and an anchor tenant for a proof-of-concept “maritime data center” that will dock at Redwood City, Calif. The first vessel is a former training ship for the California Maritime Academy that IDS has acquired and is prepping for renovation. IDS representatives say the company has lined up $15 million for an initial deployment of 500 racks of servers.

One of the points raised is the concern about salt water.

Concept Brings Curiosity, Skepticism
IDS said it has experienced the same mix of curiosity and skepticism. “A lot of the conversations we’ve had with data center operators have been around questions like ‘do you want to put this kind of equipment close to salt water’ and ‘is the rolling motion of the ship a problem,’” said Prince. “The reality is that the Navy has had data centers on war-fighting ships for 20 years or more.”

I've been on the USS Abraham Lincoln aircraft carrier looking at some of its IT systems.

image

The data center space is isolated, as IDS says, but the cost of all the other systems supporting that space is beyond most budgets.

I am curious how much lower IDS's costs would be if it could set up a floating data center in a freshwater port.


HP Butterfly Flexible Data Center, Part 2 - 20-year NPV 37% lower than traditional

I just posted about HP's Butterfly Flexible Data Center.  Now that HP has officially announced the solution, there is more on the HP Press Room.

Economical, Efficient, Environmental is a theme for HP's video presentation.

image

 

Here are the numbers HP uses to demonstrate a 50% lower CAPEX.

image

And lower OPEX.

image

And HP discusses yearly water usage. Yippee!!!

image 

Typically, data centers use 1.90 liters of water
per kWh of total electricity. A one-megawatt data
center with a PUE of 1.50 running at full load for
one year is expected to consume 13 million kWh
and will consume 6.5 million U.S. gallons of water
annually. FlexDC uses no water in some climates and
dramatically reduces the consumption of water in
others. Actual amounts can vary depending on system
selection and local weather patterns.
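
As a quick sanity check, here is a minimal Python sketch (my own arithmetic, not HP's tool) that reproduces the white paper's water and electricity figures:

# Reproduce the quoted water-usage arithmetic; the inputs come from the
# white paper, the conversion constants are standard.
HOURS_PER_YEAR = 8760
LITERS_PER_US_GALLON = 3.785

it_load_kw = 1000        # one-megawatt data center (IT load)
pue = 1.50               # power usage effectiveness
water_l_per_kwh = 1.90   # liters of water per kWh of total electricity

total_kwh = it_load_kw * pue * HOURS_PER_YEAR
water_gallons = total_kwh * water_l_per_kwh / LITERS_PER_US_GALLON

print(f"Annual electricity: {total_kwh / 1e6:.1f} million kWh")               # ~13.1 million kWh
print(f"Annual water use: {water_gallons / 1e6:.1f} million US gallons")      # ~6.6 million gallons

That lands within rounding of the white paper's 13 million kWh and 6.5 million gallons.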

HP has a white paper that is a must read for anyone designing a data center.

Introduction
If we briefly go back to the late 1960s and the advent
of the transistor, efficiencies and cycles of innovation in
the world of electronics have increased according to
Moore’s law. However, data center facilities, which
are an offshoot of this explosion in technology, have
not kept pace with this legacy. With the magnitude of
capital required and costs involved with the physical
day-to-day operations of data centers, this existing
paradigm could impede the growth of data center
expansions, unless a new group of innovative solutions
is introduced in the market place.


With sourcing capital becoming more difficult to
secure, linked with potential reductions in revenue
streams, an environment focused on cost reduction
has been generated. The pressure to reduce capital
expenditure (CAPEX) is one of the most critical issues
faced by data center developers today. This is helping
to finally drive innovation for data centers.


The key contributors which can reduce CAPEX
and operational expenditure (OPEX) are typically
modularity, scalability, flexibility, industrialization,
cloud computing, containerization of mechanical and
electrical solutions, climate control, expanding criteria
for IT space, and supply chain management. All these
factors come into play when planning a cost-effective
approach to data center deployment. Every company
that develops and operates data centers is attempting
to embrace these features. However, businesses
requiring new facilities usually do not explore all the
strategies available. Generally, this is either due to
lack of exposure to their availability or a perceived
risk associated with changes to their existing
paradigm. Emerging trends such as fabric computing
further exacerbate the silo approach to strategy and
design, where “what we know” is the best direction.

The four cooling system alternatives are:

This adaptation of an
industrial cooling approach includes the following
cooling technologies: air-to-air heat exchangers with
direct expansion (Dx) refrigeration systems; indirect
evaporation air-to-air heat exchangers with Dx assist;
and direct evaporation and heat transfer wheel with
Dx assist.

Reducing fan power.  Fan power is a hidden inefficiency in the data center whether in the mechanical systems or IT equipment.  HP discusses how it reduces fan power.

To obtain the maximum use of the environment, supply
air temperature set points need to be set at the highest
temperature possible and still remain within the
warranty requirement range of the IT equipment. The
next critical component is to control the temperature
difference between the supply and return air streams
to a minimum range of 25° F. This reduces the amount
of air needed to cool the data center, thus reducing
fan energy. The configuration of the data center in
general must follow certain criteria in order to receive
greater benefits available through the use of this
concept, as follows:
• Server racks are configured in a hot aisle
containment (HAC) configuration.
• There is no raised floor air distribution.
• The air handlers are distributed across a common
header on the exterior of the building for even air
distribution.
• Supply air diffusers are located in the exterior wall,
connected to the distribution duct. These diffusers
line up with the cold aisle rows.
• The room becomes a flooded cold aisle.
• The hot aisle is ducted to a plenum, normally
created through the use of a drop ceiling. The hot
air shall be returned via the drop ceiling plenum
back to the air handlers.
• Server racks are thoroughly sealed to reduce the
recirculation of waste heat back into the inlets of
nearby servers.
• Server layout is such that the rows of racks do not
exceed 18 feet in length.
The control for the air handlers shall maintain
maximum temperature difference between the supply
and return air distribution streams. The supply air
temperature is controlled to a determined set point
while the air amount is altered to maintain the desired
temperature difference by controlling the recirculation
rate in the servers.
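
The reasoning behind the 25°F delta-T is worth making concrete. Below is a back-of-the-envelope sketch using the standard sensible-heat airflow formula and the fan affinity laws; the 800 kW load and the 15°F comparison point are my own illustrative assumptions, not HP's numbers.

# Why a wider supply/return temperature difference cuts fan energy.
# CFM = BTU/hr / (1.08 * delta_T_F) is the standard sensible-heat formula;
# fan power scales roughly with the cube of airflow (fan affinity laws).
it_load_kw = 800                  # e.g. one 800 kW block (illustrative)
heat_btu_hr = it_load_kw * 3412   # 1 kW is about 3,412 BTU/hr

def required_cfm(delta_t_f):
    return heat_btu_hr / (1.08 * delta_t_f)

cfm_15f = required_cfm(15)        # a tighter delta-T for comparison
cfm_25f = required_cfm(25)        # HP's minimum 25 F delta-T

print(f"Airflow at 15 F delta-T: {cfm_15f:,.0f} CFM")
print(f"Airflow at 25 F delta-T: {cfm_25f:,.0f} CFM")
print(f"Relative fan power at 25 F: {(cfm_25f / cfm_15f) ** 3:.0%}")   # ~22%

Roughly 60% of the airflow and, all else being equal, around a fifth of the fan power.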

Electrical distribution techniques are listed as well.

Traditional data centers have electrical distribution
systems based on double conversion UPS with battery
systems and standby generators. There are several
UPS technologies offered within FlexDC, which
expands the traditional options:
• Rotary UPS—94% to 95% energy efficient
• Flywheel UPS—95% energy efficient
• Delta Conversion UPS—97% energy efficient
• Double Conversion UPS—94.5% to 97% energy
efficient
• Offline UPS—Low-voltage version for the 800 kW
blocks, about 98% energy efficient
FlexDC not only specifies more efficient transformers
as mandated by energy standards, it also uses best
practices for energy efficiency. FlexDC receives power
at medium voltage and transforms it directly to a
server voltage of 415 V/240 V. This reduces losses
through the power distribution unit (PDU) transformer
and requires less electrical distribution equipment,
thus, saving energy as well as saving on construction
costs. An additional benefit is a higher degree of
availability because of fewer components between the
utility and the server.
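
To see why skipping the PDU transformer and using a higher-efficiency UPS matters, here is a rough sketch of chained distribution losses. Only the UPS efficiencies come from the white paper; the 98% PDU transformer figure and the flat-out 3.2 MW load are my own assumptions.

# Chained conversion losses: traditional UPS plus PDU transformer versus a
# flywheel UPS feeding 415 V/240 V directly (no PDU transformer stage).
HOURS_PER_YEAR = 8760
it_load_kw = 3200   # full FlexDC critical load, running flat out (illustrative)

def annual_loss_kwh(*stage_efficiencies):
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return it_load_kw * (1 - eff) / eff * HOURS_PER_YEAR

traditional = annual_loss_kwh(0.945, 0.98)   # double-conversion UPS + assumed PDU transformer
flexdc_style = annual_loss_kwh(0.95)         # flywheel UPS, direct 415 V distribution

print(f"Traditional distribution losses: {traditional / 1e6:.2f} GWh/year")
print(f"FlexDC-style distribution losses: {flexdc_style / 1e6:.2f} GWh/year")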

And HP takes a modeling approach.

HP has developed a state-of-the-art energy evaluation
program, which includes certified software programs
and is staffed with trained engineers to perform a
comprehensive review of the preliminary system
selections made by the customer. This program
provides valuable insight to the potential performance
of the systems and is a valuable tool in final system
selection process. The following illustrations are typical
outputs for the example site located in Charlotte,
North Carolina. This location was chosen due to its very
reliable utility infrastructure and its ability to attract
mission-critical type businesses. The illustrations
compare a state-of-the-art data center designed using
current approaches and HP FlexDC for the given
location.

Figure 4: State-of-the-art data center annual electricity consumption

Comparing two different scenarios.

Scenario A: Base case state-of-the-art
brick-and-mortar data center.
A state-of-the-art legacy data center’s shell is typically
built with concrete reinforced walls. All of the cooling
and electrical systems are located in the same shell.
Traditional legacy data center cooling systems entail
the use of large central chiller plants and vast piping
networks and pumps to deliver cooling to air handlers
located in the IT spaces. Electrical distribution systems
typically are dual-ended static UPS systems with good
reliability but low efficiencies due to part-loading
conditions. PUE for a traditional data center with tier
ratings of III and above are between 1.5–1.9.


Scenario B: HP FlexDC
The reliability of the system configuration is equivalent
to an Uptime Institute Tier III, distributed redundant.
The total critical power available to the facility is
3.2 MW. The building is metal, using materials
standard within the metal buildings industry. The
electrical distribution system is a distributed redundant
scheme based on a flywheel UPS system located in
prefabricated self-contained housings. The standby
generators are located on the exterior of the facility
in prefabricated self-contained housing with belly tank
fuel storage.
The mechanical cooling systems are prefabricated
self-contained air handlers with air-to-air heat
exchangers using Dx refrigerant cooling to assist
during periods of the year when local environment is
not capable of providing the total cooling for the data
center IT space.
The IT white space is a non-raised floor environment.
The IT equipment racks are arranged in a hot aisle
containment configuration. The hot return air is
directed into the drop ceiling above and returned to
the air handlers.
The following life-cycle cost analysis matrix quantifies
the CAPEX and OPEX costs and the resultant PV
dollars for the base case and the alternative scenario.

Which feeds this summary.

image

And an NPV cost savings of 37%.
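
HP's 37% figure comes from its own life-cycle cost matrix, which isn't reproduced here, but the mechanics of the comparison are simple to sketch. The CAPEX, OPEX, and discount-rate numbers below are placeholders of my own that happen to land in the same neighborhood:

# 20-year net present value of upfront CAPEX plus discounted annual OPEX.
def npv(capex, annual_opex, years=20, discount_rate=0.08):
    return capex + sum(annual_opex / (1 + discount_rate) ** year
                       for year in range(1, years + 1))

traditional = npv(capex=60e6, annual_opex=4.0e6)   # hypothetical brick-and-mortar build
flexdc = npv(capex=30e6, annual_opex=3.3e6)        # ~50% CAPEX and lower OPEX, per HP's claims

print(f"Traditional 20-year NPV: ${traditional / 1e6:.0f}M")
print(f"FlexDC 20-year NPV: ${flexdc / 1e6:.0f}M")
print(f"NPV savings: {1 - flexdc / traditional:.0%}")   # ~37% with these made-up inputs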

Besides sharing its Flexible Data Center design approach, HP has published a set of documents that anyone building their own data center can use.

Kfir, thanks for taking a step to share more information with the industry and show it a better path to greening a data center: economically, efficiently, and environmentally.


HP introduces Butterfly Flexible Data Center design, reducing CAPEX by 50%

OK, my Microsoft past haunts me.  HP introduces a "butterfly" data center design and I think of the MSN butterfly.

HP's butterfly looks different than MSN's, and I doubt we'll see HP data center staff in butterfly outfits, but the building does look like a butterfly.

HP Flexible DC “butterfly” design

HP Flexible DC is based on a “butterfly” design featuring four prefabricated quadrants, or modules, that stem off a central administrative section. The offering uses industrial components to improve cost efficiencies as well as a streamlined building process with a variety of options for power and cooling distribution.

image

Joking aside, I was able to talk to Kfir Godrich, CTO of HP critical infrastructure.

“Clients, such as financial service providers, government entities, and cloud and colocation hosts, will find the scalable and modular nature of HP Flexible DC a compelling option,” said Kfir Godrich, chief technology officer, Technology Services, HP. “HP can help clients innovate the way they build and operate a greenfield data center for greater savings over its life span.”

I am writing this blog post before the official release, and I will update this blog with the press release link.

HP Flexible Data Center Reduces Clients’ Upfront Capital Investment Requirements by Half, Optimizes Resource Use

Design delivers flexibility, lowers carbon footprint

PALO ALTO, Calif., July 27, 2010

HP today introduced a new way for clients to cut capital investment requirements for the design and build of data centers in half while significantly decreasing their carbon footprint.(1)

The patent-pending HP Flexible Data Center (HP Flexible DC) offers a standardized, modular approach to designing and building data centers that allows clients to replace traditional data center designs with a flexible solution that can be expanded as needed while conserving resources.

Some facts that data center folks will care about are:

  • 3.2 MW is the overall capability of the total butterfly building.
  • Each of the modules is 800 kW, and you can deploy them incrementally in 800, 1,600, 2,400, and 3,200 kW steps.
  • The central core is shared building support space for the four modules.
  • You can use multiple 3.2 MW deployments for a campus approach, as shown below

image

  • PUE is in the 1.2 - 1.25 range.
  • The design is modular to support multiple power and cooling systems, using multiple vendors while maintaining a high degree of integration across the systems.
  • Total square footage for a 3.2 MW configuration is 25,000 (see the quick arithmetic after this list).
  • Removing complexity in the system increases availability, efficiency, and cost effectiveness.
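
For a sense of scale, here is quick arithmetic on those facts; the only inputs are the figures HP listed above.

# Power density and full-load draw implied by HP's published figures.
critical_power_kw = 3200
module_kw = 800
square_feet = 25000
pue_low, pue_high = 1.20, 1.25

print(f"Modules per butterfly: {critical_power_kw // module_kw}")
print(f"Critical power density: {critical_power_kw * 1000 / square_feet:.0f} W per square foot")
print(f"Facility draw at full load: {critical_power_kw * pue_low / 1000:.2f} "
      f"to {critical_power_kw * pue_high / 1000:.2f} MW")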

Here are images from the official presentation.

The one slide I would add if I were creating the presentation is "Why a new data center design?", where HP would explain the problems it sees customers having and then show how Flexible DC creates a new approach to DC design.

image

Modularity of the system supports 800 kW increments.  BTW, Kfir said you could deploy a 400 kW configuration instead of 800 kW.

image

To support a low PUE, hot air is exhausted through the roof.  In the images HP provided you can't see the roof system, which led me to think HP has a patent in process for the roof.  Note that Yahoo patented its Chicken Coop design.  Kfir also made it clear the design, with its four cooling methods, can support deployments anywhere in the world, and that trade-offs can be made where water is an expensive resource.

image

Don't ask what Tier the design is.  It is designed for availability, energy and cost efficiency, not to meet a Tier standard.

image

I've been spending more time thinking about how you present data center issues to the CFO and so has HP.

image

There were six things that impressed me:

  1. The number of topics Kfir and I could cover in 25 minutes.
  2. The quality of the presentation and information that HP has in the Flexible Data Center solution.  They can use this same presentation for CxOs and data center geeks, although I would add a "Why change data center design?" slide.  On the other hand, criticizing decisions people have made in the past could upset some of the audience and make them defensive, so to be on the safe side I can see why HP chose not to call out what is wrong with current data center design.
  3. A focus on the supply chain.  If you are going to use HP's approach, you could efficiently add data center capacity every year or more often.  Like a Just-In-Time manufacturing approach, this reduces the data center building inventory, since you can add additional space in as little as three months.  The current approach of building for five years of data center needs turns into asking what you need in the next three to six months.
  4. This is going to get a lot of people thinking about how they approach data center capacity.  How many were trying to save 10% on data center construction, only to have HP say you can save 50%?
  5. Building in smaller increments allows management to see data center building costs on a regular basis.
  6. The Butterfly Flexible DC design is a good alternative for Cloud Computing.

image

As HP's Flexible DC hits the market it will be interesting to watch the media coverage and customer interest.  I've already talked to my friends to tell them HP's Butterfly Flexible Data Center is something they should look at.

Five years ago who would have thought HP would have Data Center PODs and Flexible Data Centers?

image


Message to the CFO: green the data center by asking for a multi-tier design, saving 15 - 20% - HP Video

HP has a video that illustrates some concepts for greening the data center. I don't think many of you have seen this video as YouTube shows there are only 64 views.

1) Multi-tier design.  I am amazed at how many times people create data center space without thinking of the high-rent and low-rent districts.  The high-availability space is more expensive, and business units should be charged more to put IT services in these areas.  If you don't provide a choice, then everyone will pay the higher cost.  HP says it can save 15 - 20% of data center costs with a multi-tier design.  A rough cost illustration follows the screenshots below.

image

image
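
To make the high-rent/low-rent point concrete, here is a rough chargeback sketch. The per-kW costs and the workload mix are assumptions of mine; only the 15 - 20% savings claim comes from HP, and this made-up example lands in that range.

# Blended build cost when workloads are matched to the tier they actually need,
# versus putting everything in high-availability space. Illustrative numbers only.
cost_per_kw = {"tier_2": 11000, "tier_3": 16000}   # assumed build cost per kW of IT load
load_kw = {"tier_3": 1280, "tier_2": 1920}         # assume 40% of 3.2 MW truly needs Tier III

single_tier_cost = sum(load_kw.values()) * cost_per_kw["tier_3"]
multi_tier_cost = sum(kw * cost_per_kw[tier] for tier, kw in load_kw.items())

print(f"Everything in Tier III: ${single_tier_cost / 1e6:.1f}M")
print(f"Mixed-tier design: ${multi_tier_cost / 1e6:.1f}M")
print(f"Savings: {1 - multi_tier_cost / single_tier_cost:.0%}")   # ~19% with these assumptions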

2) Modeling. Data Center Cost Optimization modeling is demo'd as an HP critical infrastructure solution.  This starts at 3:30 min into the video.

HP Optimized Data Centers Simulation
Data Centers require a huge investment to design, build,
and then operate. It is critical to your business success
that your Data Center operates at peak efficiency to
achieve its business goals. Optimized Data Centers
Simulation from HP will help you do exactly that.

image

Two good ideas to improve data center design are to use multiple tiers in one facility and to model the design choices.

How many of you are cornered into one approach without the ability to see your choices?
