ARM Servers in Data Centers Are Inevitable

I've been discussing ARM servers in data centers for a while, and now it is becoming common for the media to ask when ARM will have servers in the data center.  GigaOm is one of those keeping up the momentum.

For ARM, It’s Server Side Up

By Om Malik, Jul. 29, 2010, 6:00pm PDT

Ian Drew, executive vice president of marketing at ARM Holdings, a Cambridge, U.K.-based company that makes semiconductors powering a majority of the smartphones, tablets, 70 percent of world’s hard drives and half the world’s printers, is on a whirlwind tour of Silicon Valley. And what everyone (including me) wants to talk to him about is servers, or rather low-power server chips that can power the data centers of tomorrow.

It's not just the media; customers are interested in low-power servers, too.

And what Drew and his cohorts are seeing is a radical revolution in the data centers. “While the x86 world focused on pure megahertz, we have focused on the megahertz per milliwatt,” Drew said during our conversation earlier today. “We focus on quarter-to-half milliwatts as a key metric.” Most of the new devices such as the iPhones don’t have heat sinks in them, he joked.

About two years ago I was talking to ARM about why they should go into the server business for data centers.  Now they are comfortable making that pitch themselves.

“If you look at our heritage (of low power chips) it makes perfect sense for us to be looking at the servers and the data centers,” said Drew. With “cooling” making up nearly half the capital expenditure and almost two-thirds of the operation expenses, Drew said power is going to be a bigger part of the conversation.

“Everyone is using the Web and the Web is more demanding today which means all of the stuff is going to run through data centers,” he noted. “Two things are very clear: there is going to be a lot of data and need for less power.” By getting the world to buy more edge devices (iPhones, iPads etc.), ARM is at the same time boosting demand for back-end computing infrastructure. Now by diversifying into the data center server business, it can make more money selling its low-power chip technology to server makers. In other words, ARM wins on both sides of the trade.

And GigaOm is even watching Microsoft look at ARM for data centers.

We also reported on a Microsoft job listing that sought a software development engineer with experience running ARM in the data center for the company’s eXtreme Computing group. For the last couple of decades, Intel’s x86 chips have gained dominance in the data center, but as power considerations begin to outweigh the benefits of a cheap, general purpose processor, other chip makers have started to smell blood. Nvidia is pushing its graphics processors for some types of applications, while Texas Instruments is researching the use of DSPs inside servers.

But, as ARM cautions, don't expect product soon.  This is a long-term game for ARM; it will be two years or more before we see these servers in volume.

But don’t expect this to happen overnight, Drew cautioned. “We are going to see some pilots over next year, but this is a long term initiative.” He believes that this long, continuous transition to lower-power server chips is going to take between three to five years. When I asked Drew what are those pilots, he declined to comment. From our reporting, we can easily tell you Microsoft, Smooth Stone and Marvell are experimenting with ARM-based server processors.


Worldwide www.greenm3.com visitors

For six months I have been using www.clustrmaps.com to see the global reach of this blog.  Here is a map of the last six months and the 186 countries that have visited this blog.

Thanks for continuing to visit this blog.  What you read influences what I write next.

-Dave Ohara

image

Here are the 186 countries.

1 United States (US)
2 United Kingdom (GB)
3 India (IN)
4 Canada (CA)
5 France (FR)
6 Germany (DE)
7 Australia (AU)
8 Netherlands (NL)
9 Japan (JP)
10 Singapore (SG)
11 Taiwan (TW)
12 Italy (IT)
13 Spain (ES)
14 Brazil (BR)
15 China (CN)
16 Korea, Republic of (KR)
17 Malaysia (MY)
18 Philippines (PH)
19 Sweden (SE)
20 Belgium (BE)
21 Hong Kong (HK)
22 Ireland (IE)
23 Europe (EU)
24 Poland (PL)
25 South Africa (ZA)
26 Switzerland (CH)
27 Turkey (TR)
28 Finland (FI)
29 Indonesia (ID)
30 Romania (RO)
31 Denmark (DK)
32 Russian Federation (RU)
33 New Zealand (NZ)
34 Thailand (TH)
35 Mexico (MX)
36 Israel (IL)
37 Pakistan (PK)
38 Portugal (PT)
39 Czech Republic (CZ)
40 Iran, Islamic Republic of (IR)
41 Norway (NO)
42 Egypt (EG)
43 Saudi Arabia (SA)
44 Ukraine (UA)
45 Austria (AT)
46 Greece (GR)
47 Hungary (HU)
48 United Arab Emirates (AE)
49 Argentina (AR)
50 Colombia (CO)
51 Bulgaria (BG)
52 Vietnam (VN)
53 Chile (CL)
54 Asia/Pacific Region (AP)
55 Holy See (Vatican City State) (VA)
56 Iceland (IS)
57 Slovakia (SK)
58 Peru (PE)
59 Slovenia (SI)
60 Lithuania (LT)
61 Sri Lanka (LK)
62 Croatia (HR)
63 Serbia (RS)
64 Latvia (LV)
65 Estonia (EE)
66 Costa Rica (CR)
67 Qatar (QA)
68 Venezuela (VE)
69 Jordan (JO)
70 Bangladesh (BD)
71 Puerto Rico (PR)
72 Nigeria (NG)
73 Luxembourg (LU)
74 Kenya (KE)
75 Kuwait (KW)
76 Lebanon (LB)
77 Tunisia (TN)
78 Morocco (MA)
79 Ghana (GH)
80 Ecuador (EC)
81 Bahrain (BH)
82 Malta (MT)
83 Oman (OM)
84 Nepal (NP)
85 Jamaica (JM)
86 Trinidad and Tobago (TT)
87 Dominican Republic (DO)
88 Uruguay (UY)
89 Macedonia (MK)
90 Mauritius (MU)
91 Algeria (DZ)
92 Cyprus (CY)
93 Bosnia and Herzegovina (BA)
94 Georgia (GE)
95 Belarus (BY)
96 Mongolia (MN)
97 Sudan (SD)
98 Brunei Darussalam (BN)
99 Syrian Arab Republic (SY)
100 Senegal (SN)
101 Guatemala (GT)
102 Albania (AL)
103 Uganda (UG)
104 Kazakstan (KZ)
105 Paraguay (PY)
106 Bolivia (BO)
107 Libyan Arab Jamahiriya (LY)
108 Cambodia (KH)
109 Maldives (MV)
110 Tanzania, United Republic of (TZ)
111 Mozambique (MZ)
112 Macau (MO)
113 Yemen (YE)
114 French Polynesia (PF)
115 Palestinian Territory (PS)
116 Armenia (AM)
117 Cote D'Ivoire (CI)
118 Cameroon (CM)
119 Isle of Man (IM)
120 Panama (PA)
121 Ethiopia (ET)
122 Honduras (HN)
123 Fiji (FJ)
124 Myanmar (MM)
125 Guyana (GY)
126 El Salvador (SV)
127 Moldova, Republic of (MD)
128 Reunion (RE)
129 Iraq (IQ)
130 Cuba (CU)
131 Angola (AO)
132 Nicaragua (NI)
133 Suriname (SR)
134 Jersey (JE)
135 Guadeloupe (GP)
136 Namibia (NA)
137 Botswana (BW)
138 Zimbabwe (ZW)
139 Bahamas (BS)
140 Netherlands Antilles (AN)
141 Azerbaijan (AZ)
142 Monaco (MC)
143 Rwanda (RW)
144 Bermuda (BM)
145 Cayman Islands (KY)
146 Grenada (GD)
147 Barbados (BB)
148 Greenland (GL)
149 Saint Kitts and Nevis (KN)
150 Montenegro (ME)
151 New Caledonia (NC)
152 Martinique (MQ)
153 Guernsey (GG)
154 Virgin Islands, British (VG)
155 Saint Vincent and the Grenadines (VC)
156 Papua New Guinea (PG)
157 American Samoa (AS)
158 Anguilla (AI)
159 Guam (GU)
160 Afghanistan (AF)
161 Benin (BJ)
162 French Guiana (GF)
163 Liberia (LR)
164 Guinea (GN)
165 Micronesia, Federated States of (FM)
166 Somalia (SO)
167 Tajikistan (TJ)
168 Madagascar (MG)
169 Swaziland (SZ)
170 Andorra (AD)
171 Aland Islands (AX)
172 Djibouti (DJ)
173 Solomon Islands (SB)
174 Burkina Faso (BF)
175 Belize (BZ)
176 Antigua and Barbuda (AG)
177 Virgin Islands, U.S. (VI)
178 Antarctica (AQ)
179 Haiti (HT)
180 Montserrat (MS)
181 Dominica (DM)
182 Gibraltar (GI)
183 Gambia (GM)
184 Kyrgyzstan (KG)
185 Saint Lucia (LC)
186 Bhutan (BT)

Could you work for Software Executives? IBM reorgs HW to be under SW

Data center organizations are interesting to watch, as many of the problems in how data centers get designed and run are influenced by organizational structure.  Jonathan Eunice guest writes on CNET News regarding IBM's reorg of HW under SW.

IBM's reorg shows shape of IT to come

by Jonathan Eunice

I'm always wary of analyzing corporate organizations and reporting structures. They change frequently--every year or two, in some companies--so they're always in flux. And the details of who reports to whom, or what they want to call this business unit or that--those things matter, a bit, but not nearly so much as the products and services a company offers, how it goes to market, who its competitors are, and so on. Finally, company structures are very "inside baseball"--the kind of detailed who's-on-first, who's-warming-up-in-the-bullpen information that industry insiders may find interesting, but that isn't really all that useful to most customers or investors.

But I'm going to make an exception here because the changes going on at IBM illustrate important structural changes in IT overall, how vendors approach the market (and each other), and what customers can expect from IT providers in the coming years.

The news is as simple as it is astounding: IBM's Systems and Technology Group (STG)--the unit that makes IBM's mainframes, Power and x86 servers, storage, and microprocessors--will henceforth report into the IBM Software Group (SWG). Strictly speaking, IBM hasn't been "a hardware company" for nearly 20 years. Software and services have grown up to be, both in terms of revenue and industry footprint, what the company's about. Everyone who's looked at IBM or industry finances realizes this. Still, when you started thinking about IBM as "a computer company" all those years ago, it's hard to fully internalize the idea that it's no longer really "a computer company."

Google has all its data center infrastructure and hardware report up to Urs Hoelzle.

Urs Hölzle is senior vice president of operations and Google Fellow at Google. As one of Google's first ten employees and its first VP of Engineering, he has shaped much of Google's development processes and infrastructure.

Urs, being a SW guy, looks at the data center as a computer for Google's software.

Before joining Google, he was an Associate Professor of Computer Science at UC Santa Barbara. He received a master's degree in computer science from ETH Zurich in 1988 and was awarded a Fulbright scholarship that same year. In 1994, he earned a Ph.D. from Stanford University, where his research focused on programming languages and their efficient implementation.

As IBM follows Google's model of having hardware report to a software executive, it will be interesting to watch who else follows.

And the world's changing. Enterprises are much more likely to start their IT planning and acquisitions with what they want and need to accomplish, not with pre-determinations of what hardware they'll run it on. That's a shift in customer thinking that benefits those focused on software and services. It similarly benefits those who consider non-IT issues such as financing, demand generation, and partner ecosystems equally important to products and services. IBM definitely thinks this way, about "the whole package," and not just about computers, or any other single element.

This is one way to bridge IT and Facilities.  Put it all under a software executive.


HP Butterfly Flexible Data Center, Part 2 - 20-year NPV 37% lower than traditional

I just posted about HP's Butterfly Flexible Data Center.  Now that HP has officially announced the solution, there is more on the HP Press Room.

Economical, Efficient, Environmental is a theme for HP's video presentation.

image

 

Here are the numbers HP uses to demonstrate a 50% lower CAPex

image

And lower OPex.

image

And HP discusses yearly water usage. Yippee!!!

image 

Typically, data centers use 1.90 liters of water per kWh of total electricity. A one-megawatt data center with a PUE of 1.50 running at full load for one year is expected to consume 13 million kWh and will consume 6.5 million U.S. gallons of water annually. FlexDC uses no water in some climates and dramatically reduces the consumption of water in others. Actual amounts can vary depending on system selection and local weather patterns.
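The arithmetic behind those figures is easy to sanity-check. Here is a minimal sketch (mine, not HP's) that reproduces the quoted numbers, assuming the one megawatt is IT load, the facility runs at full load all 8,760 hours of the year, and total facility energy is IT load times PUE:

```python
# Rough sanity check of the water-usage figures quoted above.
# Assumes the 1 MW is IT load, full load all year, and total energy = IT load x PUE.
HOURS_PER_YEAR = 8760
LITERS_PER_US_GALLON = 3.785

it_load_mw = 1.0          # one-megawatt data center (IT load)
pue = 1.50                # power usage effectiveness
water_l_per_kwh = 1.90    # liters of water per kWh of total electricity

total_kwh = it_load_mw * 1000 * pue * HOURS_PER_YEAR     # ~13.1 million kWh
water_liters = total_kwh * water_l_per_kwh               # ~25 million liters
water_gallons = water_liters / LITERS_PER_US_GALLON      # ~6.6 million U.S. gallons

print(f"Annual electricity: {total_kwh / 1e6:.1f} million kWh")
print(f"Annual water use:   {water_gallons / 1e6:.1f} million U.S. gallons")
```

Running it gives roughly 13 million kWh and 6.6 million U.S. gallons, in line with the paper's 13 million kWh and 6.5 million gallons.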

HP has a white paper that is a must read for anyone designing a data center.

Introduction
If we briefly go back to the late 1960s and the advent of the transistor, efficiencies and cycles of innovation in the world of electronics have increased according to Moore’s law. However, data center facilities, which are an offshoot of this explosion in technology, have not kept pace with this legacy. With the magnitude of capital required and costs involved with the physical day-to-day operations of data centers, this existing paradigm could impede the growth of data center expansions, unless a new group of innovative solutions is introduced in the marketplace.


With sourcing capital becoming more difficult to secure, linked with potential reductions in revenue streams, an environment focused on cost reduction has been generated. The pressure to reduce capital expenditure (CAPEX) is one of the most critical issues faced by data center developers today. This is helping to finally drive innovation for data centers.


The key contributors which can reduce CAPEX and operational expenditure (OPEX) are typically modularity, scalability, flexibility, industrialization, cloud computing, containerization of mechanical and electrical solutions, climate control, expanding criteria for IT space, and supply chain management. All these factors come into play when planning a cost-effective approach to data center deployment. Every company that develops and operates data centers is attempting to embrace these features. However, businesses requiring new facilities usually do not explore all the strategies available. Generally, this is either due to lack of exposure to their availability or a perceived risk associated with changes to their existing paradigm. Emerging trends such as fabric computing further exacerbate the silo approach to strategy and design, where “what we know” is the best direction.

The four cooling system alternatives are:

This adaptation of an industrial cooling approach includes the following cooling technologies: air-to-air heat exchangers with direct expansion (Dx) refrigeration systems; indirect evaporation air-to-air heat exchangers with Dx assist; and direct evaporation and heat transfer wheel with Dx assist.

Reducing fan power.  Fan power is a hidden inefficiency in the data center, whether in the mechanical systems or the IT equipment.  HP discusses how it reduces fan power.

To obtain the maximum use of the environment, supply air temperature set points need to be set at the highest temperature possible and still remain within the warranty requirement range of the IT equipment. The next critical component is to control the temperature difference between the supply and return air streams to a minimum range of 25°F. This reduces the amount of air needed to cool the data center, thus reducing fan energy. The configuration of the data center in general must follow certain criteria in order to receive greater benefits available through the use of this concept, as follows:
• Server racks are configured in a hot aisle containment (HAC) configuration.
• There is no raised floor air distribution.
• The air handlers are distributed across a common header on the exterior of the building for even air distribution.
• Supply air diffusers are located in the exterior wall, connected to the distribution duct. These diffusers line up with the cold aisle rows.
• The room becomes a flooded cold aisle.
• The hot aisle is ducted to a plenum, normally created through the use of a drop ceiling. The hot air shall be returned via the drop ceiling plenum back to the air handlers.
• Server racks are thoroughly sealed to reduce the recirculation of waste heat back into the inlets of nearby servers.
• Server layout is such that the rows of racks do not exceed 18 feet in length.
The control for the air handlers shall maintain maximum temperature difference between the supply and return air distribution streams. The supply air temperature is controlled to a determined set point while the air amount is altered to maintain the desired temperature difference by controlling the recirculation rate in the servers.
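To see why a wider supply/return delta-T saves fan energy, here is a back-of-the-envelope sketch. It is not from the HP paper: it uses the common sea-level airflow rule of thumb (heat in BTU/hr ≈ 1.08 × CFM × delta-T in °F) and the fan affinity laws for a fixed duct system, applied to an illustrative 800 kW IT load (one module in HP's design):

```python
# Back-of-the-envelope: why a larger supply/return delta-T cuts fan energy.
# Not from the HP paper -- uses the common sea-level rule of thumb
# Q[BTU/hr] ~= 1.08 * CFM * delta_T[F] and the fan affinity laws (power ~ flow^3
# for the same duct system). The 800 kW module load is illustrative.

BTU_HR_PER_KW = 3412

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove it_load_kw at a given supply/return delta-T."""
    return it_load_kw * BTU_HR_PER_KW / (1.08 * delta_t_f)

module_kw = 800                         # one hypothetical FlexDC module at full load
cfm_15 = required_cfm(module_kw, 15)    # a narrower delta-T for comparison
cfm_25 = required_cfm(module_kw, 25)    # HP's minimum 25 F delta-T

print(f"Airflow at 15 F delta-T: {cfm_15:,.0f} CFM")
print(f"Airflow at 25 F delta-T: {cfm_25:,.0f} CFM ({1 - cfm_25 / cfm_15:.0%} less air)")

# Fan affinity laws: for the same system, fan power scales roughly with flow cubed.
fan_power_ratio = (cfm_25 / cfm_15) ** 3
print(f"Approximate fan power at 25 F vs 15 F delta-T: {fan_power_ratio:.0%}")
```

At a 25°F delta-T the air handlers move about 40 percent less air than at a 15°F delta-T, and because fan power falls roughly with the cube of flow in the same system, fan energy drops to around a fifth.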

Electrical distribution techniques are listed as well.

Traditional data centers have electrical distribution systems based on double conversion UPS with battery systems and standby generators. There are several UPS technologies offered within FlexDC, which expands the traditional options:
• Rotary UPS—94% to 95% energy efficient
• Flywheel UPS—95% energy efficient
• Delta Conversion UPS—97% energy efficient
• Double Conversion UPS—94.5% to 97% energy efficient
• Offline UPS—Low-voltage version for the 800 kW blocks, about 98% energy efficient
FlexDC not only specifies more efficient transformers as mandated by energy standards, it also uses best practices for energy efficiency. FlexDC receives power at medium voltage and transforms it directly to a server voltage of 415 V/240 V. This reduces losses through the power distribution unit (PDU) transformer and requires less electrical distribution equipment, thus saving energy as well as saving on construction costs. An additional benefit is a higher degree of availability because of fewer components between the utility and the server.
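The spread between those UPS efficiencies is worth putting in kWh. Here is a rough sketch, assuming a constant 3.2 MW critical load (the full butterfly building) and flat efficiency numbers, which is optimistic since real UPS efficiency varies with load fraction:

```python
# Illustrative annual UPS losses for the efficiency figures HP lists.
# Assumes a constant 3.2 MW critical load and a flat efficiency number; real UPS
# efficiency varies with load fraction, so treat these as rough orders of magnitude.

HOURS_PER_YEAR = 8760
critical_load_kw = 3200   # full butterfly building

ups_options = {
    "Rotary UPS (94%)":           0.94,
    "Flywheel UPS (95%)":         0.95,
    "Delta Conversion UPS (97%)": 0.97,
    "Offline UPS (~98%)":         0.98,
}

it_kwh = critical_load_kw * HOURS_PER_YEAR
for name, efficiency in ups_options.items():
    loss_kwh = it_kwh / efficiency - it_kwh   # energy burned in the UPS itself
    print(f"{name}: ~{loss_kwh / 1e6:.2f} million kWh lost per year")
```

At this scale the gap between a 94 percent and a 98 percent efficient UPS works out to more than a million kWh a year, which is why the UPS choice shows up in the OPEX comparison.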

And HP takes a modeling approach.

HP has developed a state-of-the-art energy evaluation program, which includes certified software programs and is staffed with trained engineers to perform a comprehensive review of the preliminary system selections made by the customer. This program provides valuable insight to the potential performance of the systems and is a valuable tool in the final system selection process. The following illustrations are typical outputs for the example site located in Charlotte, North Carolina. This location was chosen due to its very reliable utility infrastructure and its ability to attract mission critical type businesses. The illustrations compare a state-of-the-art designed data center using current approaches and HP FlexDC for the given location.

Figure 4: Shows state-of-the-art data center annual electricity consumption

Comparing two different scenarios.

Scenario A: Base case state-of-the-art brick-and-mortar data center.
A state-of-the-art legacy data center’s shell is typically built with concrete reinforced walls. All of the cooling and electrical systems are located in the same shell. Traditional legacy data center cooling systems entail the use of large central chiller plants and vast piping networks and pumps to deliver cooling to air handlers located in the IT spaces. Electrical distribution systems typically are dual-ended static UPS systems with good reliability but low efficiencies due to part-loading conditions. PUE for a traditional data center with tier ratings of III and above is between 1.5 and 1.9.


Scenario B: HP FlexDC
The reliability of the system configuration is equivalent to an Uptime Institute Tier III, distributed redundant. The total critical power available to the facility is 3.2 MW. The building is metal, using materials standard within the metal buildings industry. The electrical distribution system is a distributed redundant scheme based on a flywheel UPS system located in prefabricated self-contained housings. The standby generators are located on the exterior of the facility in prefabricated self-contained housing with belly tank fuel storage.
The mechanical cooling systems are prefabricated self-contained air handlers with air-to-air heat exchangers using Dx refrigerant cooling to assist during periods of the year when the local environment is not capable of providing the total cooling for the data center IT space.
The IT white space is a non-raised floor environment. The IT equipment racks are arranged in a hot aisle containment configuration. The hot return air is directed into the drop ceiling above and returned to the air handlers.
The following life-cycle cost analysis matrix quantifies the CAPEX and OPEX costs and the resultant PV dollars for the base case and the alternative scenario.

Which feeds this summary.

image

And an NPV cost savings of 37%.
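HP doesn't publish the discount rate or cash flows behind that number, so the sketch below only shows the mechanics of a 20-year net present value comparison. The CAPEX, OPEX, and 8% discount rate are hypothetical placeholders, not HP's inputs:

```python
# Mechanics of a 20-year NPV comparison like HP's life-cycle cost analysis.
# The CAPEX, OPEX, and discount rate below are hypothetical placeholders,
# NOT HP's numbers -- the white paper does not publish its inputs.

def npv_of_costs(capex: float, annual_opex: float, years: int = 20, rate: float = 0.08) -> float:
    """Present value of upfront CAPEX plus a flat annual OPEX stream."""
    return capex + sum(annual_opex / (1 + rate) ** year for year in range(1, years + 1))

traditional = npv_of_costs(capex=60e6, annual_opex=6.0e6)
flex_dc     = npv_of_costs(capex=30e6, annual_opex=4.5e6)   # ~50% lower CAPEX, lower OPEX

savings = 1 - flex_dc / traditional
print(f"Traditional 20-year NPV of costs: ${traditional / 1e6:.0f}M")
print(f"FlexDC 20-year NPV of costs:      ${flex_dc / 1e6:.0f}M ({savings:.0%} lower)")
```

With those made-up inputs the FlexDC case comes out roughly 38 percent lower in present-value terms; HP's 37 percent figure comes from its own life-cycle cost matrix.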

Besides sharing the Flexible Data Center design approach, HP has published a set of documents that anyone building their own data center can use.

Kfir, thanks for taking a step to share more information with the industry and show a better path to greening a data center economically, efficiently, and environmentally.


HP introduces Butterfly Flexible Data Center design, reducing CAPex by 50%

OK, my Microsoft past haunts me.  HP introduces a "butterfly" data center design and I think of the MSN butterfly.

HP's butterfly looks different than MSN's, and I doubt we'll see an HP data center staff in a butterfly outfit, but the building does look like a butterfly.

HP Flexible DC “butterfly” design

HP Flexible DC is based on a “butterfly” design featuring four prefabricated quadrants, or modules, that stem off a central administrative section. The offering uses industrial components to improve cost efficiencies as well as a streamlined building process with a variety of options for power and cooling distribution.

image

Joking aside, I was able to talk to Kfir Godrich, CTO of HP critical infrastructure.

“Clients, such as financial service providers, government entities, and cloud and colocation hosts, will find the scalable and modular nature of HP Flexible DC a compelling option,” said Kfir Godrich, chief technology officer, Technology Services, HP. “HP can help clients innovate the way they build and operate a greenfield data center for greater savings over its life span.”

I am writing this blog post before the official release, and I will update this blog with the press release link.

HP Flexible Data Center Reduces Clients’ Upfront Capital Investment Requirements by Half, Optimizes Resource Use

Design delivers flexibility, lowers carbon footprint

PALO ALTO, Calif., July 27, 2010

HP today introduced a new way for clients to cut capital investment requirements for the design and build of data centers in half while significantly decreasing their carbon footprint.(1)

The patent-pending HP Flexible Data Center (HP Flexible DC) offers a standardized, modular approach to designing and building data centers that allows clients to replace traditional data center designs with a flexible solution that can be expanded as needed while conserving resources.

Some facts that data center folks will care about are:

  • 3.2 MW is the overall capacity of the total butterfly building.
  • Each module is 800 kW and can be deployed partially, supporting 800, 1,600, 2,400, and 3,200 kW increments.
  • The central core is shared building support space for the four modules.
  • You can use multiple 3.2 MW deployments for a campus approach, as shown below.

image

  • PUE is in the 1.2 - 1.25 range (see the sketch after this list for what that means in annual energy).
  • The design is modular to support multiple power and cooling systems, using multiple vendors while maintaining a high degree of integration across the systems.
  • Total square footage for a 3.2 MW configuration is 25,000.
  • Removing complexity from the system increases availability, efficiency, and cost effectiveness.
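Since PUE is the headline efficiency number, here is a quick sketch of what the 1.2 - 1.25 range means at the building level. It assumes the full 3.2 MW critical load runs year-round (real utilization will be lower) and compares against the 1.5 - 1.9 range HP's white paper cites for traditional designs:

```python
# What the PUE difference means in annual energy for one 3.2 MW butterfly building.
# Assumes the full critical load runs year-round; real utilization will be lower.

HOURS_PER_YEAR = 8760
it_load_kw = 3200

def annual_facility_kwh(pue: float) -> float:
    """Total facility energy for a year at a given PUE."""
    return it_load_kw * HOURS_PER_YEAR * pue

flex_dc = annual_facility_kwh(1.25)   # top of HP's 1.2 - 1.25 range
legacy  = annual_facility_kwh(1.70)   # midpoint of the 1.5 - 1.9 range cited earlier

print(f"FlexDC:      {flex_dc / 1e6:.1f} million kWh/year")
print(f"Traditional: {legacy / 1e6:.1f} million kWh/year")
print(f"Difference:  {(legacy - flex_dc) / 1e6:.1f} million kWh/year")
```

At full load the PUE difference alone is worth on the order of 12 million kWh a year for one butterfly building.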

Here are images from the official presentation.

The one slide I would add if I were creating the presentation is "Why a new Data Center design?", where HP would explain the problems it sees customers having and then how Flexible DC creates a new approach to DC design.

image

Modularity of the system supports 800 kW increments.  BTW, Kfir said you could deploy a 400 kW configuration instead of 800 kW.

image

To support a low PUE, hot air is exhausted through the roof, although in the images HP provided you can't see the roof system, which led me to think HP has a patent in process for the roof.  Note that Yahoo patented its Chicken Coop design.  Kfir also made it clear the design, with its four cooling methods, can support deployments anywhere in the world, and that trade-offs can be made where water is an expensive resource.

image

Don't ask what Tier the design is.  It is designed for availability, energy and cost efficiency, not to meet a Tier standard.

image

I've been spending more time thinking about how you present data center issues to the CFO, and so has HP.

image

There were six things that impressed me:

  1. The number of topics Kfir and I could cover in 25 minutes.
  2. The quality of the presentation and information HP has for the Flexible Data Center solution.  They can use the same presentation for CxOs and data center geeks, although I would add a "Why change data center design?" slide.  On the other hand, criticizing decisions people have made in the past could upset some of the audience and make them defensive, so I can see why HP chose not to call out what is wrong with current data center design.
  3. A focus on the supply chain.  If you use HP's approach you could efficiently add data center capacity every year or more often.  Like a Just-In-Time manufacturing approach, this reduces data center building inventory, since you can add additional space in as little as 3 months.  The current approach of building for 5 years of data center needs turns into asking what you need in the next 3 - 6 months.
  4. This is going to get a lot of people thinking about how they approach data center capacity.  How many were trying to save 10% on data center construction, and HP says, hey, you can save 50%?
  5. Building in smaller increments allows management to see data center building costs on a regular basis.
  6. The Butterfly Flexible DC design is a good alternative for Cloud Computing.

image

As HP's Flexible DC hits the market it will be interesting to watch the media coverage and customer interest.  I've already talked to my friends to tell them HP's Butterfly Flexible Data Center is something they should look at.

Five years ago who would have thought HP would have Data Center PODs and Flexible Data Centers?

image
