Christian Belady's Article on Annual Amortized Costs in the Data Center for a 1U server

I've seen this graphic many times in presentations, and I wanted more background on it.

[Graphic: annual amortized costs in the data center for a 1U server]

The original article is here, with the details behind Christian's calculations.

In a recent article, the 3-year Energy Cost to Acquisition Cost ratio (EAC) [3] was introduced as a metric for understanding the cost of energy relative to the cost of the server. Today, for 1U servers this ratio is approaching unity, which comes as a surprise to most data center managers. To convince the reader, the energy cost of the server can be determined with a simple calculation:

3-yr Energy Use Cost = 3 yrs x (8,760 hrs/yr) x ($0.10/kWhr) x (Server Power in kW) x PUE   (1)

where PUE is the Power Usage Effectiveness [3], or the Data Center Electrical Load divided by the IT Electrical Load. For a well-managed data center this value is usually about 2.0 (or less), which implies that for every watt of server power, an additional watt is consumed by the chillers, UPSs, air handlers, pumps, etc. Indications are that for some data centers this value can be as high as 3.0 [4], and in some cases higher. This variation is usually due entirely to how well the cooling environment of the data center is designed and has a direct relationship to the energy cost [5].
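To make equation (1) concrete, here is a minimal Python sketch; the function name and the default $0.10/kWh rate are my choices for illustration, not part of Belady's article:

```python
def three_year_energy_cost(server_power_kw, pue, rate_per_kwh=0.10, years=3):
    """Energy use cost per equation (1):
    years x 8,760 hrs/yr x $/kWh x server power in kW x PUE."""
    hours_per_year = 8760
    return years * hours_per_year * rate_per_kwh * server_power_kw * pue

# Example: a 500 W (0.5 kW) 1U server in a PUE 2.0 data center
print(three_year_energy_cost(0.5, 2.0))  # 2628.0
```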

Using equation (1) for a 1U server (which, when fully configured, costs about $4,000 and consumes about 500 W) and a PUE of 2.0 yields an energy cost of $2,628. This is almost as much as the server itself, but the reality is that in many cases the cost is much higher. In Japan, energy costs twice as much, so this number would double. To make matters worse, in data centers where the cooling design is poor (a PUE of 3.0, for example), the cost of energy would be 50% higher.

This means that the energy cost would be $3,942 in the U.S. and $7,884 in Japan. Clearly, there can be huge savings in this energy cost by focusing on optimizing the cooling in the data center as shown in the articles identified earlier [3,5].
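The three figures above fall straight out of equation (1); a small sketch reproduces them (the scenario labels and the doubled Japanese rate of $0.20/kWh are assumptions taken from the text):

```python
HOURS_PER_YEAR = 8760
YEARS = 3
SERVER_KW = 0.5  # 500 W 1U server

# (rate in $/kWh, PUE) for each scenario described in the text;
# the Japanese rate of $0.20/kWh simply doubles the U.S. rate.
scenarios = {
    "U.S., PUE 2.0": (0.10, 2.0),
    "U.S., PUE 3.0": (0.10, 3.0),
    "Japan, PUE 3.0": (0.20, 3.0),
}

for label, (rate, pue) in scenarios.items():
    cost = YEARS * HOURS_PER_YEAR * rate * SERVER_KW * pue
    print(f"{label}: ${cost:,.0f}")
# U.S., PUE 2.0: $2,628
# U.S., PUE 3.0: $3,942
# Japan, PUE 3.0: $7,884
```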

Unfortunately, energy usage is not the only cost driver that scales with power. In 2005, fundamental research [6] was published showing that the infrastructure cost is a big portion of the TCO; it quantified the real cost drivers in the data center, which included the amortized cost of the power and cooling infrastructure. This research shows that a fundamental problem is the over-provisioning of cooling, driven by poor cooling design and a lack of understanding of the environment. In addition, The Uptime Institute has introduced a simplified way of estimating the cost of data center infrastructure [7] based on Tier ratings. For brevity, only Tier IV data centers (with dual redundant power throughout) will be examined, since this is the recommended approach for mission-critical operations. The Uptime Institute's Infrastructure Cost (IC) equation for a Tier IV data center is as follows:

IC = (Total IT Power in kW of UPS output) x ($22,000/kW) + (Allocated raised floor for IT in m2) x ($2,370/m2)   (2)
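As a rough sketch, equation (2) translates into a few lines of Python; the function name, parameter names, and the separation of the power and floor-area terms are my reading of the formula rather than anything published by The Uptime Institute:

```python
def tier_iv_infrastructure_cost(it_power_kw, raised_floor_m2,
                                cost_per_kw=22_000, cost_per_m2=2_370):
    """Infrastructure Cost (IC) per equation (2) for a Tier IV data center:
    $22,000 per kW of UPS output for IT plus $2,370 per m2 of raised floor."""
    return it_power_kw * cost_per_kw + raised_floor_m2 * cost_per_m2

# 500 W server, ignoring the floor space it occupies (as done later in the text)
print(tier_iv_infrastructure_cost(0.5, 0.0))  # 11000.0
```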

While, admittedly, the authors state that there is a large error band around this equation, it is very useful for capturing the magnitude of the infrastructure cost. In fact, it is this author's contention that the equation could be fine-tuned for greater accuracy using PUE, because poor cooling means that more infrastructure will be needed.

However, that discussion is beyond the scope of this paper. Again, looking at the 500 watts of power consumed by the 1U server, using equation (2) and ignoring the IT space the server occupies, the cost of the infrastructure to support that server would be an enormous $11,000. In reality, this cost would be amortized over 10 to 15 years, so the real annual cost of the infrastructure is $1,100 per year. Over the 3-year life of the server, this equates to $3,300, again close to the cost of the server. Note that there is also an adjustment in the cost as a result of the space occupied by the server, but its calculation is beyond the scope of this discussion.
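The amortization arithmetic in the paragraph above can be checked in a few lines; the 10-year amortization period is an assumption (the low end of the 10-to-15-year range quoted) chosen to match the $1,100-per-year figure:

```python
it_power_kw = 0.5                              # 500 W 1U server
infrastructure_cost = it_power_kw * 22_000     # floor-space term ignored -> $11,000
amortization_years = 10                        # low end of the 10-15 year range
annual_cost = infrastructure_cost / amortization_years   # $1,100 per year
server_life_years = 3
cost_over_server_life = annual_cost * server_life_years  # $3,300 over 3 years

print(infrastructure_cost, annual_cost, cost_over_server_life)
# 11000.0 1100.0 3300.0
```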