
    Data Center Myth – Thermal/Temperature Shock

    Mike Manos has a post pointing out what he calls “data center junk science” and the data center thermal shock requirement. 

    Mike’s post got my curiosity up, and I spent time researching to build on it. This is my 956th post in less than two years, and people often assume I have a journalism background.  Well, fooled you: I am an Industrial Engineering and Operations Research graduate from Cal Berkeley.  So even though I write a lot, you are reading my notebook of things I discover and want to share with others. For those of you who don’t know what industrial engineers do:

    Industrial engineering is a branch of engineering concerned with the development, improvement, implementation and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material and process. It also deals with designing new prototypes to help save money and make the prototype better. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical and social sciences, together with the principles and methods of engineering analysis and design, to specify, predict and evaluate the results to be obtained from such systems. In lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.

    This background all helps me think of how to green the data center.

    And operations research helps me think about the technical methods and software to do this.

    Operations research is an interdisciplinary branch of applied mathematics that uses methods such as mathematical modeling, statistics, and algorithms to arrive at optimal or near-optimal solutions to complex problems. It is typically concerned with determining the maxima (of profit, assembly-line performance, crop yield, bandwidth, etc.) or minima (of loss, risk, etc.) of some objective function. Operations research helps management achieve its goals using scientific methods.

    Mike’s post got me thinking because one of my summer internships was at HP, where I worked as a reliability/quality engineer figuring out how to build better-quality HP products.  The team I worked on was an early innovator in thermal cycling and stressing components back in the early 1980s.

    Data Center Junk Science: Thermal Shock \ Cooling Shock

    October 1, 2009 by mmanos

    I recently performed an interesting exercise where I reviewed typical co-location/hosting/ data center contracts from a variety of firms around the world.    If you ever have a few long plane rides to take and would like an incredible amount of boring legalese documents to review, I still wouldn’t recommend it.  :)

    I did learn quite a bit from going through the exercise, but there was one condition that I came across more than a few times.   It is one of those things that I put into my personal category of Data Center Junk Science.   I have a bunch of these things filed away in my brain, but this one not only raises my stupidity meter from a technological perspective, it makes me wonder if those that require it have masochistic tendencies.

    I am of course referring to a clause for Data Center Thermal Shock and, as I discovered, its evil, lesser-known counterpart “Cooling” Shock.    For those of you who have not encountered this before, it’s a provision between hosting customer and hosting provider (most often required by the customer) that usually looks something like this:

    If the ambient temperature in the data center rises 3 degrees over the course of 10 (sometimes 12, sometimes 15) minutes, the hosting provider will need to remunerate (reimburse) the customer for thermal shock damages experienced by the computer and electronics equipment.  The damages range from flat-fee penalties to graduated penalties based on the value of the equipment.

    As Mike notes, there is also the issue of duration.

    Which brings up the next component, which is duration.   Whether you are speaking of 10-minute or 15-minute intervals, these are nice, long, leisurely periods of time which could hardly cause a “Shock” to equipment.   Also keep in mind the previous point, which is that the environment has not even violated the ASHRAE temperature range.   In addition, I would encourage people to actually read the allowed and tested temperatures the manufacturers recommend for server operation.   A 3-5 degree swing in temperature would rarely push a server into an operating temperature range that violates the range the server has been rated to work in or, worse, voids the warranty.

    Here is the military specification vendors typically use, MIL-STD-810G, to define temperature/thermal shock.

    METHOD 503.5

    Use the temperature shock test to determine if materiel can withstand sudden changes in the temperature of the surrounding atmosphere without experiencing physical damage or deterioration in performance. For the purpose of this document, "sudden changes" is defined as "an air temperature change greater than 10°C (18°F) within one minute."
    Normal environment.
    Use this method when the requirements documents specify the materiel is likely to be deployed where it may experience sudden changes of air temperature. This method is intended to evaluate the effects of sudden temperature changes on the outer surfaces of materiel, items mounted on the outer surfaces, or internal items situated near the external surfaces. This method is, essentially, a surface-level test. Typically, this addresses:
    The transfer of materiel between climate-controlled environment areas and extreme external ambient conditions or vice versa, e.g., between an air conditioned enclosure and desert high temperatures, or from a heated enclosure in the cold regions to outside cold temperatures.
    Ascent from a high temperature ground environment to high altitude via a high performance vehicle (hot to cold only).
    Air delivery/air drop at high altitude/low temperature from aircraft enclosures when only the external material (packaging or materiel surface) is to be tested.

    As Mike says, the surprising part is that the requirement for thermal shock is coming from technical people, most likely people with military backgrounds.

    Even more surprising to me was that these were typically folks on the technical side of the house more than the lawyers or business people.  I mean, these are the folks that should be more in tune with logic than, say, business or legal people who can get bogged down in the letter of the law or dogmatic adherence to how things have been done.  Right?  I guess not.

    I can’t imagine any business person or attorney thinking a thermal shock is a 3-degree change in 15 minutes.  If there were an attorney involved, they would go to the MIL-STD-810G definition of temperature shock: a change greater than 10°C (18°F) within one minute.

    So where does this myth come from?  Most likely there is a social-network effect of people who consider themselves smarter than others and have added thermal shock to the requirements.  One of the comments on Mike’s blog documents the possible social-network source.

    Dave Kelley, Liebert Precision Cooling

    The only place where something like this is “documented” in any way is in the ASHRAE Thermal Guidelines book. Since the group that wrote this book included all of the major server vendors, it must have been created with some type of justifiable reason. It states that the “maximum rate of temperature change is 5 degrees C (9 degrees F) per hour.”
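    To put these figures side by side, here is a small sketch (my own illustration, not from either post) that normalizes the three temperature-change rates discussed — the contract clause, the ASHRAE guideline, and the MIL-STD-810G “sudden change” definition — to degrees C per minute:

```python
# Illustrative comparison of the three rates discussed above,
# normalized to degrees C per minute. The numbers come from the
# contract clause, the ASHRAE Thermal Guidelines, and MIL-STD-810G;
# the helper function is just for this sketch.

def rate_c_per_min(delta_c: float, minutes: float) -> float:
    """Temperature change rate in degrees C per minute."""
    return delta_c / minutes

contract_clause = rate_c_per_min(3.0, 10.0)   # 3 C over 10 minutes
ashrae_limit    = rate_c_per_min(5.0, 60.0)   # 5 C per hour
milstd_shock    = rate_c_per_min(10.0, 1.0)   # >10 C within one minute

print(f"Contract clause trigger: {contract_clause:.3f} C/min")
print(f"ASHRAE max rate:         {ashrae_limit:.3f} C/min")
print(f"MIL-STD-810G shock:      {milstd_shock:.3f} C/min")
print(f"MIL-STD shock is {milstd_shock / contract_clause:.0f}x the contract trigger")
```

    On these numbers, the MIL-STD-810G “sudden change” threshold is roughly 33 times the rate the contract clause would penalize, which underlines Mike’s point that a 3-degree drift over 10–15 minutes is nothing like a shock.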

    And as Mike closes, this has unintended consequences.

    But this brings up another important point.  Many facilities might experience a chiller failure, or a CRAH failure, or some other event which might temporarily have this effect within the facility.    Let’s say it happens twice in one year that you potentially trigger this event for the whole or a portion of your facility (you’re probably not doing preventative maintenance – bad you!).  So the contract language around thermal shock now claims monetary damages.   Based on what?   How are these sums defined?   The contracts I read through had some wild oscillations on damages, with different means of calculation, and a whole lot more.   So what is the basis of this damage assessment?   Again, there are no studies that say each event takes off .005 minutes of a server’s overall life, or anything like that.   So the cost calculations are completely arbitrary and negotiated between provider and customer.

    This is where the true foolishness then comes in.   The providers know that these events, while rare, might happen occasionally.   While the event may be within all other service level agreements, they still might have to award damages.   So what might they do in response?   They increase the costs, of course, to cover their risk.   It might be in the form of cost per kW, or cost per square foot, and it might even be pretty small or minimal compared to your overall costs.  But in the end, the customer ends up paying more for something that might not happen, and if it does, there is no concrete proof it has any real impact on the life of the server or equipment; it really only salves the whim of someone who failed to do their homework.  If it never happens, the hosting provider is happy to take the additional money.

    Temperature/thermal shock is a term that doesn’t apply to data centers.  Hopefully you’ll now know to call temperature/thermal shock requirements in data center operations a myth.

    Thanks Mike for taking the time to write on this.



    Dark Side of Smart Grid – Privacy and Exponential Data Growth

    MSNBC has a post on an impact of power meters most don’t talk about: the fact that power meters provide data on what you do in your house, and that all this data is going to mean petabytes of storage.

    What will talking power meters say about you?

    Posted: Friday, October 9 2009 at 05:00 am CT by Bob Sullivan

    Would you sign up for a discount with your power company in exchange for surrendering control of your thermostat? What if it means that, one day, your auto insurance company will know that you regularly arrive home on weekends at 2:15 a.m., just after the bars close?

    Welcome to the complex world of the Smart Grid, which may very well pit environmental concerns against thorny privacy issues. If you think such debates are purely philosophical, you’re behind the times.

    The gov’t is bringing up privacy concerns.

    Pepco’s discount plan is among the first signs that the futuristic “Smart Grid” has already arrived. Up to three-fourths of the homes in the United States are expected to be placed on the “Smart Grid” in the next decade, collecting and storing data on the habits of their residents by the petabyte. And while there’s no reason to believe Pepco or other utilities will share the data with outside firms, some experts are already asking the question: Will saving the planet mean inviting Big Brother into the home? Or at least, as Commerce Secretary Gary Locke recently warned, will privacy concerns be the “Achilles’ heel” of the Smart Grid?

    The dark side of what could be is discussed.

    Dark side of a bright idea
    But others see a darker side. Utility companies, by gathering hundreds of billions of data points about us, could reconstruct much of our daily lives -- when we wake up, when we go home, when we go on vacation, perhaps even when we draw a hot bath. They might sell this information to marketing companies -- perhaps a travel agency will send brochures right when the family vacation is about to arrive. Law enforcement officials might use this information against us ("Where were you last night? Home watching TV? That's not what the power company says … ”). Divorce lawyers could subpoena the data ("You say you're a good parent, but your children are forced to sleep in 61-degree rooms. For shame ..."). A credit bureau or insurance company could penalize you because your energy use patterns are similar to those of other troublesome consumers. Or criminals could spy the data, then plan home burglaries with fine-tuned accuracy.

    How big is the data growth?

    According to a recent discussion by experts at Smart Grid Security, here’s a quick explanation of the sudden explosion in data. In the United Kingdom, for example, 44 million homes had been creating 88 million data entries per year. Under a new two-way, smart system, new meters would create 32 billion data entries. Pacific Gas & Electric of California says it plans to collect 170 megabytes of data per smart meter, per year. And if about 100 million meters are installed as expected in the United States by 2019, 100 petabytes (a million gigabytes) of data could be generated during the next 10 years.
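    The article’s 100-petabyte figure checks out as a back-of-envelope calculation. Here is my own arithmetic sketch (the linear rollout of 10 million new meters per year is an assumption I am making, not something the article states):

```python
# Back-of-envelope check of the claim that ~100 million smart meters
# at 170 MB per meter per year could generate on the order of 100 PB
# over a decade. Assumes (my assumption) a linear rollout of 10 million
# new meters per year, reaching 100 million meters in year 10.

MB_PER_METER_PER_YEAR = 170
NEW_METERS_PER_YEAR = 10_000_000
YEARS = 10
MB_PER_PB = 1_000_000_000  # 10^9 MB per petabyte (decimal units)

# Year 1 has 10M active meters, year 2 has 20M, ... year 10 has 100M.
total_mb = sum(NEW_METERS_PER_YEAR * year * MB_PER_METER_PER_YEAR
               for year in range(1, YEARS + 1))

print(f"Cumulative data over {YEARS} years: {total_mb / MB_PER_PB:.1f} PB")
```

    Under that ramp the total comes to roughly 93 PB, in the same ballpark as the article’s 100-petabyte estimate.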

    And you can expect storage vendors and enterprise consultants to support the mindset to leverage the data.

    "Once a company monetizes data it collects, even if the amount is small, it is very reluctant to give it up," he said. Many companies he audits have robust privacy policies but end up using information in ways that frustrate or cost consumers, he said. "They talk a good game, but I'm sure (utility companies) will find ways to use the data, and not necessarily to benefit people but to harm people."

    This complexity of privacy and data storage is part of why I have not focused much effort on the consumer smart meter market.  Applying the concepts of smart metering in data centers and commercial buildings has the potential to be adopted much more quickly, and it supports energy efficiency and the green data center.



    Blogs vs. Twitter – Big vs. Small

    Microsoft Research has an article discussing efforts by its researchers to understand blogs vs. Twitter.  At first I wasn’t going to blog this article, but it brings up interesting points to consider in data centers about big vs. small.  Which is a good question to ask: is it best to be big or small in data centers?

    Researchers Ride the Twitter Wave

    By Rob Knies

    August 6, 2009 2:00 PM PT

    He rocks in the treetops all the day long,

    Hoppin’ and a-boppin’ and a-singin’ his song.

    All the little birds on Jaybird Street

    Love to hear the robin go tweet tweet tweet …

    * * *

    When L.A. R&B singer Bobby Day took Jimmie Thomas’ lyrics to the top of the charts in the summer of 1958—a tune memorably revived in 1972 by a 13-year-old Michael Jackson—there was no way to foresee how those words would resonate a half-century later.

    But they certainly do. Twitter, the wildly popular micro-blogging service, has become an Internet sensation, with millions flocking to the site each month to post a jittery stream of brief status updates. Whether it’s Ashton Kutcher or your cousin Sue, these days, it seems, everybody wants to emulate Rockin’ Robin.

    There will be few people arguing for small data centers, as the whole supply-chain system is set up to maximize profits by building bigger, more complex data centers.  What is the right size for a data center?  The problem is that data center construction teams think from their construction-and-provisioning view.  The whole social-network effect of something like Twitter is beyond the data center construction team.

    “Blogging has long been studied as a medium of information diffusion, and micro-blogging has started to be used for marketing. Analyzing the differences and similarities in terms of information-diffusion structure and efficiency can yield valuable knowledge to the proper use of each.”

    Aren’t data centers built for information diffusion?

    What types of data centers are ideal for information diffusion?  I bet Facebook and Twitter can look at their data centers as social networks instead of buildings.



    Disney & Verizon join Green Data Center Movement

    NetworkWorld writes about Disney and Verizon making green data center announcements.

    Disney, Verizon go green in the data center

    Disney, Verizon lay out IT energy efficiency plans at Green Grid event

    By Jon Brodkin , Network World , 10/06/2009

    Energy efficiency in the data center is a top priority for Disney and Verizon, technology executives from the companies said last week. But the industry is still in the early stages of understanding how best to measure effectiveness, they said.

    Disney has a companywide goal to reduce electricity consumption by 10% between 2006 and 2013, and the data center has to play a big role in achieving that objective, says Denis Weber, director of IT critical facilities infrastructure for the Walt Disney Co.

    Five tools to prevent energy waste in the data center

    For Disney, energy efficiency is being achieved through a series of small improvements, Weber said in an interview with Network World.

    "Some of it just comes down to cleaning the facility up," Weber says. "And I don't mean with a dust pail and so on and a broom, but cleaning the data center up from obstructions and ensuring that every one of our floor tiles is sealed properly for air flow. Blanking panels -- not only that we have them but that they're in the right spot. Variable speed fans and motors on our CRAC units, increasing temperature settings across the board. These are all things that are not unique to Disney. But we have done it and that's where we've started to make progress."

    The Green Grid was able to leverage its NYSE closing-bell ringing to pull in Disney and Verizon.

    Disney and Verizon officials discussed their energy efficiency programs at the New York Stock Exchange last week during an event hosted by the Green Grid industry consortium.



    VMware opens its Green Data Center

    The Seattle Times has an article on VMware’s new Wenatchee green data center.

    VMWare opens a green data center in Wenatchee

    Posted by Sharon Chan


    While a tax law prompted Microsoft to move its cloud business Azure out of Washington state, the state just attracted a software company, VMWare, to build its data center in Wenatchee.

    VMWare, a software company and Microsoft competitor in Palo Alto, Calif., opened a 61,000-square-foot data center in Wenatchee in January to consolidate several smaller labs and data centers the company was using to run and test its virtualization software. The company's chief executive officer, Paul Maritz, is a former top executive at Microsoft.
