Energy savings claims for liquid cooling, no transparency behind the numbers

Greenbang has an article on liquid cooling.

Immersing servers in liquid can be a good thing

By Greenbang on Nov 17, 2009 in Data centres, Featured

Iceotope Heat Exchanger

If you’ve ever accidentally spilled tea on your laptop, you understand why liquids seem like one of the things you want to keep out of data centres, not in them. But one firm says surrounding server components with water and liquid coolant is an ideal way to save energy and money.

The UK-based Iceotope launched its new liquid-cooled server technology today at the Supercomputing 2009 conference in Portland, Oregon.

“We have spent 18 months developing this technology in stealth mode, with input from a number of interested customers,” said Dan Chester, CEO of Iceotope. “We believe that we will see a huge growth in the use of liquid-cooled servers as people see the ease with which these systems can be deployed.”

Iceotope claims its system is the first to use modular “liquid immersion” of server components and can reduce data centre cooling costs by 93 per cent. That’s no small feat when you consider a data centre with around 1,000 servers can spend more than $260,000 a year on air cooling systems.

The claim of 95% savings is mentioned on their web site:

Because of the greater thermal efficiency of this “end to end liquid” cooling path, the building water circuit can be run much warmer – potentially eliminating the need for chiller plant and enabling year-round free cooling. With this approach, the 3 year cooling cost of a 1 megawatt data centre could be reduced from around $788,400 to around $52,560; a 93% ($735,840) reduction compared to air cooling. By enabling servers to be packed more tightly without compromising the cooling efficiency, the same approach could reduce the space required for the servers by 84%.

I am amazed companies make claims like the above without any transparency on how they came up with these numbers.
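For what it’s worth, here is one set of assumptions, mine and not Iceotope’s, that happens to reproduce the quoted figures: a 1 MW IT load running around the clock for 3 years at $0.10 per kWh, with cooling energy at 30 per cent of IT energy for air cooling and 2 per cent for the end-to-end liquid path. None of these assumptions are stated anywhere in the claim.

```python
# Back-of-the-envelope check on the quoted savings. Every assumption below
# is mine, chosen for illustration; Iceotope does not publish them.
IT_LOAD_KW = 1_000              # 1 megawatt of IT load
HOURS = 3 * 365 * 24            # 3 years of continuous operation
PRICE_PER_KWH = 0.10            # assumed electricity price, $/kWh

AIR_COOLING_OVERHEAD = 0.30     # cooling energy as a fraction of IT energy (air)
LIQUID_COOLING_OVERHEAD = 0.02  # same fraction for the end-to-end liquid path

it_energy_cost = IT_LOAD_KW * HOURS * PRICE_PER_KWH             # $2,628,000 over 3 years

air_cooling_cost = it_energy_cost * AIR_COOLING_OVERHEAD        # $788,400
liquid_cooling_cost = it_energy_cost * LIQUID_COOLING_OVERHEAD  # $52,560

savings = air_cooling_cost - liquid_cooling_cost                # $735,840
print(f"Savings: ${savings:,.0f} ({savings / air_cooling_cost:.0%})")  # ~93%
```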

Read more

Network Computing India cover story “On a High Energy Curve”

Network Computing India has an article specifically on efficiency in the data center.

Cover Story


On a High Efficiency Curve

Increasing power, cooling and real estate costs are pushing CIOs to squeeze out more efficiency from their data centers

By Varun Aggarwal

Increasing awareness and computerization is propelling the growth of data centers in the country. Gartner predicts that the total data center capacity in India will reach 5.1 million square feet by 2012, growing at a CAGR of 31 percent. This is partly fueled by the fact that India is expected to be the data center hub for markets such as the Middle East and South East Asia. There are also instances of European customers opting to host their data centers in India.

Green data centers are also discussed.

Painting the Data Center Green
Green is the new mantra, and a host of organizations in India are exploring every possibility of saving on energy costs. Says Sanjeev Gupta, Service Product Line Leader, Site & Facilities Services, IBM India/South Asia, “As hardware purchases go up and organizations deploy high-density computing and network storage for mission-critical applications, there is an immediate impact on energy consumption for IT resources. This further impacts the need for implementing environments that ensure high performance levels and longevity of the server and storage environment, leading to the demand for ‘Green Data Centers’.”
In a tropical country like India, companies mainly rely on air conditioning units to keep servers at the right temperature. The more powerful the machine, the more cool air is required to keep the machine from overheating.

To meet these challenges, data centers have undergone changes in design to accommodate rapidly changing server technologies over the past 5 to 7 years. Servers that used about 150 W of power and 4 to 6 RU of physical space have been replaced with servers using 3 kVA in 8 RU of space. This has necessitated radical approaches for supplying power and cooling to such high-density racks.
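To put that density shift in rough numbers (my assumptions: about 5 RU for the older servers, and treating 3 kVA as roughly 3 kW), the power per rack unit grows by an order of magnitude:

```python
# Rough back-of-the-envelope on the density shift described above.
# Assumes ~5 RU for the older server and treats 3 kVA as roughly 3 kW.
old_power_w, old_space_ru = 150, 5
new_power_w, new_space_ru = 3_000, 8

old_density = old_power_w / old_space_ru   # ~30 W per RU
new_density = new_power_w / new_space_ru   # ~375 W per RU
print(f"Density increase: ~{new_density / old_density:.0f}x")  # roughly 12x
```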

More and more I am seeing green discussed as a benefit of cloud computing.

The impact of cloud computing
The latest buzzword in the industry today is cloud computing. This is essentially because it has the potential to completely revamp the way an organization works. According to IDC reports, cloud computing is reshaping the IT marketplace, creating new opportunities for suppliers and catalyzing changes in traditional IT offerings. This will have a tremendous impact on the way data centers are built today, and in the future.
Analysts believe that with the advent of cloud computing, efficiency levels in data centers will go up significantly.

Read more

Securing a Small Nuclear Reactor – bury it in a missile silo or bunker

I was talking to an entrepreneur at the Santa Fe Institute’s Business Network event after a presentation by Stewart Brand on nuclear power.  We discussed the idea of micro nuclear reactors, and he said they will not happen because of the security requirements for a small nuclear plant versus a large one, and the danger of terrorist attacks.

With all the talk of data centers in bunkers and missile silos, how about burying a small nuclear reactor in a missile silo? It seems pretty secure, and it is another way to recycle and re-use.

Here is a Data Center Knowledge post from 2007 where a missile silo was being sold as a data bunker.

Missile Bunker Listed on eBay, Again

September 27th, 2007 : Rich Miller

An abandoned missile base in Washington State is back in the news. The former Titan missile silo at Larson Air Force Base in central Washington, which for many years was marketed as a potential “data bunker,” has been featured this week on Boing Boing and the BBC. The news: the 57-acre site is for sale, and is actually listed on eBay for $1.5 million.

Here is a short video of some of Stewart’s ideas and how recycled Russian nukes are being used in US nuclear reactors.

Read more

Evolution of Data Center – Where is the battle for survival?

Thanks to a connection with Eleanor Wynn at Intel Developer Forum discussing social networking, I was invited as a business guest to the Santa Fe Institute Business Network. The theme of this year’s event was evolution, and I spent two intense days and nights with a bunch of PhDs and business people who think about complex systems.

An Introduction

The Santa Fe Institute is a private, not-for-profit, independent research and education center founded in 1984, for multidisciplinary collaborations in the physical, biological, computational, and social sciences. Understanding of complex adaptive systems is critical to addressing key environmental, technological, biological, economic, and political challenges.

Renowned scientists and researchers come to Santa Fe Institute from universities, government agencies, research institutes, and private industry to collaborate in attempts to uncover the mechanisms that underlie the deep simplicity present in our complex world.

I have some ideas to write on the evolution of the data center.

Survival of the fittest is a common term from Darwin and evolutionary theory, and it can be applied to an artificial system like data centers and IT.  Within an organization there is competition for resources and budget.  Problems occur when that competition becomes more about internal resource battles than external ones.

Organizations that fight mostly internal battles over limited budget and resources are fighting for survival within the organization. Organizations that focus on beating the competition for users and customers with IT services build models of how to work together to win the scarce resources of customers and their money. Whether the fight for survival is against external competitors or internal ones identifies who actually models how effectively their IT organization supports the business.

A clear focus on the impact to the business and customers changes the evolution of IT.  If you are big enough you can throw enough money at IT to get the business benefits, but how much of the budget goes to feed the internal battles?  Those internal battles grow disconnected from business growth, which could explain why IT budget growth is seen as a problem and why the answer is usually to cap the budgets.  Capping budgets can limit the growth of the business and frustrate business units, but with limited resources in these times it is standard practice.

And who wins?  The companies who can allocate IT resources to win more customers.

It’s too bad we can’t have a PUE (power usage effectiveness) for IT systems: total IT spend divided by the value of the IT services delivered to users.  I would bet most companies have an IT PUE over 2.0.

And the leaders in IT PUE are Google and Amazon, because a higher percentage of their battles are fought for end users rather than internally.
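As a rough sketch of the ratio I have in mind (all figures below are made up purely for illustration):

```python
# "IT PUE": total IT spend divided by the value of the IT services
# actually delivered to users. All figures are made up for illustration.
total_it_spend = 10_000_000       # annual IT budget, $
valued_it_services = 4_500_000    # value of services users actually get, $

it_pue = total_it_spend / valued_it_services
print(f"IT PUE: {it_pue:.2f}")    # above 2.0: less than half the spend delivers valued services
```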

Read more

Earthquake Forecasting Tool for Data Center risk management

At the Santa Fe Institute Business Network I met John B. Rundle, UC Davis, Director of the Center for Computational Science and Engineering, who pointed me to the OpenHazards web site, which provides earthquake forecasting and hazard analysis.  Just enter a location, radius, and time horizon, and you get the probability of an earthquake greater than magnitude 5.0.


Here is background on the Open Hazards Group

The Open Hazards Group
Open Hazards is a group of scientists, technologists, and business people dedicated to the proposition that, through advances in forecasting and sensor technology, as well as an open, web-based approach to public information availability and sharing, we can enable a more sustainable human society in the face of severe, recurring natural disasters.
The objective of this web site is to inform and educate the public worldwide.  We provide a free, open, and independent assessment of hazard and risk due to major earthquakes, using a self-consistent, global, validated methodology. The information displayed on our web site is based on the best available science and technology as determined by the professional, peer-reviewed literature, as well as our own judgments, informed by many years of professional practice at the highest levels of academia and government. Our forecasts and risk estimates allow members of the public world-wide to understand and address, for the first time, their space- and time-dependent risk from major damaging earthquakes.

Being open, the site has an XML API that allows web sites to interact with OpenHazards.

GetEarthquakeProbability API

For a specified location, the GetEarthquakeProbability API returns:

  1. The latitude and longitude of the location.

  2. The current expected rate of earthquakes of specified magnitude over a specified time window.

  3. The current expected probability of experiencing at least one such earthquake.

The GetEarthquakeProbability Web Service is located at:
http://api.openhazards.com/GetEarthquakeProbability.xml.
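Here is a minimal sketch of calling that web service from Python. The endpoint URL is the one above; the query parameter names (lat, lon, radius, mag, window) are my guesses rather than documented values, so check the API documentation before relying on them.

```python
# Minimal sketch of calling the GetEarthquakeProbability web service.
# The endpoint URL comes from the post; the parameter names below are
# assumptions, not documented values.
import xml.etree.ElementTree as ET

import requests

URL = "http://api.openhazards.com/GetEarthquakeProbability.xml"

params = {
    "lat": 35.69,     # latitude of the location
    "lon": -105.94,   # longitude of the location
    "radius": 100,    # search radius in km (assumed parameter name)
    "mag": 5.0,       # minimum magnitude of interest (assumed parameter name)
    "window": 365,    # time window in days (assumed parameter name)
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

# Parse the XML reply; element names depend on the actual schema.
root = ET.fromstring(response.text)
print(ET.tostring(root, encoding="unicode"))
```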

How do you validate OpenHazards results?

How does OH validate its forecasts?

Open Hazards validates its forecasts using the same types of statistical testing that are used in the weather/climate/financial forecasting communities.  These tests are used to determine resolution, the ability of a forecast to discriminate between alternative outcomes; reliability, whether the predicted frequency of events matches the observed frequency of events; and sharpness, whether events tend to occur at high forecast probabilities, and no events tend to  occur at low forecast probabilities, in contrast to methods in which events tend to occur near average values of probability.

Why did OH choose this approach to validate its forecasts?

Open Hazards feels that it is best to use testing procedures that have become standardized by extensive use in other fields, rather than inventing new statistical tests whose uses and properties are not well understood.

Read more