    Securing a Small Nuclear Reactor – bury it in a missile silo or bunker

    I was talking to an entrepreneur at the Santa Fe Institute’s Business Network event after a presentation by Stewart Brand on nuclear power.  We discussed the idea of micro nuclear reactors, and he said it will not happen because of the security requirements for a small nuclear plant vs. a large one, and the danger of terrorist attacks.

    With all the talk of data centers in bunkers and missile silos, how about burying a small nuclear reactor in a missile silo?  Seems pretty secure, and it is another way to recycle and re-use.

    Here is a DataCenterKnowledge post from 2007 where a missile silo was being sold as a data bunker.

    Missile Bunker Listed on eBay, Again

    September 27th, 2007 : Rich Miller

    An abandoned missile base in Washington State is back in the news. The former Titan missile silo at Larsen Air Force base in central Washington, which for many years was marketed as a potential “data bunker,” has been featured this week on Boing Boing and the BBC. The news: the 57-acre site is for sale, and is actually listed on eBay for $1.5 million.

    Here is a short video of some of Stewart’s ideas and how recycled Russian nukes are being used in US Nuclear reactors.

    Click to read more ...


    Evolution of Data Center – Where is the battle for survival?

    Thanks to a connection with Eleanor Wynn at Intel Developer Forum discussing social networking, I was invited as a business guest to the Santa Fe Institute Business Network. The theme of this year’s event was evolution.  After two intense days and nights hanging around a bunch of PhDs and business people who think about complex systems, I came away with plenty of ideas.

    An Introduction

    The Santa Fe Institute is a private, not-for-profit, independent research and education center founded in 1984 for multidisciplinary collaborations in the physical, biological, computational, and social sciences. Understanding complex adaptive systems is critical to addressing key environmental, technological, biological, economic, and political challenges.

    Renowned scientists and researchers come to Santa Fe Institute from universities, government agencies, research institutes, and private industry to collaborate in attempts to uncover the mechanisms that underlie the deep simplicity present in our complex world.

    I have some ideas to write on the evolution of the data center.

    Survival of the fittest is a common term in evolution and Darwin’s work, and it can be applied to an artificial system like data centers and IT.  Within an organization there is competition for resources and budget.  Problems occur when the competition becomes more about internal battles than external ones.

    Organizations that fight mostly internal battles for limited budget and resources are fighting for survival within the organization. Organizations that focus on beating external competitors for users and customers with IT services create models of how to work together to win the scarce resources of customers and their money.  Whether the competition for survival is with external rivals or internal ones identifies which organizations measure how effectively their IT efforts support the business.

    A clear focus on the impact to the business and its customers changes the evolution of IT.  If you are big enough you can throw enough money at IT to get the business benefits, but how much of the budget goes to feeding the internal battles?  Those internal battles grow disconnected from business growth, which could explain why IT budget growth is seen as a problem and why the standard answer is to limit the budgets.  This can limit the growth of the business and frustrate business units, but with limited resources in these times it is standard practice.

    And who wins?  The companies who can allocate IT resources to win more customers.

    It’s too bad we can’t have a PUE (power usage effectiveness) for IT systems: total IT spend divided by the IT services valued by users.  I would bet most companies have an IT PUE over 2.0.
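    The "IT PUE" analogy above is simple arithmetic, and a quick sketch makes the idea concrete.  All dollar figures below are hypothetical, purely for illustration:

```python
# Sketch of the "IT PUE" analogy: total IT spend divided by the portion of
# spend that delivers valued IT services to users. Figures are hypothetical.

def it_pue(total_it_spend: float, valued_service_spend: float) -> float:
    """Return total IT spend divided by spend on valued user-facing services."""
    return total_it_spend / valued_service_spend

# Example: $50M total IT budget, $20M of which reaches users as valued services.
ratio = it_pue(50_000_000, 20_000_000)
print(round(ratio, 2))  # 2.5 -- above the 2.0 guessed for most companies
```

    Just as with facility PUE, a ratio near 1.0 would mean nearly every dollar feeds user-facing services rather than internal overhead.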

    And the leaders in IT PUE would be Google and Amazon, because a higher percentage of their battles are fought for end users rather than internally.

    Click to read more ...


    Earthquake Forecasting Tool for Data Center risk management

    At the Santa Fe Institute Business Network I met John B. Rundle, UC Davis, Director of the Center for Computational Science and Engineering, who pointed me to the OpenHazards web site that provides earthquake forecasting and hazard analysis.  Just enter a location, radius, and time horizon, and it is easy to get the probability of a magnitude 5.0 or greater earthquake.


    Here is background on the Open Hazards Group

    The Open Hazards Group
    Open Hazards is a group of scientists, technologists, and business people dedicated to the proposition that, through advances in forecasting and sensor technology, as well as an open, web-based approach to public information availability and sharing, we can enable a more sustainable human society in the face of severe, recurring natural disasters.
    The objective of this web site is to inform and educate the public worldwide.  We provide a free, open, and independent assessment of hazard and risk due to major earthquakes, using a self-consistent, global, validated methodology. The information displayed on our web site is based on the best available science and technology as determined by the professional, peer-reviewed literature, as well as our own judgments, informed by many years of professional practice at the highest levels of academia and government. Our forecasts and risk estimates allow members of the public world-wide to understand and address, for the first time, their space- and time-dependent risk from major damaging earthquakes.

    Being open, the site has an XML API to allow other web sites to interact with OpenHazards.

    GetEarthquakeProbability API

    For a specified location, the GetEarthquakeProbability API returns:

    1. The latitude and longitude of the location.

    2. The current expected rate of earthquakes of the specified magnitude over the specified time window.

    3. The current expected probability of experiencing at least one such earthquake.

    The GetEarthquakeProbability Web Service is located at:
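    Since the service returns XML, a client could pull out those three fields with a few lines of parsing.  The element names and values below are assumptions for illustration only; the real schema is documented on the OpenHazards site:

```python
# Minimal sketch of parsing a GetEarthquakeProbability-style XML response.
# The element names and values here are hypothetical, for illustration;
# consult the Open Hazards documentation for the actual schema.
import xml.etree.ElementTree as ET

# A hypothetical response for a magnitude-5.0 forecast over one year.
sample_response = """
<EarthquakeProbability>
  <Latitude>37.77</Latitude>
  <Longitude>-122.42</Longitude>
  <Magnitude>5.0</Magnitude>
  <TimeWindowDays>365</TimeWindowDays>
  <ExpectedRate>0.8</ExpectedRate>
  <Probability>0.55</Probability>
</EarthquakeProbability>
"""

root = ET.fromstring(sample_response)
lat = float(root.findtext("Latitude"))
lon = float(root.findtext("Longitude"))
prob = float(root.findtext("Probability"))
print(f"P(at least one M5.0+ quake within 1 year) near ({lat}, {lon}) = {prob:.0%}")
```

    A data center risk dashboard could poll this kind of feed periodically and flag sites whose probability crosses a threshold.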

    How do you validate OpenHazards results?

    How does OH validate its forecasts?

    Open Hazards validates its forecasts using the same types of statistical testing that are used in the weather/climate/financial forecasting communities.  These tests are used to determine resolution, the ability of a forecast to discriminate between alternative outcomes; reliability, whether the predicted frequency of events matches the observed frequency of events; and sharpness, whether events tend to occur at high forecast probabilities and no events tend to occur at low forecast probabilities, in contrast to methods in which events tend to occur near average values of probability.

    Why did OH choose this approach to validate its forecasts?

    Open Hazards feels that it is best to use testing procedures that have become standardized by extensive use in other fields, rather than inventing new statistical tests whose uses and properties are not well understood.
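    A reliability check of the kind described above is straightforward to sketch: bin forecasts by predicted probability and compare the predicted frequency against the observed frequency in each bin.  The data below is synthetic and the function is my own illustration, not Open Hazards code:

```python
# Sketch of a reliability check: group (forecast probability, event occurred)
# pairs into equal-width probability bins and compare mean forecast probability
# with observed event frequency per bin. Synthetic data, for illustration only.

def reliability_by_bin(forecasts, outcomes, bins=5):
    """Return a list of (mean forecast probability, observed frequency)
    for each non-empty probability bin."""
    results = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        in_bin = [(p, o) for p, o in zip(forecasts, outcomes) if lo <= p < hi]
        if in_bin:
            mean_p = sum(p for p, _ in in_bin) / len(in_bin)
            obs = sum(o for _, o in in_bin) / len(in_bin)
            results.append((round(mean_p, 2), round(obs, 2)))
    return results

# Synthetic, roughly calibrated example: events occur about as often as forecast.
forecasts = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
outcomes  = [0,   0,   0,   0,   1,   1,   1,   1,   1,   0  ]
print(reliability_by_bin(forecasts, outcomes))  # [(0.1, 0.2), (0.9, 0.8)]
```

    A well-calibrated forecast shows the two numbers in each pair tracking closely; large gaps indicate the predicted frequencies do not match what was observed.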

    Click to read more ...


    Evolution of the Data Center Idea – Remove people from installation and service of servers

    I had a conversation with John G. Miner at the Santa Fe Institute’s Business Network event.  John wrote the Intel paper on air economizers to reduce data center cost.

    One of the ideas we discussed that makes total sense for cloud computing and big data center companies is automated handling systems to install and service servers.  At some point, if not now, it is going to make economic sense to use automated handling equipment to deliver servers to the rack location.

    This means current racks and form factors are obsolete, and special connectors need to be developed for power and cooling.  Imagine servers automatically being loaded in shipping to be sent for service, and the reverse on receiving.

    How hot can you make a data center with no people?  And how much could you increase densities or change airflow with no need for people access?

    If anybody can do this I would guess Google and Amazon.  Google has complete control over its data center systems.  Amazon has experience with automated handling systems in its warehouses and knows the ROI of these devices.

    Click to read more ...


    Missouri Data Center site update

    DataCenterKnowledge had a post on a Missouri data center site.

    Koman Eyes Missouri Plot for Data Center

    February 10th, 2009 : Rich Miller

    The Koman Group is working with officials in Boone County, Missouri to develop a 192-acre tract as a data center, according to local media reports. Koman was one of the developers of a huge speculative project in Illinois that was later leased by Microsoft and will soon become one of the world’s largest data centers.

    Ewing Industrial Park has since replaced the Koman Group as the developer of the property.


    Ewing Industrial Park of Columbia, Missouri is in an exceptionally strong position among development sites in the Midwest. The location sits on an unparalleled physical crossroads of power transmission, data infrastructure, water source, and transportation access. In concert with the physical aspects of the site, the Park is remarkably located with respect to intellectual assets, with easy access to world-class researchers in multiple global research-based organizations, including the University of Missouri, Danforth Plant Center, and Monsanto, among others.

    The Ewing Industrial Park can easily accommodate a 150 acre industrial customer with up to 280 acres of total "shovel ready" area planned.  The industrial park offers the following assets in place or adjacent to the site.

    • Water (1.5 million gallon elevated tower)
    • Sewer
    • Redundant telecommunication with Centurytel Metro Ethernet to site and Fiber Path Technologies are available
    • High service gas
    • High service electric with multi-feed, multi-supplier substation on-site
    • Landfill adjacent to the site
    • 5 lane arterial (Missouri Route B)
    • Access by interchange to US Highway 63 (3.1 miles)
    • Access by interchange to I-70 (4.7 miles)
    • Fully signalized access to Missouri Route B
    • Direct access to COLT Railroad (Transload Facility) with Connection to Norfolk Southern
    • Directly adjacent to rail car transfer station
    • Columbia School District with Hallsville School District on site
    • Pre-graded sites with average site grades 0%-2%
    • Existing industrial uses

    Here are some pictures of the site.



    Click to read more ...