    Monday
Apr 5, 2010

Relationship of electricity generation and water changes the game: 2 GW Entergy nuclear power plant renewal permit denied over warm water discharge

The WSJ and others report that New York environmental regulators, not the US EPA, have denied Entergy's renewal request for a 2-gigawatt nuclear power plant that supplies 30% of NYC's electricity.


    New York Regulators Deny Water Permit for Nuclear Plant

    By MARK LONG

NEW YORK -- New York environmental regulators have denied a key water-quality certification Entergy Corp. needs to extend by 20 years its license to operate the 2,000-megawatt Indian Point nuclear-power plant.

    The New York Department of Environmental Conservation said in a letter to Entergy dated April 2 that the two units of the plant "do not and will not comply with existing New York State water quality standards," even with the addition of a new screening technology favored by Entergy to protect aquatic life. The plant's existing "once-through" system withdraws and returns as much as 2.5 billion gallons of Hudson River water a day for cooling, a system blamed by environmentalists for damaging the river's ecosystem and killing millions of fish a year, including the endangered shortnose sturgeon.

Certification under the Clean Water Act is required before the U.S. Nuclear Regulatory Commission can approve an extension of the operating license for Indian Point, which generates enough electricity to power approximately 2 million homes and is a major power source for New York City. The licenses for Indian Point units 2 and 3, which came online in the 1970s, are due to expire in September 2013 and December 2015, respectively.
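To put the quoted withdrawal number in perspective, here is a quick back-of-envelope calculation using only the figures in the article. The assumption that the plant runs at full output around the clock is mine, so treat the result as a rough upper bound.

```python
# Back-of-envelope check using the figures quoted above (illustrative only):
# Indian Point's once-through cooling withdraws up to 2.5 billion gallons of
# Hudson River water a day to support roughly 2,000 MW of generation.

withdrawal_gal_per_day = 2.5e9   # gallons/day, from the WSJ article
capacity_mw = 2000               # plant capacity, from the WSJ article

mwh_per_day = capacity_mw * 24                     # assumes full output all day
gal_per_mwh = withdrawal_gal_per_day / mwh_per_day

print(f"~{gal_per_mwh:,.0f} gallons withdrawn per MWh generated")
# ~52,083 gallons/MWh -- a rough upper bound, since the plant does not always
# run at full capacity and the water is returned (warmer) to the river.
```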

What is humorous is the environmental group Riverkeeper thinking that 2 gigawatts of baseload can be brought online by 2015.

    "That power is replaceable," said Alex Matthiessen, president of environmental group Riverkeeper. "The evidence for why the plant doesn't meet state water-quality standards is overwhelming," he said, adding Indian Point accounts for the deaths of about a billion fish a year and that the group estimates cooling towers could be constructed for $200 million to $300 million.

The following is from a study on air and hybrid cooling for power plants versus water cooling.

Emerging Issues and Needs in Power Plant Cooling Systems

Water availability is affecting power plant placement. You should be thinking the same way about data center placement.

    However, with the construction of new power plants in recent years, perhaps the most prevalent concern with wet cooling systems has been water availability. Growing competition from municipal and agricultural users has decreased the amounts and increased the prices of good quality water resources available to industrial users. This competition is most apparent in the southwestern U.S. where the need for new electric power generation is significant, but regional surface water sources are minimal and groundwater sources are highly prized and may have designated use restrictions. But even in areas usually considered “water rich”, such as the northeastern U.S., the combination of environmental, safety & health, and resource availability concerns has resulted in an increasing interest in dry and hybrid cooling systems as alternatives to wet cooling systems.

Size of dry cooling system vs. wet cooling: 2.2 times larger

Size. By definition, dry cooling involves the transfer of heat to the atmosphere without the evaporative loss of water (i.e., by sensible heat transfer only). Because sensible heat transfer is less efficient than evaporative heat transfer, dry cooling systems must be larger than wet cooling systems. For example, to achieve a comparable heat rejection, one study estimates that a direct dry cooling system (ACC) will have a footprint about 2.2 times larger than a wet cooling tower and a height about 1.9 times greater.2
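Those ratios make it easy to rough out the land and height impact of going dry. Here is a minimal sketch; the wet-tower dimensions are hypothetical values I picked just to show the arithmetic.

```python
# Rough sizing comparison using the ratios quoted in the study:
# an air-cooled condenser (ACC) footprint ~2.2x a wet cooling tower,
# height ~1.9x. The wet-tower dimensions below are made-up example values.

FOOTPRINT_RATIO = 2.2   # ACC footprint / wet tower footprint (from the study)
HEIGHT_RATIO = 1.9      # ACC height / wet tower height (from the study)

wet_footprint_m2 = 4000   # hypothetical wet cooling tower footprint, m^2
wet_height_m = 20         # hypothetical wet cooling tower height, m

acc_footprint_m2 = wet_footprint_m2 * FOOTPRINT_RATIO
acc_height_m = wet_height_m * HEIGHT_RATIO

print(f"Dry (ACC) footprint: ~{acc_footprint_m2:,.0f} m^2 "
      f"vs. wet: {wet_footprint_m2:,} m^2")
print(f"Dry (ACC) height:    ~{acc_height_m:.0f} m vs. wet: {wet_height_m} m")
```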

    Maintenance of operations.

• Maintenance. Both direct and indirect dry cooling systems, as well as hybrid cooling systems, are larger and mechanically more complex than corresponding wet cooling systems. In addition to the larger heat transfer surface area, dry and hybrid cooling systems will have more fans, meaning more electrical motors, gearboxes and drive shafts. As such, labor requirements for a large ACC can be substantial. At one site with a 60-cell ACC (three 20-cell bays for three separate steam turbines), the maintenance staff was increased by two people for such activities as cleaning fan blades and heat exchanger tube fins, monitoring lube-oil systems, and leak checking the vacuum system.3
• Energy penalties. Because sensible heat transfer is directly related to the ambient dry-bulb temperature, a dry cooling system must have the flexibility to respond to typical daily temperature variations of 20-25 °F. A dry system that maintains an optimum turbine backpressure at ambient dry-bulb temperatures of 90-95 °F may not be able to do so as the temperature increases, meaning a lower generating efficiency.


    From a design perspective, more surface area (i.e., a larger dry cooling system) can compensate for the decline in heat transfer at high ambient temperatures; but the greater size and associated operational control are also concerns, as previously discussed.

When all things are equal, it comes down to the cost of the systems.

    Costs. If performance, availability and reliability appear to be equal, then the single issue that will most likely govern the selection and use of a power plant cooling system is cost. Unfortunately, the economics of power plant cooling systems are complex, which means cost estimates are frequently mistaken, misunderstood or misrepresented.
This complexity results from the complicated relationships of three key costs: installed equipment capital cost, annual operating and maintenance or O&M cost, and energy penalty cost. For most manufacturing processes, the first two costs can be fairly well defined and, to a certain extent, contractually guaranteed by the vendor/supplier. But the energy penalty cost is somewhat unique to power plant cooling systems because it reflects a direct performance link between the cooling system and the low-pressure turbine-generator. Consequently, the potential for and the magnitude of an energy penalty cost can dictate cooling system design and operating changes that directly affect the capital and O&M costs. So in a competitive market, generating power in the most cost-effective manner depends upon a company’s ability to balance all three key costs and optimize the overall life-cycle cost of the cooling system.
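To make that three-way balance concrete, here is a minimal life-cycle cost sketch. Every dollar figure, the 30-year horizon, and the discount rate are assumptions of mine for illustration, not numbers from the study.

```python
# Illustrative life-cycle cost comparison of cooling options, reflecting the
# three cost categories described above: capital, annual O&M, and the annual
# energy-penalty cost (lost generation at high ambient temperatures).
# All numbers below are hypothetical placeholders, not figures from the study.

def life_cycle_cost(capital, annual_om, annual_energy_penalty,
                    years=30, discount_rate=0.07):
    """Net present value of owning and operating a cooling system."""
    npv = capital
    for year in range(1, years + 1):
        annual = annual_om + annual_energy_penalty
        npv += annual / (1 + discount_rate) ** year
    return npv

options = {
    # name: (capital $, O&M $/yr, energy penalty $/yr) -- hypothetical values
    "wet tower": (30e6, 1.5e6, 0.5e6),
    "hybrid":    (45e6, 1.8e6, 1.0e6),
    "dry (ACC)": (60e6, 2.2e6, 3.0e6),
}

for name, (cap, om, penalty) in options.items():
    print(f"{name:10s}: 30-yr NPV ~ ${life_cycle_cost(cap, om, penalty)/1e6:,.0f}M")
```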

    What is the water footprint of the power plant supplying your data center?

    Are you planning for water as a scarce resource affecting the cooling systems for your data center?

Here is what Google presented on water use at its data center event a year ago.

    Multiple Speakers Discuss Water Issues at Google’s Efficiency Data Center Summit

    I have been blogging about water issues in the data center for a while, and have a category for tagging posts for “water.”


    Friday
Apr 2, 2010

Alternative to Google hiring a Renewable Energy Systems Modeling Engineer

I am spending more time researching the Low Carbon Data Center ideas, and I ran across Google's job posting for a Renewable Energy System Modeling Engineer.

    The role: Renewable Energy System Modeling Engineer - Mountain View

RE<C will require development of new utility-scale energy production systems. But design iteration times for large-scale physical systems are notoriously slow and expensive. You will use your expertise in computer simulation and modeling to accelerate the design iteration time for renewable energy systems. You will build software tools and models of optical, mechanical, electrical, and financial systems to allow the team to rapidly answer questions and explore the design-space of utility-scale energy systems. You will draw from your broad systems knowledge and your deep expertise in software-based simulation. You will choose the right modeling environment for each problem, from simple spreadsheets to time-based simulators to custom software models you create in high-level languages. The models you create will be important software projects unto themselves. You will follow Google's world-class software development methodologies as you create, test, and maintain these models. You will build rigorous testing frameworks to verify that your models produce correct results. You will collaborate with other engineers to frame the modeling problem and interpret the results.

It's great that Google sees the need for this person, but I was curious whether anyone else has done renewable energy system modeling. Guess what, there is, since 1993 in fact. NREL has this page on HOMER.

    New Distribution Process for NREL's HOMER Model

    Note! HOMER is now distributed and supported by HOMER Energy (www.homerenergy.com)

To meet the renewable energy industry’s system analysis and optimization needs, NREL started developing HOMER in 1993. Since then it has been downloaded free of charge by more than 30,000 individuals, corporations, NGOs, government agencies, and universities worldwide.

    HOMER is a computer model that simplifies the task of evaluating design options for both off-grid and grid-connected power systems for remote, stand-alone, and distributed generation (DG) applications. HOMER's optimization and sensitivity analysis algorithms allow the user to evaluate the economic and technical feasibility of a large number of technology options and to account for uncertainty in technology costs, energy resource availability, and other variables. HOMER models both conventional and renewable energy technologies:
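To give a sense of what this kind of model actually computes, here is a toy hourly energy-balance simulation of a PV array, a battery, and grid backup serving a constant load. It only illustrates the flavor of the bookkeeping HOMER automates; it is not HOMER code, and every input value and function name is a hypothetical of mine.

```python
# Toy renewable-energy system model: hourly energy balance for a PV array,
# a battery, and grid backup serving a constant load. Illustrative only --
# this is not HOMER code, and all inputs are hypothetical.
import math

HOURS = 24 * 365
LOAD_KW = 100.0          # constant load (hypothetical)
PV_KW = 250.0            # PV nameplate capacity (hypothetical)
BATTERY_KWH = 400.0      # usable battery capacity (hypothetical)
GRID_PRICE = 0.12        # $/kWh purchased from the grid (hypothetical)

def pv_output(hour):
    """Very crude solar profile: a half-sine between 6am and 6pm."""
    h = hour % 24
    if 6 <= h <= 18:
        return PV_KW * math.sin(math.pi * (h - 6) / 12)
    return 0.0

soc = BATTERY_KWH / 2      # start half charged
grid_kwh = 0.0
for hour in range(HOURS):
    net = pv_output(hour) - LOAD_KW        # surplus (+) or deficit (-) this hour
    if net >= 0:
        soc = min(BATTERY_KWH, soc + net)  # charge the battery with the surplus
    else:
        draw = min(soc, -net)              # discharge the battery first
        soc -= draw
        grid_kwh += (-net - draw)          # buy the remainder from the grid

print(f"Grid purchases: {grid_kwh:,.0f} kWh/yr (~${grid_kwh * GRID_PRICE:,.0f})")
print(f"Load served:    {LOAD_KW * HOURS:,.0f} kWh/yr")
```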


I signed up for the HOMER Energy site, which has 510 users, none apparently Google engineers.


    I hope to make contact with the Homer Energy team as we are trying to have a session at DataCenterDynamics Seattle on a Low Carbon Data Center.

Maybe Google doesn't have to hire the Renewable Energy System Modeling Engineer after all.  :-)


    Wednesday
Mar 31, 2010

    Google Goes Nuclear to increase its defense capabilities, April Fools

Today is Mar 31, but April 1, April Fools' Day, is right around the corner.

TechCrunch has a post on Google's new nuclear acquisition.

    Exclusive: Google To Go Nuclear

    by Michael Arrington on Mar 31, 2010

    Google has acquired a company that has created a new process for highly efficient isotope separation, we’ve confirmed from multiple sources. The primary use of this technology, say experts we’ve spoken with, is uranium enrichment.

    Enriched uranium is a necessary ingredient in the creation of nuclear energy, and one source we’ve spoken with at Google says that this is part of the Google Green Initiative. The company will use the new technology to enable it to design and possibly build small, mobile and highly efficient nuclear power generators. “Google has already begun building an enrichment plant,” says a high ranking IAEA source.

The story continues, implying that Google is developing the capability to build nuclear weapons.

    And more chillingly: “It would be trivial for anyone with this technology to build a nuclear weapon.”

    Google, which has been shaken by its inability to counter Chinese censorship and hacking efforts, may be engaging in enrichment research as part of a new effort to simply protect itself from outside threats.


    Wednesday
Mar 31, 2010

    Long Now, Long View, Long Lived Data Center, a 10,000 year clock - a 10,000 year data center?

    I am currently thinking of rules for the ontology in data center designs.  Translated, I am trying to figure out the principles, components, and relationships for the Open Source Data Center Initiative. 

This is a complex topic to try and explain, but I found an interesting project, the Long Now, started by a bunch of really smart people: Jeff Bezos, Esther Dyson, Mitch Kapor, Peter Schwartz, and Stewart Brand. Here is a video discussing the idea of a 10,000 year clock.

     

But what I found interesting was their long-term approach and transparency, which we will be using in the Open Source Data Center Initiative. And now I am thinking a Long View is part of what we have as principles.

Here are the principles of the Long Now Clock that make a lot of sense to use in data center design.


    These are the principles that Danny Hillis used in the initial stages of designing a 10,000 Year Clock. We have found these are generally good principles for designing anything to last a long time.

Longevity
With occasional maintenance, the clock should reasonably be expected to display the correct time for the next 10,000 years.

Maintainability
The clock should be maintainable with bronze-age technology.

Transparency
It should be possible to determine operational principles of the clock by close inspection.

Evolvability
It should be possible to improve the clock with time.

Scalability
It should be possible to build working models of the clock from table-top to monumental size using the same design.

    Some rules that follow from the design principles:

    Longevity:
    Go slow
    Avoid sliding friction (gears)
    Avoid ticking
    Stay clean
    Stay dry
    Expect bad weather
    Expect earthquakes
    Expect non-malicious human interaction
Don't tempt thieves
    Maintainability and transparency:
    Use familiar materials
    Allow inspection
    Rehearse motions
    Make it easy to build spare parts
    Expect restarts
    Include the manual
Scalability and Evolvability:
    Make all parts similar size
    Separate functions
    Provide simple interfaces

Why think about a 10,000 year clock? Because thinking about slowness teaches us things we don't have time for when we think only of speed.

    Hurry Up and Wait

    The Slow Issue > Jennifer Leonard on January 5, 2010 at 6:30 am PST


    We asked some of the world’s most prominent futurists to explain why slowness might be as important to the future as speed.

    Julian Bleecker
    Julian Bleecker, a designer, technologist, and co-founder of the Near Future Laboratory, devises “design-to-think experiments” that focus on interactions away from conventional computer settings. “When sitting at a screen and keyboard, everything is tuned to be as fast as possible,” he says. “It’s about diminishing time to nothing.”
So he asks, “Can we make design where time is inescapable and not be brought to zero? Would it be interesting if time were stretched, or had weight?” To test this idea, Bleecker built a Slow Messaging Device, which automatically delayed electronic (as in, e-mail) messages. Especially meaningful messages took an especially long time to arrive.

    Read more: http://www.good.is/post/hurry-up-and-wait#ixzz0jmOEcLg4
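For the curious, here is a small sketch of the slow-messaging idea: hold each message for a delay that grows with a "meaningfulness" score. The scoring rule and the delay scale are inventions of mine, not how Bleecker's device actually works.

```python
# Sketch of the "slow messaging" idea: hold each message for a delay that
# grows with how meaningful it is, instead of delivering it instantly.
# The scoring rule and delays are invented for illustration.
import heapq
import time

def meaningfulness(text):
    """Hypothetical score: longer, more personal messages score higher."""
    score = min(len(text) / 500.0, 1.0)
    if any(word in text.lower() for word in ("love", "miss", "thank")):
        score = min(score + 0.5, 1.0)
    return score

def schedule(messages, max_delay_s=10.0):
    """Queue messages so the most meaningful ones arrive last."""
    queue = []
    now = time.time()
    for msg in messages:
        deliver_at = now + meaningfulness(msg) * max_delay_s
        heapq.heappush(queue, (deliver_at, msg))
    return queue

def deliver(queue):
    while queue:
        deliver_at, msg = heapq.heappop(queue)
        time.sleep(max(0.0, deliver_at - time.time()))
        print(f"[{time.strftime('%H:%M:%S')}] {msg}")

deliver(schedule(["meeting moved to 3pm", "Thank you for everything, I miss you"]))
```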

The biggest unknown problems in data centers are those things that we didn't think were going to happen. And this leaves the door open to over-engineering, increased cost, brittleness of systems, and delays. Taking a Long View of what the future could possibly look like can help you see things you normally wouldn't.



    Wednesday
Mar 31, 2010

50% of IT budgets treat electricity as a free resource, Avanade survey discovers

    Avanade has a news release on a survey revealing the disconnect between IT and electricity use.

    GLOBAL STUDY: MORE THAN HALF OF COMPANIES FAIL TO ACCOUNT FOR ENERGY COSTS IN IT BUDGETS
    Executives and IT decision-makers cite energy as a top cost in IT operations; Survey reveals disconnect in budgeting
    SEATTLE – March 31, 2010 – According to a recent survey commissioned by Avanade, a business technology services provider, there is a clear gap in energy policies within IT departments. Companies recognize energy as a top cost, but ultimately, more than half of respondents fail to account for energy costs when developing IT budgets.


Also, Avanade has a press release on customers' interest in Microsoft Cloud Computing.

    In 2009 Avanade engaged Kelton Research to conduct two global surveys on cloud computing – one in February 2009 and the other in September 2009. Between the first survey and the second, there was a 320 percent increase in executives and IT decision-makers reporting they are testing or planning to implement cloud computing technologies.

Makes me wonder how many enterprises are forgetting to account for the electricity bill as a cost saving when evaluating cloud computing.
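As a rough illustration of how big that overlooked line item can be, here is a back-of-envelope estimate. Every input is an assumption of mine, not a figure from the Avanade survey.

```python
# Back-of-envelope estimate of the electricity line item many IT budgets skip.
# Every input below is an assumption for illustration, not Avanade survey data.

servers = 500               # number of servers (hypothetical)
avg_watts_per_server = 350  # average draw including utilization (hypothetical)
pue = 1.8                   # facility overhead multiplier (hypothetical)
price_per_kwh = 0.10        # $/kWh (hypothetical)

kwh_per_year = servers * avg_watts_per_server / 1000 * pue * 8760
annual_cost = kwh_per_year * price_per_kwh

print(f"Electricity: {kwh_per_year:,.0f} kWh/yr ~ ${annual_cost:,.0f}/yr")
# With these assumptions, roughly $276K/yr -- a line item worth budgeting,
# and worth comparing against any cloud migration business case.
```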
