    Thursday
    Dec 10, 2009

    Container Data Center form: a silo cylinder or a shipping container box

    CLUMEQ has a new supercomputer that repurposes the silo of its decommissioned Van de Graaff particle accelerator.

    The Quebec site is on the campus of Université Laval inside a renovated Van de Graaff silo, with an innovative cylindrical layout for the data center. This cluster will feature upwards of 12,000 processing elements. Compute racks will be distributed among three floors of concentric rings with a total surface area of 2,700 sq.ft. and an IT capacity of approximately 600 kW.
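    Some quick back-of-the-envelope math on those figures (my own arithmetic, not CLUMEQ's): 600 kW across 2,700 sq.ft. is a much denser footprint than a typical raised-floor room of that era.

        # Rough density estimate from the figures quoted above (my assumptions only).
        it_capacity_w = 600000         # ~600 kW of IT capacity
        floor_area_sqft = 2700         # total area of the three concentric floors
        rack_count = 56                # standard-size racks, per Sun's write-up below

        watts_per_sqft = it_capacity_w / floor_area_sqft
        kw_per_rack = it_capacity_w / rack_count / 1000

        print(f"~{watts_per_sqft:.0f} W/sq.ft., ~{kw_per_rack:.1f} kW per rack")
        # -> roughly 222 W/sq.ft. and ~10.7 kW per rack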

    DataCenterKnowledge picked up the news.

    Wild New Design: Data Center in A Silo

    December 10th, 2009: Rich Miller


    A diagram of the design of the CLUMEQ Colossus supercomputer, from a recent presentation by Marc Parizeau of CLUMEQ.

    Here’s one of the most unusual data center designs we’ve seen. The CLUMEQ supercomputing center in Quebec has worked with Sun Microsystems to transform a huge silo into a data center. The cylindrical silo, which is 65 feet high and 36 feet wide with two-foot thick concrete walls, previously housed a Van de Graaff particle accelerator. When the accelerator was decommissioned, CLUMEQ decided to convert the facility into a high-performance computing (HPC) cluster known as Colossus.

    Here is the YouTube video.

    This idea may seem strange, but it is part of a broader shift toward designing the building around the IT equipment.  Microsoft just did the same thing, showing its Windows Azure containers with the cooling system integrated into the container.


    Sun has its own page on CLUMEQ.

    When supercomputing consortium CLUMEQ designed its high-performance computing (HPC) system in Quebec, it was able to house it in the silo of a former particle accelerator on the Université Laval campus. The structure's 3-level cylindrical floor plan was ideal for cooling the 56 standard-size racks, and enabled the university to retain a treasured landmark.

    Background

    CLUMEQ is a supercomputing consortium of universities in the province of Quebec, Canada. It includes McGill University, Université Laval, and all nine components of the Université du Québec network. CLUMEQ supports scientific research in disciplines such as climate and ecosystems modeling, high energy particle physics, cosmology, nanomaterials, supramolecular modeling, bioinformatics, biophotonics, fluid dynamics, data mining and intelligent systems.


    Wednesday
    Dec 09, 2009

    Christian Belady’s history of PUE

    Christian Belady has a post on the Inflection Point for Efficiency in the data center which provides an early history of PUE as a data center metric.
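    A quick refresher for anyone new to the metric: PUE (Power Usage Effectiveness) is total facility power divided by the power delivered to the IT equipment, so an ideal facility would score 1.0. Here is a tiny worked example with made-up numbers of my own, not Christian's data:

        # PUE = total facility power / IT equipment power (hypothetical numbers).
        it_power_kw = 1000.0         # servers, storage, and network gear
        facility_power_kw = 1700.0   # IT load plus cooling, power distribution losses, lighting

        pue = facility_power_kw / it_power_kw
        print(f"PUE = {pue:.2f}")    # -> 1.70: every watt of IT work costs 1.7 watts overall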

    In my opinion, it wasn’t until 2006 that the industry really did go through a paradigm shift. While there were a few of us who had been pushing efficient computing approaches for over a decade, we had limited success in moving the industry until 2006. What happened in 2006? I think the notion of establishing an industry efficiency metric and data center metrics was born. I thought it would be interesting to recap those milestones based on my perspective:

    Christian provides three milestones you can read in his post.  Here is the third milestone.

    April 23-26, 2006: High-Density Computing: Trends, Challenges, Benefits, Costs, and Solutions

    This symposium was the Uptime Institute’s first and it focused on Density trends. However, it was this conference where I first presented an Efficiency Metric called PUE which seemed to capture the attention of many of the attendees. As a result, I published a paper on PUE with my good friend Chris Malone later in the year at the Digital Power Forum. At this same conference, AMD’s Larry Vertal and Bruce Shaw sat down with Paul Perez (my former VP) and me to discuss the idea of starting a consortium called the Green Grid. Ten months later the Green Grid was officially announced with one of its first whitepapers evangelizing metrics and in particular PUE.

    And Christian takes time to reflect.

    So our industry woke up in 2006 and while the Gartner graph does show we have work ahead of us, I do think we can say that in less than four years the industry has made great strides (and perhaps I shouldn’t complain so much!).

    What we need from Christian is a Part 3 with his predictions for the future, since he has already proven he can make history.

    He did do this in 1998 and revisited it ten years later: http://www.greenm3.com/2008/03/christian-belad.html

    Mar 27, 2008

    Christian Belady's Bottom Line Opinion 10 years ago, We Need A Better System

    Microsoft's Christian Belady was going through his old presentations and found a public presentation, The Big Picture: A Philosophical Discussion to Make Us Think (Download Cbelady.pdf). The presentation is an accumulation of predictions he was making in the late '90s as part of making a case for more efficient computing while at HP.

    Summary
    Power is not just a…
    • Component problem
    • System problem
    • Data center problem
    • Utility infrastructure problem
    We have a huge opportunity to solve these problems as one system and optimize the solution.
    WE NEED A BETTER SYSTEM!

    Big Picture
    Bottom Line
    We need to cooperate to solve these problems on a much larger scale.
    Develop consortiums to address these global issues and influence the industry, government and culture proactively.
    We need to ensure that we have a better world.


    Tuesday
    Dec 08, 2009

    Amazon Web Services Economics Center: comparing AWS/cloud computing vs. co-location vs. an owned data center

    Amazon Web Services has a post on the Economics of AWS.

    The Economics of AWS

    For the past several years, many people have claimed that cloud computing can reduce a company's costs, improve cash flow, reduce risks, and maximize revenue opportunities. Until now, prospective customers have had to do a lot of leg work to compare the costs of a flexible solution based on cloud computing to a more traditional static model. Doing a genuine "apples to apples" comparison turns out to be complex — it is easy to neglect internal costs which are hidden away as "overhead".

    We want to make sure that anyone evaluating the economics of AWS has the tools and information needed to do an accurate and thorough job. To that end, today we released a pair of white papers and an Amazon EC2 Cost Comparison Calculator spreadsheet as part of our brand new AWS Economics Center. This center will contain the resources that developers and financial decision makers need in order to make an informed choice. We have had many in-depth conversations with CIO's, IT Directors, and other IT staff, and most of them have told us that their infrastructure costs are structured in a unique way and difficult to understand. Performing a truly accurate analysis will still require deep, thoughtful analysis of an enterprise's costs, but we hope that the resources and tools below will provide a good springboard for that investigation.

    The AWS team has laid out the costs of AWS Cloud vs. owned IT infrastructure.


    Whitepaper
    The Economics of the AWS Cloud vs. Owned IT Infrastructure. This paper identifies the direct and indirect costs of running a data center. Direct costs include the level of asset utilization, hardware costs, power efficiency, redundancy overhead, security, supply chain management, and personnel. Indirect factors include the opportunity cost of building and running high-availability infrastructure instead of focusing on core businesses, achieving high reliability, and access to capital needed to build, extend, and replace IT infrastructure.

    If you have ever wished for a spreadsheet to help you calculate data center costs, AWS has one.


    The Amazon EC2 Cost Comparison Calculator is a rich Excel spreadsheet that serves as a starting point for your own analysis. Designed to allow for detailed, fact-based comparison of the relative costs of hosting on Amazon EC2, hosting on dedicated in-house hardware, or hosting at a co-location facility, this detailed spreadsheet will help you to identify the major costs associated with each option. We've supplied the spreadsheet because we suspect many of our customers will want to customize the tool for their own use and the unique aspects of their own business.
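    You do not need the spreadsheet to see the shape of the comparison. Here is a stripped-down sketch of the same idea, with purely illustrative numbers of my own and none of the indirect costs (utilization, personnel, cost of capital) that the AWS papers walk through:

        # Toy cloud-vs-owned cost comparison (illustrative numbers, not AWS's model).
        hours_per_month = 730

        # On-demand cloud: pay only for the instance hours actually used.
        hourly_rate = 0.10               # assumed $/instance-hour
        avg_instances = 20               # average concurrent instances actually needed
        cloud_monthly = hourly_rate * avg_instances * hours_per_month

        # Owned hardware: pay for peak capacity around the clock, used or idle.
        servers_for_peak = 40            # provisioned for peak demand
        server_capex_monthly = 150       # assumed per-server hardware cost, amortized monthly
        power_space_monthly = 60         # assumed per-server power, cooling, and space
        owned_monthly = servers_for_peak * (server_capex_monthly + power_space_monthly)

        print(f"cloud ~${cloud_monthly:,.0f}/month vs. owned ~${owned_monthly:,.0f}/month")
        # The real calculator adds co-location, networking, staffing, and utilization factors.

    The point is not these particular numbers; it is that the owned column scales with peak capacity while the cloud column scales with average usage.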

    And they launched an Economics Center.

    AWS Economics Center

    The AWS Economics Center provides access to information, tools, and resources to compare the costs of Amazon Web Services with IT infrastructure alternatives. Our goal is to help developers and business leaders quantify the economic benefits (and costs) of cloud computing.

    Overview

    Amazon Web Services (AWS) gives your business access to compute, storage, database, and other in-the-cloud IT infrastructure services on demand, charging you only for the resources you actually use. With AWS you can reduce costs, improve cash flow, minimize business risks, and maximize revenue opportunities for your business.

    • Reduce costs and improve cash flow.
      Avoid the capital expense of owning servers or operating data centers by using AWS’s reliable, scalable, and elastic infrastructure platform. AWS allows you to add or remove resources as needed based on the real-time demands of your applications. You can lower IT operating costs and improve your cash flow by avoiding the upfront costs of building infrastructure and paying only for those resources you actually use.
    • Minimize your financial and business risks.
      Simplify capacity planning and minimize both the financial risk of owning too many servers and the business risk of not owning enough servers by using AWS’s elastic, on-demand cloud infrastructure. Since AWS is available without contracts or long-term commitments and supports multiple programming languages and operating systems, you retain maximum flexibility. And for many businesses, the security and reliability of the AWS platform often exceeds what they could develop affordably on their own.
    • Maximize your revenue opportunities.
      Maximize your revenue opportunities with AWS by allocating more of your time and resources to activities that differentiate your business to your customers – instead of focusing on IT infrastructure “heavy lifting.” Use AWS to provision IT resources on-demand within minutes so your business’s applications launch in days instead of months. Use AWS as a low-cost test environment to sample new business models, execute one-time projects, or perform experiments aimed at new revenue opportunities.

    Capacity vs. Usage Comparison

    This last graph is the Christmas wish list for enlightened green IT thinkers: IT load that tracks demand.


    Tuesday
    Dec 08, 2009

    Projected PUE of 1.18 for the NCSA Blue Waters Data Center

    I blogged yesterday on the Univ of Illinois NCSA Blue Waters supercomputer.

    Univ of Illinois NCSA facility drops UPS for energy efficiency and cost savings, bldg cost $3 mil per MW

    That post covered a lot of the different parts of what the Univ of Illinois’s NCSA facility is building to host the IBM Blue Waters supercomputer.  I’ve seen lots of people talk about energy efficiency and cost savings, but the things that got my attention were that this facility dropped the UPS entirely and that it is being built for $3 million per MW for a 24 MW facility.

    The one thing I was looking for and couldn’t find was what the PUE would be for the data center.  Thanks to Google Alerts, a person from NCSA contacted me, and I asked for the PUE of the facility.  They sent me this article that mentions PUE.  The answer is 1.18.

    With PCF and Blue Waters, we will achieve a PUE in the neighborhood of about 1.18.

    The article is an interview with IBM Fellow Ed Seminaro, chief architect for Power HPC servers at IBM.  There are actually some excellent points that Ed makes.

    Q: What are the common mistakes people make when building a data center?

    A: One of the most common mistakes I see is designing the data center to be a little too flexible. It is easy to convince yourself that, when you build a building, you really want to build it to accommodate any type of equipment, but this is at the cost of power efficiency.

    As I mentioned in my post yesterday, the building cost is $3 million per MW, much lower than that of a typical data center.

    Another is cost of building construction. Some people spend enormous sums, but really, it gets back to can you design the IT equipment so that it doesn't require too much special capability. And what that really means is that you don't have to build a very special facility, you just have to be able to build the general power and cooling capabilities you need and a good sturdy raised floor. This can save a phenomenal amount of money.
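    Putting the two posts’ figures together (my arithmetic, and I am assuming the 24 MW is the facility’s total design capacity):

        # Quick arithmetic on the NCSA figures quoted above (my assumptions noted).
        capacity_mw = 24               # facility capacity from yesterday's post
        cost_per_mw = 3000000          # ~$3 million per MW of capacity
        projected_pue = 1.18           # from the Seminaro interview

        construction_cost = capacity_mw * cost_per_mw
        overhead_per_it_watt = projected_pue - 1.0   # non-IT power per watt of IT load

        print(f"~${construction_cost / 1e6:.0f}M build cost, "
              f"~{overhead_per_it_watt:.0%} power overhead beyond the IT load")
        # -> about $72M, and roughly 0.18 W of overhead for every watt of IT load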


    Monday
    Dec 07, 2009

    Oregon State Data Center learns from its first data center, a bit of humor

    I saw this Oregon article about Oregon’s state data center.  I started reading expecting to hear interesting data center ideas, but I started to laugh: this was the state of Oregon’s first true data center, and they thought they could run it with inexperienced staff and consolidate servers across organizational boundaries.

    Here is the background.

    The Lesson from Oregon's Data Center: Don't Promise Too Much

    12/04/2009

    State governments across the country are making big changes in their IT departments. They're centralizing their own state data systems in a push to save money. The state of Washington is building a $300 million data center in Olympia. Oregon undertook a similar project a few years ago, but it's been criticized for failing to produce the promised financial savings. Salem Correspondent Chris Lehman found lessons from Oregon.

    The State Data Center is a generic looking office building on the edge of Salem. Inside are the digital nerve centers of 10 state agencies, including Human Services, Corrections and Transportation. This mammoth information repository is so sensitive, you can't get very far before you get to something that operations manager Brian Nealy calls the "man trap." It's kind of like an air lock, you have to clear one set of doors before you can get through the next set.

    And the story continues.

    They have a physical security system.

    Bryan Nealy: "You'll notice there are card readers on every door in the secure part of the data center. That way we can give people access only to the areas they need to go into. It's very granular as far as where people can get. This is the command center. This is manned 24–7, 365."

    Yet their goal was to consolidate across agencies, which would cause huge workflow and security problems.

    Koreski says the original business case for this $63 million facility made assumptions that turned out to be impractical. For example, planners figured they could combine servers from different agencies just by putting them under the same roof. But that's not what happened. Koreski says you can't do the two things at once: physically move the servers and combine their functions.

    Based on this assumption, they promised cost savings.

    Three years after it opened, data managers are still trying to reduce the number of physical machines at the Oregon Data Center. That ongoing work is one of the reasons Data Center Director John Koreski concedes the facility isn't on track to meet the original goal of saving the state money within the first five years.

    John Koreski: "It's not even close."

    So data center operations is dancing around the fact that they didn’t save money, but they did reduce future costs.

    And that change has meant the economies of scale haven't materialized as fast as once thought. Koreski took the reins of the Data Center in January. His predecessor left after a scathing audit from the Oregon Secretary of State's office last year. It said, quote, "It is unlikely that the anticipated savings will occur." But Director Koreski insists the Data Center is saving the state money.

    John Koreski: "What our consolidation efforts resulted in was a cost avoidance, as opposed to a true cost savings where we actually wrote a check back to the Legislature."

    Luckily, Intel and Moore’s Law saved their ass, even though they are making it seem like the data center addresses budget issues.

    In other words, Koreski says the Data Center is growing its capacity at a faster rate than it's growing its budget. That explanation computes for at least one analyst. Bob Cummings works in the Legislative Fiscal Office. It's his job to make sure the numbers add up for major state technology projects. He jumped into the Data Center fray as soon as he was hired last summer, and what Cummings found shocked him.

    The Legislative Fiscal Office calls the rationale for the data center bullshit.

    Bob Cummings: "It was the right thing to do. However, the rationale for doing it, and the baseline cost estimates and stuff for doing it, were all b–––––––. They were all wrong. They were all low."

    Then it gets funnier.

    Cummings says the state of Oregon failed to take into account one key detail: Washington already had a data center and is building a bigger one. In Oregon, no one with the state had ever run a Data Center before.

    We have never done this before, but our first try was a great job.

    Bob Cummings: "I mean, we had to build everything from scratch. And by the way, we did a great job of building a data center but didn't have anybody to run it, didn't have any procedures, no methods. We outsourced to a non–existent organization."

    These guys are amateurs.

    Oregon Department of Administrative Services Director Scott Harra echoed this in his response to the Secretary of State's audit. Harra wrote that the consolidation effort was hampered because it required skills and experience that did not previously exist in Oregon's state government. After last year's audit, Democratic State Representative Chuck Riley led a hearing that looked into the Data Center. He says he's convinced Data Center managers are saving the state money, but:

    Rep. Chuck Riley: "The question is, did they meet their goals. And the answer is basically no, they didn't meet their goals. They over promised."

    And that's the basic message Riley and others have for developers of Washington's data center: Keep expectations realistic. I'm Chris Lehman in Salem.

    So, for all of you looking at Oregon as a state in which to put a data center: you can skip a trip to the Oregon state data center, as I doubt you will hear this story there.  Although it would be entertaining to hear an Oregon politician explain data center operations.
