    Tuesday
    Oct 20, 2009

    Google, Intel, NetApp Fund Wimpy Node/Server Research

    News.com has an article on low-power server/node research that is funded by Google, Intel, and NetApp.

    Researchers tout 'wimpy nodes' for Net computing

    by Stephen Shankland

    Mainstream servers are growing increasingly brawny with multicore processors and tremendous memory capacity, but researchers at Carnegie Mellon University and Intel Labs Pittsburgh think 98-pound weaklings of the computing world might be better suited for many of the jobs on the Internet today.

    This first-generation FAWN system has an array of boards, each with its own processor, flash memory card, and network connection.

    (Credit: Carnegie Mellon University)

    The alternative the researchers advocate is named FAWN, short for Fast Array of Wimpy Nodes. It's described in a paper just presented at the Symposium on Operating Systems Principles.

    In short, the researchers believe some work can be managed with lower expense and lower power consumption using a cluster of servers built with lower-end processors and flash memory than with a general-purpose server. And these days, with green technology in vogue and power costs no longer an afterthought, efficient computing is a big deal.

    "We were looking at efficiency at sub-maximum load. We realized the same techniques could serve high loads more efficiently as well," said David Andersen, the Carnegie Mellon assistant professor of computer science who helped lead the project.

    It's not just academic work. Google, Intel, and NetApp are helping to fund the project, and the researchers are talking to Facebook, too. "We want to understand their challenges," Andersen said.

    What scenarios are they looking at?

    The FAWN approach can be adjusted with hard drives or conventional memory to match various sizes of datasets or rates of the queries retrieving that data.

    (Credit: Carnegie Mellon et al.)

    The key value of FAWN
    So where exactly is FAWN useful? Andersen makes no claims that it's good for everything--but the use cases are often central to companies at the center of the ongoing Internet revolution.

    Specifically, it's good for situations where companies must store a lot of smaller tidbits of information that's read from the storage system much more often than it's written. Often this data is stored in a form called "key-value pairs." These consist of an indexing key and some associated data: "The key might be 'Dave Andersen update 10,579.' The update value might be 'Back in Pittsburgh.'"
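
    To make the key-value model concrete, here is a minimal Python sketch of the kind of get/put interface such a store exposes, using the article's example key and value. The class name and the in-memory dict backing are illustrative assumptions, not FAWN's actual implementation, which spreads keys across wimpy nodes and serves the values from each node's flash storage.

```python
# Minimal key-value interface of the sort FAWN-style stores expose.
# Illustrative sketch only: a real FAWN deployment maps each key to a
# wimpy node and reads the value from that node's flash-backed store.
class KeyValueStore:
    def __init__(self):
        self._data = {}  # stand-in for flash-backed storage on a node

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)  # returns None if the key is absent

store = KeyValueStore()
store.put("Dave Andersen update 10,579", "Back in Pittsburgh.")
print(store.get("Dave Andersen update 10,579"))  # -> Back in Pittsburgh.
```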

    How much energy can they save? 52 queries per joule for a typical server vs. 346 queries per joule for FAWN.

    The researchers compared how many datastore queries could be accomplished per unit of energy and found FAWN compelling: a conventional server with a quad-core Intel Q6700 processor, 2GB of memory, and an Mtron Mobi solid-state drive measured 52 queries per joule of energy compared to 346 for a FAWN cluster. And tests of a newer design show even more promise: "Our preliminary experience using Intel Atom-based systems paired with SATA-based Flash drives shows that they can provide over 1,000 queries per Joule," the paper said.
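
    To put those figures in per-query terms, here is a quick back-of-the-envelope check (my arithmetic on the numbers quoted above, not additional data from the paper):

```python
# Back-of-the-envelope comparison using the figures quoted above.
server_qpj = 52    # queries per joule, conventional quad-core server + SSD
fawn_qpj = 346     # queries per joule, first-generation FAWN cluster
atom_qpj = 1000    # queries per joule, newer Atom + SATA flash prototype

print(f"Energy per query, server: {1000 / server_qpj:.1f} mJ")   # ~19.2 mJ
print(f"Energy per query, FAWN:   {1000 / fawn_qpj:.1f} mJ")     # ~2.9 mJ
print(f"FAWN advantage: {fawn_qpj / server_qpj:.1f}x")           # ~6.7x
print(f"Atom prototype advantage: ~{atom_qpj / server_qpj:.0f}x")  # ~19x
```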

    Tuesday
    Oct 20, 2009

    Congress requires Green Data Center For Homeland Security Department

    DataCenterKnowledge has a post on Congress requiring the Department of Homeland Security (DHS) to be greener.

    DHS Data Center Funding Tied to Efficiency

    October 19th, 2009: Rich Miller

    Congress has told the Department of Homeland Security that it must improve the power efficiency of its data center in Mississippi before it can get additional funds for an ongoing data center consolidation, NextGov reports.

    The facility at NASA’s Stennis Space Center is one of two sites where DHS hopes to consolidate its data centers by 2013. But the facility’s power consumption is taxing the capacity of the Stennis campus, leading the House to restrict nearly half of the site’s $83 million budget until it upgrades its power capacity and improves its power efficiency.

    The NextGov site has additional information.

    Congress requires Homeland Security's data center to go green

    BY JILL R. AITORO 10/16/2009

    In the funding bill for the Homeland Security Department that it passed on Thursday, the House restricted more than half of the nearly $83 million budget for a massive data center until DHS develops ways to ensure there is enough power to sustain operations.

    The fiscal 2010 Homeland Security appropriations bill requires the department to spend $38.5 million to upgrade the power capabilities at the National Center for Critical Information Processing and Storage, known as Data Center One and based at NASA's Stennis Space Center, near the Gulf Coast in Mississippi. Homeland Security cannot spend the remaining $45 million on building out the data center, which will provide information processing for the entire department, until DHS officials can make certain the data center has enough power and uses green technologies to reduce demand.

    Is this the start of more government data centers being required to go green? How much energy efficiency will be sufficient to win Congress's approval?

    Monday
    Oct 19, 2009

    Virtualization Blogger asks “how green is your data center?”

    Virtualization Journal has a post asking How green is your data center?

    How Green is Your Data Center?

    Give us your opinions and experiences designing and implementing the green data center

    BY JOHN SAVAGEAU

    Data Center “X” just announced a 2 MegaWatt expansion to their facility in Northern California. A major increase in data center capacity, and a source of great joy for the company. And the source of potentially 714 additional tons of carbon introduced each month into the environment.

    Think Green and Efficient

    Many groups and organizations are gathering to address the need to bring our data centers under control. Some are focused on providing marketing value for their members, most others appear genuinely concerned with the amount of power being consumed within data centers, the amount of carbon being produced by data centers, and the potential for using alternative or clean energy initiatives within data centers. There are stories around which claim the data center industry is actually using up to 5% of power consumed within the United States, which if true, makes this a really important discussion.

    What I found entertaining was the author's use of search results to imply the importance of the topic.

    If you do a “Bing” search on the topic of “green data center,” you will find around 144 million results. Three times as many as a “paris hilton” search. That makes it a fairly saturated topic, indicating a heck of a lot of interest. The first page of the Bing search gives you a mixture of commercial companies, blogs, and “ezines” covering the topic – as well as an organization or two. Some highlights include:

    I show up #2 in bing.com search results vs. #1 in google.com search results for “green data center.” It turns out I get 16X more total traffic (not just for “green data center”) through Google search than through Bing search. Search matters here, as I get 63% of my web site traffic through search. In fact, I get more traffic through images.google.com search than through bing.com search.

    Friday
    Oct 16, 2009

    Data Center Summit – Social Networking driving Innovation

    KC Mares posted a blog entry on the SVLG Data Center Summit.

    SVLG Data Center Summit a GREAT Success

    Yesterday, October 15th, the culmination of a year's worth of work from over 60 people, the SVLG Data Center Energy Efficiency Summit went off smoothly. We had 44 presenters, 24 case studies presented, and about 450 people at the summit. The event was hosted by NetApp in Sunnyvale. Representatives from numerous Silicon Valley elites, start-ups, VCs, and solution companies were present. All case studies were presented by data center end-users, showing what they are doing to reduce energy use in their data centers. We had brief sessions about cloud and carbon reductions, and a notable session called the Chill Off 2, in which various cooling technologies were tested with real load in a real data center, also testing the systems at various temperature ranges. Andrew Fanara with EPA gave a quick update on EnergyStar for servers, storage, and networking gear. Paul Scheihing with DOE provided an update on the energy efficiency programs and plans for data centers. I had a candid interview with California Energy Commission Commissioner and old friend Jeff Byron about California's energy policy, zero-energy buildings requirement, renewable portfolio standards, energy efficiency standards for TVs and other consumer devices, etc. It was fun!

    KC goes on to highlight the collaboration and interaction.

    Overall, a wonderful event. It was great to see so many industry friends, old and new, and to make new friends. As the co-chair of the program and summit, it was great to see so many people interacting with each other, beginning collaborations stimulated by the excellent case studies presented, which is what the program is all about: innovation through collaboration. Together we all benefit when we share with each other, and consequently, we as an industry improve. It was wonderful to see every presenter do a fantastic job showing off their wonderful case studies. No vendors showing off their products; instead, everyone sharing information.

    What this event demonstrates for the data center industry is the power of social networking to drive innovation. Intel's Eleanor Wynn gave a presentation that discusses this concept.

    Session Title:
    Social Networking and Innovation

    Abstract:
    This session will present research on social network topographies.
    Topics include:
    • Can social networks generate innovation?
    • Effective and ineffective network topologies
    • Characteristics of social networks that allow predictions on success
    • Current social media technologies at Intel and the types of additional capabilities that are needed to support ongoing collaborative networks across the globe

    Speaker:
    Eleanor Wynn, Social Technology Architect and Principal Engineer
    Intel Corporation

    I am talking about social networking, not social media.

    The organizers and sponsors got their value as people stuck around for the cocktail reception.

    The cocktail reception at the end of the day drew about 200 people who wanted to stay and chat, make friends, and just have fun. So many thanks go out to my committee, which brought the case studies and presentations to us, and which includes, but is not limited to: Bill Tschudi, Bob Hines, Bruce Myatt, Dale Sartor, David Mastrandrea, Deborah Grove, James Bickford, Kelly Aaron, Mukesh Khattar, Patricia Nealon, Ralph Renne, Rosemary Scher, Tersa Tung, and Zen Kishimoto; to Ray Pfeifer, my program co-chair, who brought this program to us last year and so many of the case studies this year again, and whose leadership keeps this program about the end-user; to LBNL, CEC, CIEE and PG&E for helping to fund case studies and support the program; and to the many sponsors of the summit. And certainly to SVLG and their staff for helping to make this summit a reality, and most certainly also their lead person, Bob Hines, for his drive and energy. Overall, an excellent day, full of wonderful people, making new and great little discoveries with each other to advance the energy efficiency and financial success of our businesses, and helping to lead the data center industry to greater success.

    Unfortunately, I couldn't make it to the event, but I am reaching out to my social network of people who did go to find out what comments they had. And KC is going to help extract some of the highlights.

    Congratulations to KC for hosting a great event.

    Thursday
    Oct 15, 2009

    Google Releases Q3 2009 PUE Numbers

    Google just updated their PUE measurement page with Q3 2009 numbers.

    Quarterly energy-weighted average PUE: 1.22
    Trailing twelve-month (TTM) energy-weighted average PUE: 1.19
    Individual facility minimum quarterly PUE: 1.15 (Data Center B)
    Individual facility minimum TTM PUE*: 1.14 (Data Center B)
    Individual facility maximum quarterly PUE: 1.33 (Data Center H)
    Individual facility maximum TTM PUE*: 1.21 (Data Center A)

    * Only facilities with at least twelve months of operation are eligible for Individual Facility TTM PUE reporting
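
    "Energy-weighted" presumably means each average is computed from the summed energies over the period rather than by averaging individual PUE readings; under that reading (my interpretation, not Google's stated definition), the reported figure is:

```latex
\mathrm{PUE}_{\text{energy-weighted}}
  = \frac{\sum_{i} E_{\text{total},i}}{\sum_{i} E_{\text{IT},i}}
```

    where the sums run over the measurement intervals in the quarter or the trailing twelve months.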

    What is nice is that the Google guys have included their latest facility, Data Center J, even though it has only one data point. Data Centers G, H, and I are also noted as not yet being fully tuned.

    Notes:

    We added one new facility, Data Center J, to our PUE report. Overall, our fleet QoQ results were as expected. The Q3 total quarterly energy-weighted average PUE of 1.22 was higher than the Q2 result of 1.20 due to expected seasonal effects. The trailing twelve-month energy-weighted average PUE remained constant at 1.19. YoY performance improved from facility tuning and continued application of best practices. The quarterly energy-weighted average PUE improved from 1.23 in Q3'08, and the TTM PUE improved from 1.21. New data centers G, H, I, and J reported elevated PUE results as we continue to tune operations to meet steady-state design targets.

    The Google guys know their numbers will be scrutinized, so they describe their measurement methodology and error analysis.

    Measurement Methodology

    The PUE of a data center is not a static value. Varying server and storage utilization, the fraction of design IT power actually in use, environmental conditions, and other variables strongly influence PUE. Thus, we use multiple on-line power meters in our data centers to characterize power consumption and PUE over time. These meters permit detailed power and energy metering of the cooling infrastructure and IT equipment separately, allowing for a very accurate PUE determination. Our facilities contain dozens or even hundreds of power meters to ensure that all of the power-consuming elements are accounted for in our PUE calculation, in accordance with the metric definition. Only the office space energy is excluded from our PUE calculations. Figure 3 shows a simplified power distribution schematic for our data centers.

    Figure 3: Google Data Center Power Distribution Schematic

    Equation for PUE for Our Data Centers

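    The equation itself appears only as an image on Google's page. A plausible reconstruction in terms of the quantities defined in the list below (my reading of the schematic and term definitions, not the published formula verbatim) is:

```latex
\mathrm{PUE} =
  \frac{E_{US1} + E_{US2} + E_{TX} + E_{HV}}
       {E_{US2} - E_{LV} - E_{CRAC} - E_{UPS} + E_{Net1}}
```

    The numerator approximates total facility energy (both substation feeds plus the upstream transformer and cable losses), while the denominator isolates IT equipment energy by removing cooling, UPS, and low-voltage distribution losses from the type 2 feed and adding back the network room load served from the type 1 feed.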

    • EUS1: Energy consumption for type 1 unit substations feeding the cooling plant, lighting, and some network equipment
    • EUS2: Energy consumption for type 2 unit substations feeding servers, network, storage, and CRACs
    • ETX: Medium and high voltage transformer losses
    • EHV: High voltage cable losses
    • ELV: Low voltage cable losses
    • ECRAC: CRAC energy consumption
    • EUPS: Energy loss at UPSes which feed servers, network, and storage equipment
    • ENet1: Network room energy fed from type 1 unit substations

    Error Analysis

    To ensure our PUE calculations are accurate, we performed an uncertainty analysis using the root sum of the squares (RSS) method.  Our uncertainty analysis shows that the overall uncertainty in the PUE calculations is less than 2% (99.7% confidence interval).  Our power meters are highly accurate (ANSI C12.20 0.2 compliant) so that measurement errors have a negligible impact on overall PUE uncertainty.  The contribution to the overall uncertainty for each term described above is outlined in the table below.
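
    For reference, the root-sum-of-squares combination they mention is the standard propagation rule for independent error terms (a generic formula, not one quoted from Google's report): for a quantity f(x1, ..., xn) with independent uncertainties in each xi,

```latex
\delta f = \sqrt{\sum_{i} \left( \frac{\partial f}{\partial x_i}\,\delta x_i \right)^{2}}
```

    applied here with f equal to PUE and the xi equal to the measured energy terms.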

    Overall contribution to uncertainty by term:
    • EUS1: 4%
    • EUS2: 9%
    • ETX: 10%
    • ECRAC: 70%
    • EUPS: <1%
    • EHV: 2%
    • ELV: 5%
    • ENet1: <1%
