    Friday
    Mar 05, 2010

    Improving your Washington Data Center visit: spend extra time at Cave B Inn

    If you ever go to Eastern Washington for a data center visit, don’t fall into the trap most people do: booking your schedule so tight that you just drive in and out.

    I’ve fallen into this trap on my visits to the Quincy, WA area, where Microsoft, Yahoo, Intuit, and Ask.com have data centers.  You leave late in the day after work, or arrive on an evening flight, then drive 2 to 2 1/2 hours east to Quincy.  You ask where to stay, and the overwhelming recommendation is Cave B Inn.  You book the corporate rate that one of the companies has with Cave B Inn, and you arrive somewhere between 9 and 11 p.m., when the restaurant is closed.  You have a full day the next day, then go home.  You wander to your room after checking in.

    And you don’t even notice the view, as it is dark.

    You get up in the morning and see the view.

    You missed the wine tasting room.

    I just got back from a night’s stay at Cave B Inn, and this time I didn’t visit a data center.  We had won the stay at a school auction, so this was a pleasure trip, and I got a chance to enjoy the visit.  I told my wife I had been here before, and she couldn’t believe how nice it was.

    Here is a 360 degree view I shot this morning.

    How nice is the hotel?  When the Dave Matthews Band plays at The Gorge Amphitheatre, the band books the whole hotel for ten days.

     

    If the Dave Matthews Band can stay here for 10 days, it’s worth spending a bit of extra time yourself.  I am glad I did.  Many who make frequent trips to Quincy, WA have learned to make the time, but first-timers will miss out.

    BTW, I couldn’t resist sending an e-mail to Mike Manos, who has made this trip many times, telling him I was in the Cave B wine tasting room.  I bought two cases of wine on this visit.


    Hint: for those of you bringing customers out to Quincy, don’t miss the opportunity to add a wine tasting.  Who else gets to taste wine as part of a data center visit?


    Thursday
    Mar 04, 2010

    Microsoft Research Images of Measuring Energy use of a Virtual Machine, JouleMeter, Part 2

    I blogged about Microsoft Research’s JouleMeter, and I am getting lots of hits on the post.  I found this PDF that explains more details in a poster.

    Below are images extracted from the PDF, reoriented for the data center audience.

    [Images from the JouleMeter poster PDF]


    Wednesday
    Mar 03, 2010

    Christian Belady migrates to Cloud Computing, joins Microsoft Research eXtreme Computing Group

    I’ve started discussing cloud computing more, as cloud infrastructure does almost all the things a green data center does and has the industry’s attention.  I can’t think of anyone who wants to go to the cloud in order to over-provision hardware, create silos that don’t work together, or ignore their energy use.

    Mike Manos announced his move to the Mobile Cloud.

    I am extremely happy to announce that I have taken a role at Nokia as their VP of Service Operations.  In this role I will have global responsibility for the strategy, operation and run of infrastructure aspects for Nokia’s new cloud and mobile services platforms.

    Christian Belady announced his move to Microsoft Research’s eXtreme Computing Group.

    But even with all of this change, I see there is even more opportunity now than there was when I started at Microsoft almost three years ago. Cloud computing has made mining and developing the “right” opportunities even that much more important. We need to think about how we tie together the complete ecosystem of the software stack, the IT, the data center and the grid today, and what efficiencies we can drive from our research and development for the future. For those of you that know me – this is the kind of opportunity that makes me salivate. There aren’t many people around tasked with this kind of challenge, and this is the opportunity I have been given in the evolution of my career at Microsoft. This week I begin tackling these projects within the Microsoft Research group, in a team called the eXtreme Computing Group.

    How big is the Cloud for Microsoft Research?  The top three demos listed on their TechFest demo page are about Cloud Computing.

    Client + Cloud Computing for Research

    Scientific applications have diverse data and computation needs that scale from desktop to supercomputers. Besides the nature of the application and the domain, the resource needs for the applications also vary over time—as the collaboration and the data collections expand, or when seasonal campaigns are undertaken. Cloud computing offers a scalable, economic, on-demand model well-matched to evolving eScience needs. We will present a suite of science applications that leverage the capabilities of Microsoft's Azure cloud-computing platform. We will show tools and patterns we have developed to use the cloud effectively for solving problems in genomics, environmental science, and oceanography, covering both data and compute-intensive applications.

    Cloud Faster

    To make cloud computing work, we must make applications run substantially faster, both over the Internet and within data centers. Our measurements of real applications show that today's protocols fall short, leading to slow page-load times across the Internet and congestion collapses inside the data center. We have developed a new suite of architectures and protocols that boost performance and the robustness of communications to overcome these problems. The results are backed by real measurements and a new theory describing protocol dynamics that enables us to remedy fundamental problems in the Transmission Control Protocol. We will demo the experience users will have with Bing Web sites, both with and without our improvements. The difference is stunning. We also will show visualizations of intra-data-center communication problems and our changes that fix them. This work stems from collaborations with Bing and Windows Core Operating System Networking.

    Energy-Aware VMs and Cloud Computing

    Virtual machines (VMs) become key platform components for data centers and Microsoft products such as Win8, System Center, and Azure. But existing power-management schemes designed at the server level, such as power capping and CPU throttling, do not work with VMs. VMmeter can estimate per-VM power consumption from Hyper-V performance counters, with the assistance of WinServer2008 R2 machine-level power metering, thus enabling power management at VM granularity. For example, we can selectively throttle VMs with the least performance hit for power capping. This demo compares VMmeter-based with hardware-based power-management solutions. We run multiple VMs, one of them being a high-priority video playback on a server. When a user requests power capping with our solution, the video playback will maintain high performance, while with hardware-capping solutions, we see reduced performance. We also will show how VMmeter can be part of System Center management packs.
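
    The per-VM metering idea is easy to sketch.  Here is a minimal illustration, assuming a simple linear power model; the counter names, coefficients, and throttling policy are my illustrative assumptions, not Microsoft's actual VMmeter implementation:

        # Hypothetical sketch of per-VM power estimation and power capping
        # in the style described above. All coefficients and counter values
        # are illustrative assumptions, not Microsoft's learned model.

        IDLE_WATTS = 120.0          # static server power (assumed, learned offline)
        CPU_WATTS_PER_UTIL = 1.5    # watts per % CPU utilization (assumed)
        DISK_WATTS_PER_MBPS = 0.3   # watts per MB/s of disk I/O (assumed)

        def vm_power(cpu_util_pct, disk_mbps):
            """Estimate one VM's dynamic power from its resource counters."""
            return CPU_WATTS_PER_UTIL * cpu_util_pct + DISK_WATTS_PER_MBPS * disk_mbps

        def server_power(vms):
            """Server power = idle power + sum of per-VM dynamic power."""
            return IDLE_WATTS + sum(vm_power(v["cpu"], v["disk"]) for v in vms)

        def cap_power(vms, budget_watts):
            """Throttle lowest-priority VMs first until the server fits the cap."""
            for vm in sorted(vms, key=lambda v: v["priority"]):
                if server_power(vms) <= budget_watts:
                    break
                vm["cpu"] *= 0.5  # halve the VM's CPU share (stand-in for throttling)
            return server_power(vms)

        vms = [
            {"name": "video-playback", "cpu": 60.0, "disk": 5.0,  "priority": 10},
            {"name": "batch-job",      "cpu": 90.0, "disk": 40.0, "priority": 1},
        ]
        print(cap_power(vms, budget_watts=300.0))  # batch job throttled, video untouched

    The point of the sketch is the capping policy the demo describes: when the server must fit a power budget, the low-priority batch VM is throttled first, so the high-priority video playback keeps its performance.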

    Microsoft Research is lucky to find someone who is influential in the industry and has spent three years in Microsoft’s own data center operations. There are few who could make the jump from data center operations to Microsoft Research, and I am sure Christian will constantly be working on knowledge transfers across the teams.

    For the rest of the industry, I’ve got a feeling we are going to see even more of Christian’s ideas out there now that he is in Microsoft Research.

    This is exciting in itself, but what really gets me “charged” is the opportunity I have now to work between my former group Global Foundation Services (that drives the current Microsoft cloud infrastructure) and my new group Microsoft Research (MSR). Taking the best practices from what we have learned with our current and future Gen 4 data centers and combining them with the resources of one of the best research organizations in the world (MSR), I am convinced that many new and exciting things will come. And best of all, I am lucky to be right smack in the middle of it and will still be working closely with the teams driving the hardware architecture for the cloud today and in the future. So actually, I am really not leaving GFS but rather extending the reach of GFS into the future. Who can ask for a better opportunity….man I love this company!


    Wednesday
    Mar 03, 2010

    Open Source Data Center Initiative Story by Mike Manos

    I wrote a post announcing GreenM3 partnering with the University of Missouri and ARG Investments, with Mike Manos as an industry advisor.  I spent a few paragraphs explaining the use of an Open Source Software model applied to data centers.

    Mike Manos took the time to write his own post in response to mine, and it is a well-written story that explains why we are using this approach.

    My first reaction was to cut and paste relevant parts and add comments, but the whole story makes sense. So for a change, I am going to copy his whole post below to make sure we have it in two places.

    Open Source Data Center Initiative

    March 3, 2010 by mmanos

    There are many in the data center industry who have repeatedly called for change in this community of ours.  Change in technology, change in priorities, change for the future.  Over the years we have seen those changes come very slowly, and while they are starting to move a little faster now (primarily due to the economic conditions and scrutiny over budgets more so than a desire to evolve our space), our industry still faces challenges and resistance to forward progress.   There are lots of great ideas and lots of forward thinking, but moving this work to execution and educating business leaders as well as data center professionals to break away from those old stand-by accepted norms has not gone well.

    That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a Not-For-Profit Data Center specific organization.   You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3.   Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas and stand-outs doing great work.  More importantly, it doesn’t stop there.  We have been able to put together quite a team of industry heavy-weights to get involved in this effort.  Those announcements are forthcoming, and when they come, I think you will get a sense of the type of sea-change this effort could potentially have.

    One of the largest challenges we have with regard to data centers is education.   Those of you who follow my blog know that I believe some engineering and construction firms are incented ‘not to change’ or not to implement new approaches.  The cover of complexity allows customers to remain in the dark while innovation is stifled. Those forces who desire to maintain an aura of black-box complexity around this space, and repeatedly speak to the arcane arts of building out data center facilities, have been at this a long time.  To them, the interplay of systems requiring one-off monumental temples to technology on every single build is the norm.  It’s how you maximize profit, and keep yourself in a profitable position.

    When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, or Uptime Institute, Data Center Pulse, or the other competing industry groups.  Essentially, was this going to be yet another competing thought-leadership organization?  The very specific answer to this is no, absolutely not.

    These groups have been out espousing best practices for years.  They have embraced different technologies, they have tried to educate the industry, they have been pushing for change (for the most part).  They do a great job of highlighting the challenges we face, but for the most part have waited around for universal good will and monetary pressures to make them happen.  It dawned on us that there was another way.   You need to ensure that you build something that gains mindshare, that gets the business leadership’s attention, that causes a paradigm shift.   As we put the pieces together, we realized that the solution had to be credible, technical, and above all have a business case around it.   It seemed to us the parallels to the Open Source movement and the applicability of the approach were a perfect match.

    To be clear, this Open Source Data Center Initiative is focused on execution.   It’s focused on putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly put together, and moreover on standardizing the approaches that both end-users and engineering firms take to the data center industry.

    Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness.  More than all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize conversations.

    If you think of the Linux movement, and all of those who actively participate in submitting enhancements and features, even pulling together specific build packages for distribution, one could see such things emerging in the data center engineering realm.   In fact, with the myriad of emerging technologies assisting in more energy efficiency, greater densities, differences in approach to economization (air or water), and use or non-use of containers, it’s easy to see the potential for this component-based design.

    One might think that we are effectively trying to put formal engineering firms out of business with this kind of work.  I would argue that this is definitely not the case.  While it may have the effect of removing some of the extra profit that results from the current ‘complexity’ factor, this initiative should specifically drive common requirements, lead to better-educated customers, drive specific standards, and result in real-world testing and data from the manufacturing community.  Plus, as anyone who has ever actually built a data center knows, the devil is in the localization and details.  And as this is an open-source initiative, we will not be formally signing the drawings from a professional engineering perspective.

    Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a ‘package’ or ‘RPM’, if I could steal a term from the Red Hat Linux nomenclature.  Moreover, we will be able to start driving true visibility of costs, both upfront and operating, and associate those costs with the set designs, with differences and trending from regions around the world.  If it’s successful, it could be a very good thing.

    We are not naive about this, however.  We certainly expect there to be some resistance to this approach out there, and in fact some outright negativity from those firms that make the most of the black-box complexity components.

    We will have more information on the approach and what it is we are trying to accomplish very soon. 

    \Mm


    Wednesday
    Mar 03, 2010

    Microsoft Research Paper, Measuring Energy use of a Virtual Machine

    Microsoft TechFest is going on now.

    About TechFest

    TechFest is an annual event that brings researchers from Microsoft Research’s locations around the world to Redmond to share their latest work with fellow Microsoft employees. Attendees experience some of the freshest, most innovative technologies emerging from Microsoft’s research efforts. The event provides a forum in which product teams and researchers can discuss the novel work occurring in the labs, thereby encouraging effective technology transfer into Microsoft products.

    I used to go when I was a full-time employee, but there are some good things you can learn by visiting the demo site.

    One that caught my eye is from the Networked Embedded Computing group, which has consistently worked on data center energy sensor systems.

    Their latest project is Joulemeter, a project that can measure the energy usage of VMs, servers, clients, and software.

    Joulemeter: VM, Server, Client, and Software Energy Usage

    Joulemeter is a software-based mechanism to measure the energy usage of virtual machines (VMs), servers, desktops, laptops, and even individual software applications running on a computer.

    Joulemeter estimates the energy usage of a VM, computer, or software by measuring the hardware resources (CPU, disk, memory, screen etc) being used and converting the resource usage to actual power usage based on automatically learned realistic power models.

    [Figure: Joulemeter overview]

    Joulemeter can be used for gaining visibility into energy use and for making several power management and provisioning decisions in data centers, client computing, and software design.
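
    To make the “automatically learned realistic power models” idea concrete, here is a minimal sketch of how such a model could be learned: regress measured wall power against resource-usage counters, then use the fitted coefficients to convert live resource usage into watts.  The counter set, sample data, and least-squares choice are my illustrative assumptions, not Joulemeter’s published internals.

        # Minimal sketch of learning a power model: fit measured wall power
        # against resource counters with least squares. Counter names and
        # training data are illustrative assumptions.
        import numpy as np

        # Training samples: [cpu_util_pct, disk_mbps, 1.0] per observation,
        # paired with wall power measured by the machine's power meter.
        X = np.array([
            [10.0,  2.0, 1.0],
            [50.0, 10.0, 1.0],
            [80.0, 30.0, 1.0],
            [95.0,  5.0, 1.0],
        ])
        watts = np.array([135.0, 198.0, 249.0, 264.0])

        # Solve for [watts per CPU %, watts per MB/s of disk I/O, idle watts].
        coef, *_ = np.linalg.lstsq(X, watts, rcond=None)

        def predict_power(cpu_util_pct, disk_mbps):
            """Convert live resource usage to an estimated power draw in watts."""
            return coef @ np.array([cpu_util_pct, disk_mbps, 1.0])

        print(predict_power(60.0, 8.0))

    Once the coefficients are learned against the machine-level meter, the same conversion can be applied to any subset of resource usage, which is what makes per-VM or per-application estimates possible without per-VM hardware meters.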

    For more technical details on the system, here is their paper.

    Virtual Machine Power Metering and Provisioning
    Aman Kansal, Feng Zhao, Jie Liu (Microsoft Research); Nupur Kothari (University of Southern California); Arka Bhattacharya (IIT Kharagpur)

    ABSTRACT
    Virtualization is often used in cloud computing platforms for its several advantages in efficient management of the physical resources. However, virtualization raises certain additional challenges, and one of them is lack of power metering for virtual machines (VMs). Power management requirements in modern data centers have led to most new servers providing power usage measurement in hardware, and alternate solutions exist for older servers using circuit and outlet level measurements. However, VM power cannot be measured purely in hardware. We present a solution for VM power metering. We build power models to infer power consumption from resource usage at runtime and identify the challenges that arise when applying such models for VM power metering. We show how existing instrumentation in server hardware and hypervisors can be used to build the required power models on real platforms with low error. The entire metering approach is designed to operate with extremely low runtime overhead while providing practically useful accuracy. We illustrate the use of the proposed metering capability for VM power capping, leading to significant savings in power provisioning costs that constitute a large fraction of data center power costs. Experiments are performed on server traces from several thousand production servers, hosting Microsoft’s real-world applications such as Windows Live Messenger. The results show that not only does VM power metering allow reclaiming the savings that were earlier achieved using physical server power capping, but also that it enables further savings in provisioning costs with virtualization.
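
    The provisioning-savings argument in the abstract comes down to simple arithmetic: if a power cap can actually be enforced (now at VM granularity), racks can be provisioned against the cap rather than against nameplate peak.  A back-of-the-envelope illustration with assumed numbers:

        # Back-of-the-envelope illustration (assumed numbers, not from the
        # paper) of why enforceable power caps cut provisioning costs: a rack
        # can be filled against the cap instead of the nameplate peak.
        RACK_BUDGET_WATTS = 10_000
        NAMEPLATE_PEAK_WATTS = 500   # worst-case draw per server (assumed)
        ENFORCED_CAP_WATTS = 350     # cap enforced via VM-level metering/throttling

        servers_without_cap = RACK_BUDGET_WATTS // NAMEPLATE_PEAK_WATTS
        servers_with_cap = RACK_BUDGET_WATTS // ENFORCED_CAP_WATTS

        print(servers_without_cap, servers_with_cap)  # 20 vs 28 servers per rack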

    Note there will be a desktop and laptop version available soon.

    Download: A freely downloadable version of the Joulemeter software that measures laptop and desktop energy usage will be available in a few weeks. Watch this space!

    But I want to get access to the VM, server, and software versions for the data center.  Maybe I can get this group involved with GreenM3 as it transitions into an NPO.  The University of Missouri is another connection, as a Mizzou professor worked with the Microsoft researchers in a prior job developing sensor networks.
