Wednesday, March 3, 2010

    Open Source Data Center Initiative Story by Mike Manos

I wrote a post announcing GreenM3 partnering with the University of Missouri and ARG Investments, with Mike Manos as an industry advisor. I spent a few paragraphs explaining the use of an open source software model applied to data centers.

Mike Manos took the time to write his own post in response to mine, and it is a well-written story that explains why we are using this approach.

    My first reaction was to cut and paste relevant parts and add comments, but the whole story makes sense. So for a change, I am going to copy his whole post below to make sure we have it in two places.

    Open Source Data Center Initiative

    March 3, 2010 by mmanos

There are many in the data center industry who have repeatedly called for change in this community of ours. Change in technology, change in priorities, change for the future. Over the years we have seen those changes come very slowly, and while they are starting to move a little faster now (primarily due to the economic conditions and scrutiny over budgets more so than a desire to evolve our space), our industry still faces challenges and resistance to forward progress. There are lots of great ideas and lots of forward thinking, but moving this work to execution, and educating business leaders as well as data center professionals to break away from those old stand-by accepted norms, has not gone well.

That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a not-for-profit, data center specific organization. You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3. Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas and by standouts doing great work. More importantly, it doesn't stop there. We have been able to put together quite a team of industry heavyweights to get involved in this effort. Those announcements are forthcoming, and when they arrive, I think you will get a sense of the type of sea change this effort could potentially have.

One of the largest challenges we have with regard to data centers is education. Those of you who follow my blog know that I believe some engineering and construction firms are incented not to change or to implement new approaches. The cover of complexity allows customers to remain in the dark while innovation is stifled. The forces that desire to maintain an aura of black-box complexity around this space, and that repeatedly speak to the arcane arts of building out data center facilities, have been at this a long time. To them, the interplay of systems requiring one-off monumental temples to technology on every single build is the norm. It's how you maximize profit and keep yourself in a profitable position.

When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, the Uptime Institute, Data Center Pulse, or the other industry groups. Essentially, was this going to be yet another competing thought-leadership organization? The very specific answer to this is no, absolutely not.

These groups have been out espousing best practices for years. They have embraced different technologies, they have tried to educate the industry, and they have been pushing for change (for the most part). They do a great job of highlighting the challenges we face, but for the most part they have waited around for universal goodwill and monetary pressures to make those changes happen. It dawned on us that there was another way. You need to ensure that you build something that gains mindshare, that gets the attention of business leadership, and that causes a paradigm shift. As we put the pieces together we realized that the solution had to be credible, technical, and above all have a business case around it. It seemed to us that the parallels to the Open Source movement and the applicability of that approach were a perfect match.

To be clear, this Open Source Data Center Initiative is focused on execution. It is focused on putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly put together, and moreover on standardizing the way that both end users and engineering firms approach the data center industry.

Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness. More than all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize the conversations.

If you think of the Linux movement, and all of those who actively participate in submitting enhancements and features, even pulling together specific build packages for distribution, one can see such things emerging in the data center engineering realm. In fact, with the myriad of emerging technologies assisting in greater energy efficiency, greater densities, differences in approach to economization (air or water), and use or non-use of containers, it is easy to see the potential for this component-based design.

One might think that we are effectively trying to put formal engineering firms out of business with this kind of work. I would argue that this is definitely not the case. While it may have the effect of removing some of the extra profit that results from the current 'complexity' factor, this initiative should specifically drive common requirements, lead to better educated customers, drive specific standards, and result in real-world testing and data from the manufacturing community. Plus, as anyone who has ever actually built a data center knows, the devil is in the localization and the details. And as this is an open source initiative, we will not be formally signing the drawings from a professional engineering perspective.

Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a 'package' or 'RPM,' if I can steal a term from the Red Hat Linux nomenclature. Moreover, we will be able to start driving true visibility of costs, both upfront and operating, and associate those costs with the set designs, with differences and trending from regions around the world. If it's successful, it could be a very good thing.
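To make the 'package' analogy a bit more concrete, here is a minimal sketch, assuming a hypothetical metadata format of my own invention (the initiative has not published one), of the kind of cost and regional data a submitted design component could carry:

```python
from dataclasses import dataclass, field

@dataclass
class DesignPackage:
    """Hypothetical metadata for one reusable data center design component."""
    name: str                 # e.g. "air-side-economizer-v1" (made-up identifier)
    vendor: str
    capex_usd: float          # up-front cost for the reference configuration
    opex_usd_per_year: float  # modeled operating cost
    regions_tested: list[str] = field(default_factory=list)

    def total_cost(self, years: int) -> float:
        """Simple lifetime cost used to compare packages across submissions."""
        return self.capex_usd + self.opex_usd_per_year * years

# Comparing two invented submissions over a five-year horizon.
economizer = DesignPackage("air-side-economizer-v1", "VendorA", 250_000, 40_000, ["US-MO"])
chiller = DesignPackage("packaged-chiller-v2", "VendorB", 180_000, 95_000, ["US-MO", "EU-NL"])
for pkg in (economizer, chiller):
    print(pkg.name, pkg.total_cost(years=5))
```

The point of such a structure is only that costs, both upfront and operating, travel with the design itself, so packages can be compared across regions the way the post describes.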

We are not naive about this, however. We certainly expect there to be some resistance to this approach, and in fact some outright negativity from those firms that make the most of the black-box complexity.

    We will have more information on the approach and what it is we are trying to accomplish very soon. 

    \Mm


Wednesday, March 3, 2010

Microsoft Research Paper: Measuring Energy Use of a Virtual Machine

    Microsoft TechFest is going on now.

    About TechFest

TechFest is an annual event that brings researchers from Microsoft Research’s locations around the world to Redmond to share their latest work with fellow Microsoft employees. Attendees experience some of the freshest, most innovative technologies emerging from Microsoft’s research efforts. The event provides a forum in which product teams and researchers can discuss the novel work occurring in the labs, thereby encouraging effective technology transfer into Microsoft products.

I used to go when I was a full-time employee, but there are still some good things you can learn from the demo site.

One that caught my eye is the Network Embedded Computing group, which has consistently worked on data center energy sensor systems.

Their latest project is Joulemeter, which can measure the energy usage of VMs, servers, clients, and software.

    Joulemeter: VM, Server, Client, and Software Energy Usage

Joulemeter is a software-based mechanism to measure the energy usage of virtual machines (VMs), servers, desktops, laptops, and even individual software applications running on a computer.

Joulemeter estimates the energy usage of a VM, computer, or software by measuring the hardware resources (CPU, disk, memory, screen, etc.) being used and converting the resource usage to actual power usage based on automatically learned, realistic power models.
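As a rough illustration of that resource-to-power conversion, here is a minimal sketch of a linear power model. The structure and coefficients are invented for illustration only; Joulemeter learns its model parameters automatically rather than using fixed numbers like these.

```python
# Hedged sketch of a resource-to-power model in the spirit of Joulemeter.
# The coefficients below are made up; in practice they would be fit against
# readings from a wall power meter or server-embedded power sensor.
def estimate_power_watts(cpu_util, disk_mbps, mem_gb_active,
                         idle_w=45.0, cpu_w=35.0,
                         disk_w_per_mbps=0.05, mem_w_per_gb=0.3):
    """Convert observed resource usage into an estimated power draw.

    cpu_util:      average CPU utilization, 0.0 to 1.0
    disk_mbps:     average disk throughput
    mem_gb_active: actively used memory in GB
    """
    return (idle_w
            + cpu_w * cpu_util
            + disk_w_per_mbps * disk_mbps
            + mem_w_per_gb * mem_gb_active)

# Example: a machine at 60% CPU, 20 MB/s disk, 8 GB active memory.
print(estimate_power_watts(cpu_util=0.60, disk_mbps=20.0, mem_gb_active=8.0))
```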

    Joulemeter overview

    Joulemeter can be used for gaining visibility into energy use and for making several power management and provisioning decisions in data centers, client computing, and software design.

For more technical details on the system, here is their paper.

Virtual Machine Power Metering and Provisioning

Aman Kansal, Feng Zhao, Jie Liu (Microsoft Research); Nupur Kothari (University of Southern California); Arka Bhattacharya (IIT Kharagpur)

ABSTRACT

Virtualization is often used in cloud computing platforms for its several advantages in efficient management of the physical resources. However, virtualization raises certain additional challenges, and one of them is lack of power metering for virtual machines (VMs). Power management requirements in modern data centers have led to most new servers providing power usage measurement in hardware, and alternate solutions exist for older servers using circuit- and outlet-level measurements. However, VM power cannot be measured purely in hardware. We present a solution for VM power metering. We build power models to infer power consumption from resource usage at runtime and identify the challenges that arise when applying such models for VM power metering. We show how existing instrumentation in server hardware and hypervisors can be used to build the required power models on real platforms with low error. The entire metering approach is designed to operate with extremely low runtime overhead while providing practically useful accuracy. We illustrate the use of the proposed metering capability for VM power capping, leading to significant savings in power provisioning costs that constitute a large fraction of data center power costs. Experiments are performed on server traces from several thousand production servers, hosting Microsoft’s real-world applications such as Windows Live Messenger. The results show that not only does VM power metering allow reclaiming the savings that were earlier achieved using physical server power capping, but also that it enables further savings in provisioning costs with virtualization.
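To illustrate the VM power metering idea from the abstract, here is a hedged sketch of one way a server's measured power could be split across its VMs in proportion to their modeled dynamic (above-idle) usage. This is my own simplification for illustration, not the paper's exact method.

```python
# Hedged sketch: attribute a server's measured power to individual VMs.
# Each VM gets an equal share of the idle power plus a share of the dynamic
# power budget proportional to its modeled dynamic consumption.
def attribute_vm_power(server_power_w, server_idle_w, vm_dynamic_estimates_w):
    dynamic_total = sum(vm_dynamic_estimates_w.values()) or 1.0
    dynamic_budget = max(server_power_w - server_idle_w, 0.0)
    idle_share = server_idle_w / len(vm_dynamic_estimates_w)
    return {vm: idle_share + dynamic_budget * (est / dynamic_total)
            for vm, est in vm_dynamic_estimates_w.items()}

# Example: a 320 W server with a 150 W idle floor hosting three VMs whose
# modeled dynamic draws are 60, 30, and 10 W.
print(attribute_vm_power(320.0, 150.0, {"vm-a": 60.0, "vm-b": 30.0, "vm-c": 10.0}))
```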

    Note there will be a desktop and laptop version available soon.

Download: A freely downloadable version of the Joulemeter software that measures laptop and desktop energy usage will be available in a few weeks. Watch this space!

But I want to get access to the VM, server, and software versions for the data center. Maybe I can get this group involved with GreenM3 as it transitions into an NPO. The University of Missouri is another connection, as a Mizzou professor worked with the Microsoft researchers in a prior job developing sensor networks.


Wednesday, March 3, 2010

    Microsoft’s Flexible Data Center System, Kevin Timmons presents at DCD NY

    I couldn’t make it to DataCenterDynamics NY, but I have plenty of friends there, so I can get a virtual report.

    Kevin Timmons gave the keynote and Rich Miller wrote up a nice entry.

    Microsoft’s Timmons: ‘Challenge Everything’

    March 3rd, 2010 : Rich Miller

The building blocks for Microsoft’s data center of the future can be assembled in four days, by one person. The two proof-of-concept data center containers, known as IT PACs (short for pre-assembled components), are built entirely from aluminum. The first two proof-of-concept units use residential garden hoses for their water hookups.

    “Challenge everything you know about a traditional data center,” said Kevin Timmons, who heads Microsoft’s Global Foundation Services, in describing the company’s approach to building new data centers. “From the walls to the roof to where it needs to be built, challenge everything.”

So much of what is wrong with data centers, and what prevents them from being green, is that people do what they have done in the past. This includes the engineering companies and the customers who specify the data centers. You don’t hear customers saying, “Bring me a data center design no one has done before.”

A low PUE (sub 1.2) is now a given for data center efficiency, but with cloud computing and mobile as the top drivers of data center growth, how quickly you can add capacity is a higher-priority requirement for executive decision makers.
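For readers new to the metric, PUE is simply total facility power divided by IT equipment power, so "sub 1.2" means less than 20 percent of the power goes to anything other than the IT load. A trivial check, with invented numbers:

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

# A facility drawing 1,200 kW overall to serve 1,000 kW of IT load
# sits exactly at the 1.2 mark mentioned above.
print(pue(1200, 1000))  # 1.2
```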

    Here is a video showing some of the concepts Microsoft has been willing to share.


And here is the blog post from Microsoft’s Daniel Costello.

Then we had to take these single lines and schematics and break them into logical modules for the components to reside in. This may seem easy but represents a shift in thinking from a building where, for instance, we would have a UPS room and associated equipment and switchgear manufactured by multiple vendors and put it physically in sometimes separate modules. The challenge became how to shift from a traditional construction mindset to the new, modularized manufacturing mindset.

Maintainability is a large part of reliability in a facility, and became a key differentiator between the four classes. Our A Class infrastructure, which is not concurrently maintainable and is on basically street power and unconditioned air, will require scheduled downtime for maintenance. The cost, efficiency, and time-to-market targets for A Class are very aggressive and a fraction of what the industry has come to see as normal today.

We realized that standardization and reuse of components from one class to the next was a key to improving cost and efficiency. Our premise was that the same kit of parts (or modules) should be usable from class to class. These modules (in this new mindset) can be added to other modules to transition within the data center from one class to the next.

I would call this a Flexible Data Center System. This has been done in manufacturing with flexible manufacturing systems for decades and is just now coming to data center design.

    A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in the case of changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, which both contain numerous subcategories.

The advantages of this system are:

Advantages

Faster throughput, lower cost per unit, greater labor productivity, greater machine efficiency, improved quality, increased system reliability, reduced parts inventories, and adaptability to CAD/CAM operations.

With one disadvantage:

Disadvantages

Cost to implement.

But in data centers, the cost to implement can be lower than for traditional data centers if enough people adopt the approach. And whereas the flexibility in manufacturing typically applies to the product being produced, here the flexibility concepts are being applied to the data center infrastructure itself.
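As a toy illustration of the "kit of parts" idea in Costello's quote above, here is a sketch in which the same modules are reused and a class is just a different combination of them. The module names, relative costs, and class mixes are invented for illustration, not Microsoft's actual design.

```python
# Hedged sketch of a kit-of-parts catalog: classes differ only in which
# modules they combine, so cost and maintainability fall out of the mix.
MODULES = {
    "it_pac":            {"capex": 2.0, "concurrently_maintainable": True},
    "ups_module":        {"capex": 1.5, "concurrently_maintainable": True},
    "street_power_feed": {"capex": 0.3, "concurrently_maintainable": False},
    "free_air_module":   {"capex": 0.4, "concurrently_maintainable": False},
}

CLASSES = {
    "A": ["it_pac", "street_power_feed", "free_air_module"],  # cheapest, needs scheduled downtime
    "B": ["it_pac", "ups_module", "free_air_module"],
}

for cls, parts in CLASSES.items():
    capex = sum(MODULES[p]["capex"] for p in parts)
    maintainable = all(MODULES[p]["concurrently_maintainable"] for p in parts)
    print(cls, f"relative_capex={capex}", f"concurrently_maintainable={maintainable}")
```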

And what else is changing is the hardware that goes into these data centers. Microsoft’s Dileep Bhandarkar discusses this here.

    IT departments are strapped for resources these days, and server rightsizing is something every team can do to stretch their budgets. The point of my presentations and the white paper our team is publishing today is two-fold:

    1. To quantify some of the opportunities and potential pitfalls as you look for savings, and

    2. To present best practices from our experiences at Microsoft, where the group I lead manages server purchases for the large production data centers behind Microsoft’s wide array of online, live and cloud services.


Tuesday, March 2, 2010

    News as nodes in a social network

Slate has a website that presents news events as nodes in a social network, showing the relationships between pieces of information.

    News Dots: The Day's Events as a Social Network

    An interactive map of how every story in the news is related, updated daily.

    By Chris Wilson

    Like Kevin Bacon's co-stars, topics in the news are all connected by degrees of separation. To examine how every story fits together, News Dots visualizes the most recent topics in the news as a giant social network. Subjects—represented by the circles below—are connected to one another if they appear together in at least two stories, and the size of the dot is proportional to the total number of times the subject is mentioned.
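The construction Slate describes is essentially a co-occurrence graph. Here is a minimal sketch with invented story data that links two subjects when they appear together in at least two stories and sizes each subject by its total mention count:

```python
from collections import Counter
from itertools import combinations

# Invented example stories, each tagged with the subjects it mentions.
stories = [
    {"health care", "congress", "white house"},
    {"congress", "white house"},
    {"health care", "congress"},
]

# Node size: how many stories mention each subject.
mentions = Counter(tag for story in stories for tag in story)

# Edge: two subjects appearing together in at least two stories.
pair_counts = Counter(frozenset(pair)
                      for story in stories
                      for pair in combinations(sorted(story), 2))
edges = [tuple(pair) for pair, n in pair_counts.items() if n >= 2]

print(mentions)  # node sizes
print(edges)     # connected subject pairs
```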



Tuesday, March 2, 2010

Defending your ideas makes it harder to change your mind: give up your mental space

A friend forwarded the Vanity Fair article about Michael Burry, who is featured in the book “The Big Short: Inside the Doomsday Machine” by Michael Lewis. It is an interesting story about a value investor who found his own niche.

Interestingly, I found his investment philosophy similar to my own views.

    He was genuinely interested in computers, not for their own sake but for their service to a lifelong obsession: the inner workings of the stock market. Ever since grade school, when his father had shown him the stock tables at the back of the newspaper and told him that the stock market was a crooked place and never to be trusted, let alone invested in, the subject had fascinated him. Even as a kid he had wanted to impose logic on this world of numbers. He began to read about the market as a hobby. Pretty quickly he saw that there was no logic at all in the charts and graphs and waves and the endless chatter of many self-advertised market pros. Then along came the dot-com bubble and suddenly the entire stock market made no sense at all. “The late 90s almost forced me to identify myself as a value investor, because I thought what everybody else was doing was insane,” he said. Formalized as an approach to financial markets during the Great Depression by Benjamin Graham, “value investing” required a tireless search for companies so unfashionable or misunderstood that they could be bought for less than their liquidation value. In its simplest form, value investing was a formula, but it had morphed into other things—one of them was whatever Warren Buffett, Benjamin Graham’s student and the most famous value investor, happened to be doing with his money.

    Burry did not think investing could be reduced to a formula or learned from any one role model. The more he studied Buffett, the less he thought Buffett could be copied. Indeed, the lesson of Buffett was: To succeed in a spectacular fashion you had to be spectacularly unusual. “If you are going to be a great investor, you have to fit the style to who you are,” Burry said. “At one point I recognized that Warren Buffett, though he had every advantage in learning from Ben Graham, did not copy Ben Graham, but rather set out on his own path, and ran money his way, by his own rules.… I also immediately internalized the idea that no school could teach someone how to be a great investor. If it were true, it’d be the most popular school in the world, with an impossibly high tuition. So it must not be true.”

As part of Burry’s success he figured out the sub-prime market collapse, and he ran into the problem of defending his approach. He makes an excellent point about the problem of defending one’s ideas.

    Inadvertently, he’d opened up a debate with his own investors, which he counted among his least favorite activities. “I hated discussing ideas with investors,” he said, “because I then become a Defender of the Idea, and that influences your thought process.” Once you became an idea’s defender, you had a harder time changing your mind about it.

Which brings up a lesson I’ve been waiting to write a blog entry about. One of the biggest lessons I learned in Aikido, the one that sticks in my mind, came when, trying to do a particular technique in training, I would get stuck and fight too hard, which creates more tension and destroys the energy flow. The sensei (teacher) came over and said, “Move, give up the space you are in.” It clicked: the attacker is attacking the space I occupy. If you give up that space to the attacker, blend, and move with the energy, you can successfully complete the technique.

This lesson has stuck with me for years and years: be willing to give up your space to an attack. Defending a mental space does influence your thought process.

BTW, know who else is a black belt Aikidoist? Jonathan Koomey. He trained at Berkeley Aikikai; my training was at Aikido San Jose. Maybe one of these days when Jonathan and I are at the same conference we can have some fun with some Aikido techniques.

    An Energy and Resources Group graduate student and Dr. Koomey created the first peer-reviewed analysis of power use in high-density computing facilities (13, 14).  Dr. Koomey maintains active collaborations with industry leaders about the electricity used by data centers and has published recent studies on the total power used by these facilities (19, 20) and the total costs of these facilities (21).

    I know at least one person will get this idea of “giving up the mental space to win.” :-)
