Tip for Building Green Applications

I just had a conversation with an industry magazine editor and had a good time discussing green IT. One of the topics we talked about was green applications, and how difficult it is to get developers to write greener code. There are no hard and fast rules as to what code is greener, but after the discussion it hit me that there is one way to make a greener app.

Problem: Developers are removed from the production environment and the resources it consumes. As CapEx and OpEx for power and cooling infrastructure increase, developers remain unaware that their application's energy use is a dominant cost beyond the cost of the hardware.

Opportunity: How do you get software developers to change their behavior?

Solution: If you use cloud computing services like Amazon Web Services or Google App Engine, the costs for compute, storage, and network use are easy to calculate, because that is their chargeback model. As the application is developed, tested, and put into production, the cost of running the solution is a metered number that has a direct relationship to the code.
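
To make that concrete, here is a minimal sketch of what a metered chargeback calculation looks like. The rates and usage figures are placeholders for illustration, not actual AWS or GAE pricing.

```python
# Minimal sketch of a metered chargeback calculation.
# The rates below are placeholders for illustration, not real AWS/GAE pricing.
RATES = {
    "compute_hours": 0.10,   # $ per instance-hour (assumed)
    "storage_gb": 0.15,      # $ per GB-month (assumed)
    "transfer_gb": 0.17,     # $ per GB transferred (assumed)
}

def monthly_cost(usage):
    """Total the bill from metered usage, the way a cloud provider would."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"compute_hours": 1440, "storage_gb": 200, "transfer_gb": 500}
print(f"Monthly cost: ${monthly_cost(usage):.2f}")  # Monthly cost: $259.00
```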

Give your developers a cost-per-unit-of-work budget. If you are a startup, you have a limited budget, and if you are running a site where advertising is one of your main revenue streams, your profit is the difference between your revenue and your costs. The last thing you want is a site that costs more to run per click event than an advertising click brings in.
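
As a hypothetical budget check, divide the metered bill by the units of work served and compare it against revenue per unit. Every number here is made up, purely to show the arithmetic.

```python
# Hypothetical budget check: does each click cost more than it earns?
# All numbers are illustrative, not real figures.
monthly_bill = 259.00        # metered cost from the cloud provider
clicks_served = 1_000_000    # units of work in the same period
revenue_per_click = 0.0005   # advertising revenue per click event (assumed)

cost_per_click = monthly_bill / clicks_served
margin = revenue_per_click - cost_per_click
print(f"Cost per click:   ${cost_per_click:.6f}")
print(f"Margin per click: ${margin:.6f}")  # negative means the site loses money per click
```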

This applies to few enterprise developers today, but more and more companies are starting up on AWS and GAE, training a set of developers who are learning to be efficient with compute, storage, and network. Do you think these developers waste storage and network bandwidth needlessly moving data around?

If you can build an internal chargeback system for compute, storage, and network, charging departments their direct costs, then you too can motivate your developers.
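
Here is a sketch of the same idea turned inward, assuming you can meter usage per department; the departments and internal rates are invented for illustration.

```python
# Sketch of an internal chargeback report: meter usage per department
# and bill it at internal rates. Departments and rates are invented.
INTERNAL_RATES = {"compute_hours": 0.12, "storage_gb": 0.20, "transfer_gb": 0.15}

metered_usage = {
    "marketing":   {"compute_hours": 300, "storage_gb": 50,  "transfer_gb": 120},
    "engineering": {"compute_hours": 900, "storage_gb": 400, "transfer_gb": 80},
}

for dept, usage in metered_usage.items():
    charge = sum(INTERNAL_RATES[r] * amt for r, amt in usage.items())
    print(f"{dept:12s} ${charge:8.2f}")
```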

If you think it is too much work to build your own system, here is a more radical idea: use AWS or GAE as your development environment. You can still roll out the production system in your own environment, but you will have used AWS/GAE to measure your solution and outsourced the development environment. The guys at Skytap could support this model if they make changes to their chargeback model.

This is a good opportunity for the management tools companies like CA, HP, and IBM to support as well.


PUE/DCiE Monitoring Tool - EDSA's Paladin System

EDSA announces a PUE/DCiE monitoring tool.

EDSA Introduces PUE-DCiE Advisor for Increasing Data Center Energy Efficiency

EDSA’s Paladin® Live™ is one of the first power analytics software programs to automatically calculate and present power efficiency ratings, using standards published by The Green Grid.

SAN DIEGO, Calif. – August 21, 2008 – EDSA Corp., developers of the Paladin® platform of power analytics™ software, today announced the release of its Paladin® Live PUE-DCiE Advisor™ in support of the new electrical power efficiency standards developed by the global IT association, The Green Grid. The Green Grid is a collaborative organization of approximately 200 leading technology organizations, led by such firms as AT&T, Cisco, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and others for whom mission-critical computing and environmental responsibility are driving corporate priorities.


To embrace the organization’s standards, EDSA’s Paladin Live power systems diagnostics platform has been enhanced to automatically present users with The Green Grid’s Power Usage Effectiveness (PUE) and data center infrastructure efficiency (DCiE) ratings in real time. Used in conjunction with Paladin Live’s new Paladin® BlackBoard™ option, users gain the valuable ability to take a baseline model of their operations and, in real time, make changes or propose “what if” scenarios, projecting the future impact of those changes on the calculation of PUE or DCiE.

This looks nice on the surface. I would be more interested in a diagram that shows all the monitoring points that support their PUE/DCiE calculations.

The Paladin Live PUE-DCiE Advisor feature attacks the problem of energy inefficiency in two ways: first, the Company’s Paladin® DesignBase™ computer-aided design (CAD) modeler allows power systems engineers to design, simulate, and analyze the power systems model to optimize energy usage prior to construction. Once the facility is operational, EDSA’s Paladin® Live™ platform continually diagnoses the facility’s performance by benchmarking it back against the design model. This continual comparison of design specifications with actual operating parameters helps to ensure that anomalies are quickly identified, isolated, and resolved.

By presenting its findings in PUE and DCiE format, Paladin Live helps data center operators, for the first time, to make informed, real-time decisions about the energy efficiency of their facilities, and develop actionable strategies for ensuring that their operations are as failsafe and energy efficient as possible.

As most of you know, how you collect the numbers is critical to getting a good PUE calculation.
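
The formulas themselves are simple; the hard part, as noted, is the quality of the meter readings. A quick sketch using The Green Grid's definitions (PUE = total facility power / IT equipment power; DCiE is the inverse, expressed as a percentage), with example readings:

```python
# PUE and DCiE per The Green Grid's definitions.
# The meter readings below are example values, not real measurements.
total_facility_kw = 1500.0  # everything the utility meter sees
it_equipment_kw = 750.0     # servers, storage, network gear only

pue = total_facility_kw / it_equipment_kw           # lower is better, 1.0 is ideal
dcie = (it_equipment_kw / total_facility_kw) * 100  # higher is better, 100% is ideal
print(f"PUE:  {pue:.2f}")    # PUE:  2.00
print(f"DCiE: {dcie:.1f}%")  # DCiE: 50.0%
```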

Digging a little further, I found their Power DesignBase product on this page. Here is the feature matrix of their different versions.

[Image: DesignBase feature matrix]


Cloud Computing Blogs

DataCenterKnowledge has a good list of Cloud Computing Blogs. If you doubt whether Cloud Computing is game changing, the fact that there are this many bloggers will get you thinking.

  • Elastic Vapor: The ramblings of Reuven Cohen, co-Founder & CTO of Enomaly (ElasticDrive). A consistently interesting read.
  • Cloud Musings: Thoughts on cloud computing from Kevin Jackson, a senior IT technologist specializing in government requirements.
  • Cloudy Thoughts: Markus Klems writes about cloud computing, grids, distributed programming and agile Web development.
  • Geva Perry's Blog: Views on cloud computing from Geva, the Chief Marketing Officer at GigaSpaces Technologies.
  • Cloud Security: Craig Balding provides an important look at the security implications of clouds and how they interact with one another.
  • IT Management and Cloud Blog: The view from the clouds from technologist and author John Willis.
  • Perspectives: Excellent blog from James Hamilton of Microsoft, with great content on scalability and databases as well as cloud computing.
  • The Wisdom of Clouds: James Urquhart's thoughts on "Cloud Computing and Utility Computing for the Enterprise and the Individual." Very informative.
  • GigaOm: Om Malik and company provide strong coverage of cloud computing services and business models.
  • Smoothspan Blog: Blog by Bob Warfield focused on "radical technology innovation with equally radical business model innovation to literally reinvent software."
  • Production Scale: Blog from Kent Langley at SolutionSet tracking scalable web infrastructure and cloud computing.
  • Rough Type: From Nicholas Carr, author of "The Big Switch." Nick is in low-post mode for the summer, but is one of the best sources on clouds and SaaS.
  • Server Farming: A TechTarget blog on grid computing, supercomputing and utility computing.
  • Sam Johnston: Lots of good info on cloud computing, especially the recent trademark flap.
  • Salesforce Times: All Salesforce, all the time, founded by Adam Killam of Vancouver.
  • William Vambenepe: Views on grids, clouds and databases from an Oracle architect.
  • Hot Cluster: Topical blog from Bert Armijo and Barry Lynn of 3Tera, which makes the AppLogic "cloud OS" software.
  • OnSaaS: Links and more from Jian Zhen of LogLogic and Michael Mucha from Stanford Hospital.
  • In The Clouds: Cloud computing blog from Dell, where the company shares information about its cloud servers and services.

As Rich Miller says, he is monitoring this list:

It's a lot to read. If you can't track that many blogs each day, just keep reading Data Center Knowledge! We're monitoring these sites, as well as numerous searches on topics of interest in the Data Center. News tips and backlinks are always welcome!


Green Workflows

Workflows create an opportunity for efficiency and minimizing waste. Unfortunately, many workflows are difficult to change and bureaucratic, requiring users to go through convoluted steps to get their work done. The cause is often IT systems that are inflexible and do not adapt to change.

The Seattle Times has an article about a workflow software company that puts workflow development in the hands of the users.

Straw that stirs the drink: Even with improved server software, all would be for naught without robust applications. Nintex makes it easy on a small company, allowing it to develop custom applications without an IT department to create complex workflow procedures. "People who own the process can tailor it to their needs, and can manage their own technological portfolio," Campbell said.

One side would argue that users know little about developing workflow software, and that putting this capability in their hands will not work. But this is also a closed-loop system where the people who define the workflows use them, and they can change them. So, even if they fail at first, they can fix the issues.

This creates an organic learning environment which will allow the system to adapt to the environment.

Nintex co-founder Brett Campbell (http://www.nintex.com/) even makes a point about being an agile company.

Harboring innovation: During Nintex's collaboration with Microsoft, Campbell said he has seen the software giant go from turning like a speedboat into something closer to an aircraft carrier. "But they still need agile partners," he said, "and that's where we come in."

This is one area I have been meaning to research.


Living Data Center, Skanska's Tool

A great book to read to think about Green Data Center principles is The Living Company. It has 5 stars from 21 customer reviews on Amazon.com.

Here is an excerpt from a BusinessWeek post.

After all of our detective work, we found four key factors in common:

1. Long-lived companies were sensitive to their environment. Whether they had built their fortunes on knowledge (such as DuPont's technological innovations) or on natural resources (such as the Hudson Bay Company's access to the furs of Canadian forests), they remained in harmony with the world around them. As wars, depressions, technologies, and political changes surged and ebbed around them, they always seemed to excel at keeping their feelers out, tuned to whatever was going on around them. They did this, it seemed, despite the fact that in the past there were little data available, let alone the communications facilities to give them a global view of the business environment. They sometimes had to rely for information on packets carried over vast distances by portage and ship. Moreover, societal considerations were rarely given prominence in the deliberations of company boards. Yet they managed to react in timely fashion to the conditions of society around them.

2. Long-lived companies were cohesive, with a strong sense of identity. No matter how widely diversified they were, their employees (and even their suppliers, at times) felt they were all part of one entity. One company, Unilever, saw itself as a fleet of ships, each ship independent, yet the whole fleet stronger than the sum of its parts. This sense of belonging to an organization and being able to identify with its achievements can easily be dismissed as a "soft" or abstract feature of change. But case histories repeatedly showed that strong employee links were essential for survival amid change. This cohesion around the idea of "community" meant that managers were typically chosen for advancement from within; they succeeded through the generational flow of members and considered themselves stewards of the longstanding enterprise. Each management generation was only a link in a long chain. Except during conditions of crisis, the management's top priority and concern was the health of the institution as a whole.

3. Long-lived companies were tolerant. At first, when we wrote our Shell report, we called this point "decentralization." Long-lived companies, as we pointed out, generally avoided exercising any centralized control over attempts to diversify the company. Later, when I considered our research again, I realized that seventeenth-, eighteenth-, and nineteenth-century managers would never have used the word decentralized; it was a twentieth-century invention. In what terms, then, would they have thought about their own company policies? As I studied the histories, I kept returning to the idea of "tolerance." These companies were particularly tolerant of activities on the margin: outliers, experiments, and eccentricities within the boundaries of the cohesive firm, which kept stretching their understanding of possibilities.

4. Long-lived companies were conservative in financing. They were frugal and did not risk their capital gratuitously. They understood the meaning of money in an old-fashioned way; they knew the usefulness of having spare cash in the kitty. Having money in hand gave them flexibility and independence of action. They could pursue options that their competitors could not. They could grasp opportunities without first having to convince third-party financiers of their attractiveness.

I've been meaning to write about some of these ideas, and yesterday at Data Center Dynamics Seattle I met Jakob Carnemark from Skanska, who embraces these ideas with tools they are developing for the Green Data Center, enabling a Living Data Center built on the same principles used in "The Living Company".

Skanska's approach of continuous process improvement fits what I have written in the past: being green is not a binary decision, but a commitment.

I am going to follow up with Jakob to get more details on how they create metrics, monitoring, and modeling. His 3M's versus my M3 (monitoring, metering, managing).
