Pacific NW gets $89 million of the $620 million in DOE Smart Grid grants

Pacific Northwest National Laboratory will manage the Pacific Northwest Smart Grid Demonstration project.

NW power grid project gets $89 million from DOE

By The Associated Press

RICHLAND — A project to examine how high technology can improve the Pacific Northwest's electric power grid has received an $88.8 million grant from the U.S. Department of Energy.

The money, to help pay for the $177.6 million Pacific Northwest Smart Grid Demonstration Project, was the largest among 32 grants DOE announced Tuesday as part of $620 million in stimulus aid.

The grant will go to Battelle Memorial Institute's Pacific Northwest National Laboratory in Richland, which will manage the project. The remainder of the project's cost will be borne by energy providers, utilities, technology companies and research organizations taking part.

Electricity Infrastructure Operations Center

(Photo: PNNL - Pacific Northwest National Laboratory)

The Electricity Infrastructure Operations Center at PNNL is a user-based facility dedicated to energy and hydropower research, operations training and back-up resources for energy utilities and industry groups.

Smart meters are part of the project.

Among those taking part in the project are the campuses of the University of Washington in Seattle and Washington State University in Pullman. At both schools, "smart meters" will be installed to provide real-time information on power consumption, along with software and other gear to automate and monitor the electricity distribution system.
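
To make that "real-time information on power consumption" concrete, here is a minimal Python sketch of polling a building meter for an instantaneous demand reading. The endpoint URL and the demand_kw field name are hypothetical stand-ins; actual smart-meter APIs vary by vendor and deployment.

# Hedged sketch: poll a (hypothetical) campus smart-meter API for the
# current demand reading. Endpoint and field names are illustrative.
import json
from urllib.request import urlopen

def current_building_load_kw(meter_host):
    """Fetch an instantaneous demand reading from a hypothetical meter API."""
    with urlopen(f"http://{meter_host}/api/reading") as resp:
        reading = json.load(resp)
    return reading["demand_kw"]  # hypothetical field name

print(current_building_load_kw("meter.example.edu"))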

I wonder if anyone has thought of including the Pacific NW data centers in Washington and Oregon in the project? The problem is that almost all the big data center operators wouldn't want the public to know the power consumption of their data centers.

I hope someone proves me wrong and signs up with PNNL.


APC Smart-UPS update, greener and intelligent

APC released new versions of its Smart-UPS with greener features and more intelligence.


I had a ten-year-old Smart-UPS but stopped using it with my desktop machines to save energy; these latest UPS models, though, are 97% efficient.

And these latest versions are smarter than my old UPS, with "user-friendly features like an LCD interface with diagnostic capabilities and advanced energy management that delivers clear and timely energy consumption metrics."

I got a chance to talk to Ray Munkelwitz, product manager for APC by Schneider Electric, and he says the life of the batteries is extended as well.

This technology adjusts its estimate of a battery's lifetime based on environmental conditions, providing advance notification as a battery nears end of life. Batteries that do reach end of life are recycled.
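
Munkelwitz didn't spell out the algorithm behind that notification, but a common rule of thumb for sealed lead-acid batteries (my assumption here, not APC's published method) is that service life roughly halves for every 10 degrees C of ambient temperature above the 25 C rating. A minimal Python sketch of that kind of temperature-derated estimate:

def estimated_battery_life_years(rated_years=4.0, ambient_c=25.0, halving_c=10.0):
    """Rough rule-of-thumb estimate: lead-acid battery life halves
    for every ~10 C of ambient temperature above the 25 C rating."""
    return rated_years * 0.5 ** max(0.0, (ambient_c - 25.0) / halving_c)

print(estimated_battery_life_years(ambient_c=35.0))  # ~2.0 years at 35 C

A battery rated for 4 years at 25 C would be expected to last only about 2 years in a 35 C closet, which is why a temperature-aware estimate gives more useful end-of-life notification than a fixed calendar schedule.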

Smart-UPS models feature a user-replaceable battery. Over time, typically 3 to 5 years, usage and temperature degrade the UPS batteries, which then need to be replaced. APC recycles used batteries, and almost 100% of the battery lead content is reused, protecting the environment. With each purchase of a genuine APC replacement battery, you get free freight back to APC for proper disposal of your old batteries (currently available in the USA only). Yet another "green" aspect of the Smart-UPS.

http://www.apc.com/tools/upgrade_selector/show_option_descriptions.cfm?desc=rbc&country=US&lang=en

My one wish for the product is per-port power monitoring. There is power monitoring for the whole UPS, but not per socket. If there were per-socket monitoring, this unit could measure power consumption per device, which is useful in performance-lab conditions.
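
Until per-socket monitoring shows up, one lab workaround is to estimate a single device's draw by differencing whole-UPS readings with the device off and then on. A sketch of the idea, where read_ups_load_watts and toggle_device are hypothetical helpers you supply (in practice the aggregate number might come from parsing the output of a tool such as apcupsd's apcaccess):

import time

def measure_device_watts(read_ups_load_watts, toggle_device, settle_s=10):
    """Estimate one device's power draw by on/off differencing of the
    UPS's aggregate load reading. Both helpers are hypothetical."""
    toggle_device(on=False)
    time.sleep(settle_s)                  # let the reading settle
    baseline = read_ups_load_watts()
    toggle_device(on=True)
    time.sleep(settle_s)
    return read_ups_load_watts() - baseline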


openQRM Project – open source data center management project

I found the openQRM project in a PCWorld article.

Also in the management realm is the openQRM project, a revival of an open source data center management project begun by the now-defunct company Qlusters. OpenQRM is a single-management console for infrastructure both physical and virtual. It provides an API for integrating third-party tools, including Puppet, and incorporating plug-ins.

The newest iteration, version 4.4, includes Simple Object Access Protocol (SOAP)-based Web services for remote control and other infrastructure management tools for cloud deployments.
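
To make the SOAP point concrete, here is a hedged Python sketch of calling such a web service with the zeep SOAP client. The WSDL URL and the GetApplianceList operation name are hypothetical stand-ins, not openQRM's documented interface; check the openQRM 4.4 docs for the real service definitions.

# Hedged sketch: generic SOAP call with the zeep client library.
# The WSDL URL and operation name below are hypothetical.
from zeep import Client

client = Client("http://openqrm.example.com/openqrm/soap?wsdl")  # hypothetical URL
for appliance in client.service.GetApplianceList():              # hypothetical operation
    print(appliance)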

NetworkWorld has more details.

Instant Cloud Computing at UKUUG 2009

By MattR on Fri, 01/16/2009 - 10:53am.

In addition to next week's presentation about "Cloud Computing with openQRM" at LCA2009 (Virtualization Miniconf), there is another opportunity to keep up with openQRM's Cloud Computing features: the UKUUG Spring 2009 Conference in London (24-26 March 2009). This upcoming presentation covers why open source matters for Cloud Computing and provides details about openQRM's implementation of a fully automated, rapidly deploying Cloud environment that can be used for private and public Cloud services. It also explains openQRM's business model for Cloud Computing via its integrated billing system and ends with a live demonstration.

Please find the abstract about "Instant Cloud Computing with openQRM" as a teaser at :
http://www.ukuug.org/events/spring2009/programme/instant-cloud-computing.shtml

openQRM site is here.

About openQRM

Submitted by matt on Tue, 11/18/2008 - 00:47

openQRM is a next-generation, open-source data center management platform. Its fully pluggable architecture focuses on automatic, rapid, appliance-based deployment; monitoring; high availability; cloud computing; and especially on supporting, and conforming to, multiple virtualization technologies. openQRM is a single management console for the complete IT infrastructure and provides a well-defined API which can be used to integrate third-party tools as additional plugins.


iControl thinks consumers want power metering to be free of monthly fees

I blogged about iControl over a year ago, and I knew they were going to have a home energy solution but was waiting for the public disclosure. CNET News.com has a post. Here is a point which shows iControl has done the research many haven't.

"We don't see consumers willing to pay a recurring fee for energy management. They're willing to spend $50 for some energy management solution. What's going to change is when utilities go to time-of-use metering (where there are different prices at different times). Then, the economic incentive is much higher," Dawes said.

iControl is expecting that telecommunications and cable providers will start offering Internet-based home security services and then home energy management. But at this point, it's not clear how those companies will make money in energy management, Dawes said.

The power metering services are part of home security services.
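
Dawes's time-of-use point is easy to check with back-of-the-envelope arithmetic. Using made-up rates (not any utility's actual tariff), shifting a few kWh per day of flexible load off-peak saves enough per year to dwarf a $50 device:

# Illustrative time-of-use arithmetic with hypothetical rates.
flat_rate = 0.10             # $/kWh, flat tariff
peak, offpeak = 0.20, 0.06   # $/kWh, hypothetical time-of-use rates
kwh_per_day = 5.0            # flexible load that could be shifted off-peak

flat_cost = kwh_per_day * flat_rate * 365     # ~$183/yr, no incentive to shift
tou_unshifted = kwh_per_day * peak * 365      # ~$365/yr if run at peak
tou_shifted = kwh_per_day * offpeak * 365     # ~$110/yr if shifted off-peak
print(f"TOU saving from shifting: ~${tou_unshifted - tou_shifted:.0f}/yr")  # ~$256

Under a flat tariff there is nothing to optimize, but under time-of-use pricing the same load is worth roughly $256 a year, which is exactly why Dawes expects the economics to change.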

iControl adds home energy services to broadband

by Martin LaMonica

Would you be willing to pay for home security services if they could also help cut your electricity bills?

In a nutshell, that's what start-up iControl is pitching to consumers with its energy management software and home automation gear. The Palo Alto, Calif.-based company is also working with utilities to get its energy management system installed as part of smart-grid trials.

On Tuesday, it said that its home automation equipment can now use the Zigbee wireless protocol to communicate with two-way smart meters.

Will home energy management enter through home automation networks?

(Credit: iControl Networks)

It's part of the company's plan to enter the field of home energy efficiency, where there are dozens of companies already vying for business. The path it's taking is either through security service companies, utilities, or broadband suppliers, such as cable companies or phone companies, said CEO Paul Dawes.


Google’s Secret to efficient Data Center design – ability to predict performance

DataCenterKnowledge has a post on Google's (NASDAQ: GOOG) future vision of as many as 10 million servers.

Google Envisions 10 Million Servers

October 20th, 2009 : Rich Miller

Google never says how many servers are running in its data centers. But a recent presentation by a Google engineer shows that the company is preparing to manage as many as 10 million servers in the future.

Google’s Jeff Dean was one of the keynote speakers at an ACM workshop on large-scale computing systems, and discussed some of the technical details of the company’s mighty infrastructure, which is spread across dozens of data centers around the world.

In his presentation (link via James Hamilton), Dean also discussed a new storage and computation system called Spanner, which will seek to automate management of Google services across multiple data centers. That includes automated allocation of resources across “entire fleets of machines.”

Going through Jeff Dean's presentation, I found a Google secret.


Designs, Lessons and Advice from Building Large Distributed Systems

Designing Efficient Systems
Given a basic problem definition, how do you choose the "best" solution?
• Best could be simplest, highest performance, easiest to extend, etc.
Important skill: ability to estimate performance of a system design
– without actually having to build it!
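
This is the kind of estimate the slide is advocating. A minimal Python sketch using order-of-magnitude latency figures of the sort commonly cited in talks like this one (illustrative numbers, not measurements), comparing two designs for serving a page of 30 image thumbnails:

# Order-of-magnitude latency figures (illustrative; values vary by
# hardware generation).
MEMORY_READ_1MB_S = 250e-6   # ~250 us to read 1 MB sequentially from RAM
DISK_SEEK_S       = 10e-3    # ~10 ms per disk seek
DISK_READ_1MB_S   = 30e-3    # ~30 ms to read 1 MB sequentially from disk
DC_ROUND_TRIP_S   = 500e-6   # ~0.5 ms round trip within a data center

def estimate_page_time(images=30, image_kb=256):
    """Back-of-the-envelope: serve a page of image thumbnails two ways."""
    mb = images * image_kb / 1024
    disk_design = images * DISK_SEEK_S + mb * DISK_READ_1MB_S         # seek + read each image
    cache_design = images * DC_ROUND_TRIP_S + mb * MEMORY_READ_1MB_S  # RPC to in-RAM cache
    return disk_design, cache_design

a, b = estimate_page_time()
print(f"disk design ~{a*1e3:.0f} ms, cache design ~{b*1e3:.0f} ms")  # ~525 ms vs ~17 ms

Neither design had to be built to see the 30x difference, which is exactly the skill the slide calls out.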

What is Google’s assumption of where computing is going?


Thinking like an information factory, Google describes the machinery as servers, racks, and clusters. This approach supports the idea of information production. Google introduces the idea of a data center being like a computer, but I find a more accurate analogy is to think of data centers as information factories: the IT equipment is the machinery in the factory, consuming large amounts of electricity to power and cool the IT load.

Located in a data center like the one in The Dalles, OR.

With all that equipment, things must break. And yes, they do.

Reliability & Availability
• Things will crash. Deal with it!
– Assume you could start with super reliable servers (MTBF of 30 years)
– Build computing system with 10 thousand of those
– Watch one fail per day
• Fault-tolerant software is inevitable
• Typical yearly flakiness metrics
– 1-5% of your disk drives will die
– Servers will crash at least twice (2-4% failure rate)
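
The arithmetic behind that slide is worth spelling out, since it drives everything that follows:

# Even super-reliable servers fail constantly at scale:
# a 30-year MTBF spread across 10,000 machines.
mtbf_years = 30
servers = 10_000
failures_per_day = servers / (mtbf_years * 365)
print(f"~{failures_per_day:.1f} server failures per day")  # ~0.9, about one a day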

The Joys of Real Hardware
Typical first year for a new cluster:
~0.5 overheating (power down most machines in <5 mins, ~1-2 days to recover)
~1 PDU failure (~500-1000 machines suddenly disappear, ~6 hours to come back)
~1 rack-move (plenty of warning, ~500-1000 machines powered down, ~6 hours)
~1 network rewiring (rolling ~5% of machines down over 2-day span)
~20 rack failures (40-80 machines instantly disappear, 1-6 hours to get back)
~5 racks go wonky (40-80 machines see 50% packetloss)
~8 network maintenances (4 might cause ~30-minute random connectivity losses)
~12 router reloads (takes out DNS and external vips for a couple minutes)
~3 router failures (have to immediately pull traffic for an hour)
~dozens of minor 30-second blips for dns
~1000 individual machine failures
~thousands of hard drive failures
slow disks, bad memory, misconfigured machines, flaky machines, etc.
Long distance links: wild dogs, sharks, dead horses, drunken hunters, etc.


Monitoring is how you know your estimates are correct.

Add Sufficient Monitoring/Status/Debugging Hooks
All our servers:
• Export HTML-based status pages for easy diagnosis
• Export a collection of key-value pairs via a standard interface
– monitoring systems periodically collect this from running servers
• RPC subsystem collects sample of all requests, all error requests, all
requests >0.0s, >0.05s, >0.1s, >0.5s, >1s, etc.
• Support low-overhead online profiling
– cpu profiling
– memory profiling
– lock contention profiling
If your system is slow or misbehaving, can you figure out why?
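
As a sketch of that pattern, here is a minimal Python HTTP server exposing an HTML status page for humans plus a plain key-value export for monitoring collectors. The /statusz and /varz paths and the metric names are illustrative of the technique, not Google's actual implementation:

# Minimal sketch of per-server status/metrics pages: an HTML page for
# easy diagnosis and a key-value export for periodic collection.
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = {"requests_total": 0, "errors_total": 0}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/statusz":   # human-readable diagnosis page
            body = "<html><body>" + "".join(
                f"<p>{k} = {v}</p>" for k, v in METRICS.items()) + "</body></html>"
            ctype = "text/html"
        elif self.path == "/varz":    # key-value pairs for collectors
            body = "\n".join(f"{k} {v}" for k, v in METRICS.items())
            ctype = "text/plain"
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), StatusHandler).serve_forever()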

Many people have quoted the idea "you can't manage what you don't measure." But a more advanced concept that Google discusses is "If you don't know what's going on, you can't do decent back-of-the-envelope calculations!"

Know Your Basic Building Blocks
Core language libraries, basic data structures,
protocol buffers, GFS, BigTable,
indexing systems, MySQL, MapReduce, …
Not just their interfaces, but understand their
implementations (at least at a high level)
If you don’t know what’s going on, you can’t do
decent back-of-the-envelope calculations!

These ideas are being discussed by a software architect, but they apply just as much to data center design. And the benefit Google has is that all of its IT and development staff think this way.


And here is another secret to great design: say no to features. But what the data center design industry wants is to get you to say yes to everything, because it makes the data center building more expensive, increasing profits.


So what is the big design problem Google is working on? Spanner, the storage and computation system mentioned above, which aims to automate management of Google services across entire fleets of machines in multiple data centers.

Jeff Dean did a great job of putting a lot of good ideas in his presentation, and it was nice Google let him present some secrets we could all learn from.
