Insights into Google’s PUE: A Laptop-Style Approach to Server Power Supplies Achieves a 99.9% Efficient UPS

Chris Malone and Ben Jai presented “Insights into Google’s PUE”

The areas discussed were:

  • Innovations and best practices
  • Measurement methods and accuracy
  • Benefits of measuring PUE
  • A PUE update

A Google server was also on display.

Some of the details discussed:

Typical data center PUE is 2.0 vs. Google’s 1.16.

Cooling: an 85% reduction (0.7 vs. 0.15), achieved through close-coupled cooling, raised temperatures, and economizers.

Power distribution: an 80% reduction (0.22 vs. 0.039), achieved through a 99.9% efficient UPS.
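The PUE math behind these numbers is simple: total facility power divided by IT equipment power. A quick sketch, treating the cooling and distribution figures above as per-kilowatt overheads on the IT load (that framing is my assumption, not from the talk):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative breakdown: per 1 kW of IT load, a typical facility adds
# ~0.7 kW of cooling and ~0.22 kW of distribution loss; Google's figures
# are ~0.15 kW and ~0.039 kW.
typical = pue(1.0 + 0.7 + 0.22, 1.0)    # close to the cited 2.0
google = pue(1.0 + 0.15 + 0.039, 1.0)   # close to the cited 1.16
print(round(typical, 2), round(google, 2))
```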

This last point, the 99.9% efficient UPS, is where Ben Jai explained Google’s approach. Ben had electronics design responsibility at Google from 2003 to 2007.

Ben pointed out that he did not set out to find a UPS solution; he started by looking at the server and its issues. Over-provisioning in the power supply is a problem. The Google server routes power from the PSU through the motherboard to the disks, letting the motherboard determine the disk drives’ power requirements.

The next step was to add a battery to the motherboard to provide the backup power the server requires.

Below is a picture of the motherboard with the power supply and battery changes. The server is now close to a laptop design, but with only enough battery capacity to bridge to backup power, rather than the hours we expect from laptops.

[Photo: Google server motherboard with on-board battery]
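Back-of-the-envelope, the sizing question for an on-board battery is just ride-through time. A sketch with illustrative numbers of my own choosing (not Google’s specs):

```python
def ride_through_seconds(battery_wh, server_load_w):
    """Seconds a board-mounted battery can carry a server at a given load."""
    return battery_wh / server_load_w * 3600

# Hypothetical figures: a small ~13 Wh cell backing a server drawing ~250 W
# only needs to bridge the gap until generators pick up the load.
print(round(ride_through_seconds(13, 250)), "seconds")
```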


Intel 5500 Super Chip – As Fast as an F-16 with Fuel Efficiency of a Glider

With Intel touting how great the Xeon 5500 chip is, I found an entertaining San Jose Mercury News post.

Intel unveils groundbreaking new server chip

By Matt Nauman

Mercury News

Posted: 03/30/2009 05:54:03 PM PDT

 


The arrival of the speedy Intel Pentium Pro server chip in 1995 set the stage for the explosion in Internet usage. Intel said Monday that its new Xeon 5500 server chip will help build data centers ready for the next generation of the Internet.

Intel's Patrick Gelsinger described the Xeon 5500 as "the greatest leap in performance in the history of data processing."

How big is the leap?

Gelsinger, senior vice president and general manager of Intel's digital enterprise group, said the Xeon 5500 is faster, more energy efficient and more flexible than its predecessor. He predicted it will propel advances in cloud computing and virtualization that will be the key to data-center growth and efficiency.

Gelsinger described the new chips as "an engineering marvel," comparing them to an aircraft that's as fast as an F-16 with the cargo capacity of a jumbo jet and the fuel efficiency of a glider. The new chips are about twice as fast as the previous Xeon 5400 chips.

As fast as an F-16, with the fuel efficiency of a glider. I think Gelsinger may have been a little too excited. Going as fast as an F-16 while riding the wind? I assume physics works differently in Gelsinger’s world.

Another interesting announcement is Intel’s Data Center Efficiency challenge on Facebook.

Intel Announces its Data Center Efficiency Challenge!
If you work in IT and have fresh perspectives to make your organization more efficient, you’ve come to the right spot!
Submit a short video “proposal” describing plans to use Intel server technology (Intel Xeon 5500 series or other current Intel-based server platforms) to save both energy and money in your data center.
One winner in each of two categories – enterprise and small business – will receive a new Intel Xeon processor 5500 series server, an energy efficient netbook or mobile PC, will be flown to San Francisco for the Intel Developer Forum (IDF) and will meet with senior data center experts for consultation on their proposal.

I would love to see the video submissions where an Intel Xeon 5500 is as fast as an F-16 and rides the wind.


Memory Virtualization (Getting Servers to Breathe Better) vs. the Uber Virtualized Data Center

There is a lot of news and products on Virtualizing the Data Center. 

Cisco announced their Unified Computing System.

Cisco Unleashes the Power of Virtualization with Industry's First Unified Computing System

Innovative Architecture Integrates Compute, Networking and Virtualization in a Single Platform; New Services and Partnerships Focused on Next Generation Data Centers

SAN JOSE Calif. - March 16, 2009 - Cisco today unveiled an evolutionary new data center architecture, innovative services and an open ecosystem of best in class partners to help customers develop next-generation data centers that unleash the full power of virtualization. With today's announcement, Cisco is delivering on the promise of virtualization through Unified Computing - an architecture that bridges the silos in the data center into one unified architecture using industry standard technologies. Key to Cisco's approach is the Cisco Unified Computing System which unites compute, network, storage access, and virtualization resources in a single energy efficient system that can reduce IT infrastructure costs and complexity, help extend capital assets and improve business agility well into the future.

VMware has their Virtual Data Center OS.

The Virtual Datacenter Operating System Defined

VMware's flagship product, VMware Infrastructure, coupled with VMware's comprehensive roadmap of groundbreaking new products provide a virtual datacenter OS for IT environments of all sizes. The virtual datacenter OS addresses customers’ needs for flexibility, speed, resiliency and efficiency by transforming the datacenter into an “internal cloud” – an elastic, shared, self- managing and self-healing utility that can federate with external clouds of computing capacity freeing IT from the constraints of static hardware-mapped applications. The virtual datacenter OS guarantees appropriate levels of availability, security and scalability to all applications independent of hardware and location.  

 

But what I find interesting is that neither of these solutions focuses on memory.

Why memory? The focus is on compute, storage, and network.

I think memory is like air for an automotive engine (the CPU): you need a good intake of air (memory input) and an exhaust with minimal backpressure.

So why not virtualize memory across multiple servers?

While in Portland I was able to visit with RNA networks and discuss their latest announcement.

RNA networks Brings Memory Virtualization Into the Enterprise Data Center

RNA Memory Virtualization Transforms Memory into a Shared, Networked Resource

Portland, Ore. – February 2, 2009 – RNA networks, a leader in memory virtualization software that transforms server memory into a shared network resource, today announced the launch of its Memory Virtualization Platform (MVP) and first product, RNAmessenger, based on the MVP.  Memory Virtualization unleashes high-performance computing from existing commodity hardware by decoupling memory from the processor and server.  Uniquely, the RNA Memory Virtualization Platform is transparent to existing applications and operating systems allowing enterprises to leverage their existing IT assets with no changes.
“Reliance on fragmented local server memory has been a key roadblock to optimizing performance in data center clusters, but memory virtualization eliminates size limits and slashes access times by providing distributed shared memory for all CPUs in a cluster,” said Eyal Waldman, Chairman, President and CEO, Mellanox Technologies. “By combining RNA Networks' Memory Virtualization with Mellanox Technologies' unrivaled connectivity performance, data center architects can achieve new levels of performance with high efficiency and lower costs.”

The concept is simple.

RNA’s innovative Memory Virtualization Platform works by pooling or aggregating available memory across nodes, and making the memory pool available as a shared network resource available to all servers in the data center.  Servers can access the pool, contribute to it or both.
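The pooling idea can be sketched in a few lines. This is a toy model of the concept only; the class and method names are my invention, not RNA networks’ MVP API:

```python
# Toy sketch of memory pooled across nodes: each node contributes spare
# RAM to a shared pool that any server in the cluster can read or write.
class MemoryPool:
    def __init__(self):
        self.contributions = {}   # node name -> bytes contributed
        self.store = {}           # key -> (owner node, value)

    def contribute(self, node, nbytes):
        """A node adds spare memory to the shared pool."""
        self.contributions[node] = self.contributions.get(node, 0) + nbytes

    def total(self):
        """Aggregate memory visible to every server in the pool."""
        return sum(self.contributions.values())

    def put(self, key, value, owner_node):
        # A real platform would place the data on a remote node with
        # free capacity; here we just record the owner for illustration.
        self.store[key] = (owner_node, value)

    def get(self, key):
        _node, value = self.store[key]
        return value

pool = MemoryPool()
pool.contribute("node-a", 16 * 2**30)   # 16 GiB spare on node A
pool.contribute("node-b", 32 * 2**30)   # 32 GiB spare on node B
pool.put("session:42", b"cached result", "node-a")
print(pool.total() // 2**30, "GiB pooled;", pool.get("session:42"))
```

The point of the sketch is the contribute-or-consume symmetry the announcement describes: servers can access the pool, contribute to it, or both.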


Where are the money savings? This is another problem I see with Cisco’s and VMware’s uber virtual data center solutions: neither spells out the savings.

I asked RNA networks CEO Clive Cook how much could be saved with memory virtualization. He said that in grid-computing scenarios, where there is a high-throughput requirement across multiple machines, they see the numbers below.

Bottom Line Economic Advantages

Performance Improvement: 10–30X

Cost Savings:

  • Fewer Load Balancers: $44,500
  • Less Aggregate Memory: $64,500
  • Storage Savings: $163,000
  • Power Savings: $39,500

Additional Benefits:

  • Efficiency
  • Simplicity
  • Reliability
  • Resource Sharing
  • Low TCO
  • Consolidation

HP’s Green IT Website

I had a chance to talk to HP’s Jim Ganthier about HP’s adaptive infrastructure solutions for green data centers.

Accelerating adoption of next-generation data center technologies and services

Adaptive Infrastructure

Difficult times create opportunities for agile businesses to seize a stronger competitive advantage. Is your IT infrastructure poised to deliver?

More than ever, IT is expected to help the business take advantage of opportunities that arise in this new economic era by reacting quickly to deliver new services that help drive growth.

With agility and cost management top of mind with CIOs, successful companies are turning to next-generation technologies and services that are easier to buy, quicker to deploy and easier to manage so they can extract maximum return from their IT investments. This is where the HP Adaptive Infrastructure can help.

The HP Adaptive Infrastructure provides an advanced portfolio of products, services and solutions – with clear steps and methodologies – designed to move you toward a next-generation data center. The result is increased agility, lower costs and mitigated risk that, when achieved together, help accelerate business growth.

It’s been a while since I’ve checked out HP’s website, so I went and checked out their latest.

HP’s Green Business Technology Initiative

HP helps you address your green IT needs through energy-efficient solutions for the data center and beyond – where better business outcomes equal better environmental outcomes.


Business success and environmental responsibility go hand in hand. We hear every day from customers who want to save money through more energy-efficient solutions for their data center and broader workplace, or find easier ways to recycle their technology to become more “green.” At HP, we help our customers meet their most demanding technology challenges while at the same time helping them to reduce their environmental impact.

What I found was a comprehensive list of services.

Let us help you:

  • Extend the capacity and life of your data center and/or design next-generation data centers
  • Increase data center energy efficiency and business continuity
  • Achieve sustainable employee growth in business services and end-user computing
  • Strengthen IT’s contribution to your organization’s green business goals and overall brand
  • Reduce material resource consumption
  • Support your compliance needs with current and anticipated e-waste disposal and carbon regulations
  • Better measure, manage, and report your carbon footprint to meet corporate commitments

Data center

  • Energy Efficiency Services
  • Dynamic Smart Cooling
  • HP BladeSystem Thermal Logic
  • HP Integrity Thermal Logic
  • Energy-wise Storage
  • Modular Cooling System
  • Consolidation
  • Virtualization
  • HP Insight Power Manager
  • Energy and Space Efficiency solutions
  • Power Management and Protection
  • Factory Express
  • Utility rebates
  • Data Center Services
  • Critical Facilities Consulting
  • Critical Facilities Design
  • Critical Facilities Assurance

Workplace

  • Halo telepresence and videoconferencing solutions
  • Business PCs and workstations: energy-efficient computing
  • Thin clients
  • End-User workplace services
  • HP recycling programs for IT companies
  • Managed print services
  • HP mobility and wireless solutions

HP practices

  • Sustainable computing with HP
  • Reducing your carbon footprint
  • HP recycling programs for IT companies
  • Environmentally responsible standards for HP suppliers
  • HP’s 25-year commitment to the environment

HP’s Eco site is well done as well.


Virtualization + Power Management, Intel’s Blog

Intel has a blog entry on Power Management in a Virtualization scenario.  The blog isn’t very long, so I have the whole text below.

The Curious Case of Virtualized Power

Posted by Enrique Castro-Leon on Mar 8, 2009 5:44:22 PM

Given the recent intense focus in the industry around data center power management and the furious pace of the adoption of virtualization, it is remarkable that the subject of power management in virtualized environments has received relatively little attention.

It is fair to say that power management technology has not caught up with virtualization.

Here are a few thoughts on this particular subject, which I intend to elaborate in subsequent transmittals.

For historical reasons the power management technology available today had its inception in the physical world where watts consumed in a server can be traced to the watts that came through the power utility feeds.  Unfortunately, the semantics of power in virtual  machines have yet to be comprehensively defined to industry consensus.

For instance, assume that the operating system running  in a virtual image decides to transition the system to the ACPI S3 state, sleep to memory.  What we have now is the state of the virtual image preserved in the image's memory with the virtual CPU turned off.

Assuming that the system is not paravirtualized, the operating system can't tell if it's running in a physical or virtual instance. The effect of transitioning to S3 will be purely local to the virtual machine.  If the intent of the system operator was to transition the machine to S3 to save power, it does not work this way.   The virtual machine still draws resources from the host machine and requires hypervisor attention. Transitioning the host itself to S3 may not be practical as there might be other virtual machines still running, not ready to go to sleep.

Consolidation is another technology for reducing data center power consumption by driving up server utilization rates.  Consolidation for power management is a blunt tool, where applications that used to run in a physical server are now virtualized and squished into a single physical host.  The applications are sometimes strange bedfellows.  Profiling might have been done to make sure they could coexist, as an a priori, static exercise with the virtual machine instances treated as black boxes. There is no attempt to look at the workload profiles inside each virtualized instance in real time.  Power savings come from an almost wishful side effect of repackaging applications formerly running in a dedicated server into virtualized instances.

A capability to map power to virtual machines in both directions, from physical to virtual and virtual to physical, would be useful from an operational perspective.  The challenge is twofold: first from a monitoring perspective, because there is no commonly agreed method yet to prorate host power consumption to the virtual instances running within, and second from a control perspective.  It would be useful to schedule or assign power consumption to virtual machines, allowing end users to make a tradeoff between power and performance.  Fine-grained power monitoring would allow prorating power costs to application instances, introducing useful pricing checks and balances that make energy consumption visible, instead of the more common method today of hiding energy costs in the facility costs.
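To make the prorating challenge concrete, here is one possible scheme sketched under my own assumptions (as Castro-Leon notes, no agreed industry method exists): split the host’s idle power evenly across resident VMs and attribute the dynamic portion by each VM’s share of CPU time.

```python
def prorate_power(host_watts, idle_watts, vm_cpu_seconds):
    """One *possible* prorating scheme, not a standard: idle power is
    split evenly among resident VMs; the dynamic portion (host minus
    idle) is attributed by each VM's share of total CPU time."""
    total_cpu = sum(vm_cpu_seconds.values())
    dynamic = host_watts - idle_watts
    n = len(vm_cpu_seconds)
    return {vm: idle_watts / n + dynamic * cpu / total_cpu
            for vm, cpu in vm_cpu_seconds.items()}

# Hypothetical interval: the host averaged 200 W (120 W idle) while
# vm-a used 300 CPU-seconds and vm-b used 100.
shares = prorate_power(host_watts=200.0, idle_watts=120.0,
                       vm_cpu_seconds={"vm-a": 300.0, "vm-b": 100.0})
print(shares)
```

Even this simple scheme exposes the open questions: how to split idle power fairly, and whether CPU time is the right proxy when memory and I/O also draw power.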
