Intel to ship OS Independent Power Management

Power management is strategic for Intel. News.com reports Intel is dedicating 1 million transistors to on-chip power management functions.

That news was revealed to this reporter by an Intel employee as senior vice president Pat Gelsinger was delivering his IDF keynote, which included more specifics about Nehalem, the family of chips the company plans to begin rolling out in the fourth quarter. Gelsinger, the general manager of Intel's Digital Enterprise Group, showed the first wafer holding individual eight-core processors, detailed the power-saving features of the Nehalem processors, and confirmed future mobile Nehalem processors.

Intel Nehalem processor lineup as shown at IDF 2008

Most of his keynote centered on Nehalem, and one of the features Intel was pushing hard at IDF was a technology called Turbo mode.

Turbo mode is essentially a switch that turns off unused processor cores and then uses the remaining active cores more efficiently. This kind of sophisticated power-management technology will be used in both Nehalem-based laptops and servers, according to Gelsinger, and will become increasingly necessary as Intel brings out chips with more cores like the eight-core Nehalem processor due next year.

In short, in multi-core processors, cores not doing much can still use power. So, it's better to use, for example, a couple of cores more efficiently than four cores inefficiently.

Turbo mode is enabled by a "power control unit which is an integrated microcontroller which only works on power management," said Rajesh Kumar, an Intel Fellow, who spoke during Gelsinger's keynote. There are about 1 million transistors dedicated solely to power management, Kumar said.
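As a rough illustration of the idea, here is a toy model in Python. Everything in it — the function name, the base clock, the 95 W package budget, and the cube-root scaling — is an assumption for illustration only, not Intel's actual Turbo mode algorithm: the point is just that power-gating idle cores frees budget the remaining cores can spend on clock speed.

```python
# Illustrative model only: the numbers and the scaling rule are
# assumptions, not Intel's actual turbo-mode algorithm.

def turbo_clock(active_cores, total_cores, base_ghz=2.66, budget_w=95.0):
    """Estimate the boosted clock when idle cores are power-gated and
    their share of the package power budget goes to the active cores."""
    per_core = budget_w / total_cores
    idle = total_cores - active_cores
    boosted = per_core + idle * per_core / active_cores
    # Dynamic power grows roughly with f * V^2 (~f^3 when voltage tracks
    # frequency), so extra budget buys about the cube root in clock.
    return base_ghz * (boosted / per_core) ** (1 / 3)

# All four cores busy: no headroom, base clock.
print(round(turbo_clock(4, 4), 2))   # 2.66
# Two cores gated: the survivors get double the budget, ~26% more clock.
print(round(turbo_clock(2, 4), 2))   # 3.35
```

The cube-root relationship is why gating cores pays off: halving the active core count only buys about a quarter more clock, but the work those cores finish per watt goes up.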

Moving power management into an on-chip microcontroller makes a lot of sense: it eliminates the dependency on the OS or hypervisor. If Intel continues down this path, it could build power management into its motherboards entirely independent of the OS.

Energy savings is a key differentiator for Intel. The company understands it needs to support green, sustainable computing, and this is also a way to drive upgrades.

Read more

Intel Atom vs. 8-year-old AMD Athlon, AMD wins on Power and Performance

I’ve had a chance to research the Little Green Server ideas more, and I think VIA and AMD are potentially better platforms for low-power servers. Tom’s Hardware has this post on Intel Atom vs. AMD Athlon.

With the development of the Atom processor, Intel introduced a totally new chip design that consumes very little energy. AMD had to strike back, and did so by clocking down its Athlon 64, employing the K8 micro architecture, down to the lowest possible frequency of 1 GHz. The Athlon 64 2000+ runs with a core voltage of 0.90 volts and uses just 8 watts. As a result, the CPU easily operates without a fan. If you drop the 8 W Athlon 64 into a motherboard based on the 780G chipset, then the system hits power consumption numbers that, in our measurements, are below Intel’s Atom desktop solution. We were even able to lower the core voltage by 11%, without stability problems, and the power analyzer read lower numbers. Interestingly, AMD’s Athlon 64 2000+ processor, unlike Intel’s Atom CPU, is not embedded in the motherboard. It can be run on any board with an AM2 or AM2+ socket.

Compared to Intel’s Atom, which runs at 1.6 GHz, the Athlon 64 2000+ is clocked at 1 GHz—37.5% lower. Despite this, the Athlon 64 outperforms the Atom in several benchmark tests as a result of its more efficient K8 architecture. In addition, the energy consumption of the entire system is lower, and that’s what really matters most.

In the conclusion:

In our Munich lab’s duel of the energy-savers, the AMD Athlon 64 2000+ beats the Intel Atom 230 in energy consumption and processing power. Each of the systems was based on a desktop platform. The Achilles heel of the Intel system is its old system platform with the 945GC chipset, while AMD offers a more modern 780G platform.

The energy-saving solution from AMD offers more possibilities: it has three times as many SATA ports, possesses better onboard graphics performance, and can also support two monitors. Unlike the Intel solution, an HD resolution (1920x1200) with high picture quality is possible through DVI/HDMI ports. And early information suggests that the AMD Athlon 64 2000+ should cost close to $90.
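The comparison ultimately boils down to performance per watt. A back-of-the-envelope sketch — the benchmark scores and system wattages below are made-up placeholders; only the clock speeds and the qualitative outcome (the 1 GHz Athlon winning on both counts) come from the article:

```python
# Hypothetical scores and wattages for illustration; only the qualitative
# result (the slower-clocked Athlon winning) mirrors Tom's Hardware.
systems = {
    "Intel Atom 230, 1.6 GHz":    {"score": 100, "system_watts": 50},
    "AMD Athlon 64 2000+, 1 GHz": {"score": 110, "system_watts": 45},
}
for name, s in systems.items():
    ppw = s["score"] / s["system_watts"]
    print(f"{name}: {ppw:.2f} points/W")
```

For a server that runs 24x7, the ratio of work done to watts drawn matters far more than raw clock speed, which is why a 1 GHz part can be the better low-power server choice.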

Read more

Little Green Server Ideas Starting

ExtremeTech has an article on building an Intel Atom-based PC.

Intel's Atom has generated a lot of attention. Some of that attention has been positive: Intel building an x86 CPU whose primary design goal is very low power usage while maintaining good performance. On the other hand, Atom has been criticized for giving up some key performance features, such as speculative, out-of-order execution.

Still, Atom has garnered some interesting design wins, including appearing in some of the tiny laptops first pioneered by the ASUS EeePC. Atom's most recent appearance has been in MSI's latest sub-laptop, the Wind.

What about Atom on the desktop?

First, you might ask why you'd want Atom in a desktop configuration. Its very nature suggests that performance would be limited. At 1.6GHz, and lacking out-of-order execution, performance might be pretty low. Other aspects of Atom, such as simultaneous multithreading (Hyper Threading) and a fast SSE2 floating point unit, might mitigate some of that.

In their conclusion, they get to some of the ideas for a Little Green Server.

In the end, a PC built around an Intel D945GCLF makes for a fine, light duty Web-oriented PC. It could even serve as a light duty home server, with the right storage gear. It is not, however, well-suited for even casual gaming, and we'd hesitate to run a full-bore office suite on it. Still, at 46W running flat out, it won't break your power budget.

Intel is really just dipping its toes into this market. We'd love to see more home-server oriented boards, with onboard RAID and no parallel ATA ports, for example. The system is also pretty noisy; the tiny CPU cooler isn't exactly quiet. Still, the D945GCLF will make an interesting test bed for anyone wanting to explore Atom's capabilities.
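To put that 46 W figure in perspective, here is what running such a box flat out, 24x7, costs over a year. The $0.10/kWh rate is an assumed average, not a number from the article:

```python
# The 46 W draw is ExtremeTech's measured figure; the electricity
# rate is an assumed average for illustration.
watts, dollars_per_kwh = 46, 0.10
kwh_per_year = watts * 24 * 365 / 1000      # hours per year / watts-to-kW
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * dollars_per_kwh:.2f}/yr")
```

At roughly 400 kWh a year, the Atom board really does stay out of the way of a home power budget, which is the whole appeal of a Little Green Server.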

Another interesting review is AnandTech's look at the Asus Eee Box.

Tom's Hardware has a review of the Intel Atom chip.

Intel is rumored to be releasing dual-core Atoms in Q4 '08.

Read more

$2,600,000 UC San Diego Energy Efficient Computing project, Heavy Instrumentation & Monitoring to Calculate Performance/Watt

UC San Diego has an article about their new Energy Efficient Computing project, GreenLight.

UC San Diego’s GreenLight Project to Improve Energy Efficiency of Computing

July 28, 2008

By Doug Ramsey

The information technology industry consumes as much energy and has roughly the same carbon “footprint” as the airline industry. Now scientists and engineers at the University of California, San Diego are building an instrument to test the energy efficiency of computing systems under real-world conditions – with the ultimate goal of getting computer designers and users in the scientific community to re-think the way they do their jobs.

Photo of Sun Datacenter

This Sun Modular Datacenter deployed on the UC San Diego campus will be instrumented for the GreenLight project to offer full-scale processing and storage in order to test how to make computing more energy-efficient.

The National Science Foundation will provide $2 million over three years from its Major Research Instrumentation program for UC San Diego’s GreenLight project. An additional $600,000 in matching funds will come from the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2) and the university’s Administrative Computing and Telecommunications (ACT) group.

The GreenLight project gets its name from its plan to connect scientists and their labs to more energy-efficient ‘green’ computer processing and storage systems using photonics – light over optical fiber.

The goal of GreenLight is to understand computational performance per watt.

The GreenLight Instrument will enable an experienced team of computer-science researchers to make deep and quantitative explorations in advanced computer technologies, including graphics processors, solid-state disks, photonic networking, and field-programmable gate arrays (FPGAs). Jacobs School of Engineering computer science professor Rajesh Gupta and his team will explore alternative computing fabrics from array processors to custom FPGAs and their respective models of computation to devise architectural strategies for efficient computing systems.

“Computing today is characterized by a very large variation in the amount of effective work delivered per watt, depending upon the choice of the architecture and organization of functional blocks,” said Gupta. “The project seeks to discover fundamental limits of computing efficiency and devise organizing principles that will enable future system builders to architect machines that are orders-of-magnitude more efficient than modern-day machines, from embedded systems to high-performance supercomputers.”

The computing and systems research will yield new quantitative data to support engineering judgments on comparative “computational work per watt” across full-scale applications running on full-scale computing platforms.
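A sketch of what such a "work per watt" measurement could look like in code: sample a platform's power draw while a job runs, integrate the samples into energy, and divide the work done by the joules consumed. The sampling interval, power trace, and work figure below are illustrative assumptions, not GreenLight's actual instrumentation:

```python
# Illustrative metric only; the trace and work count are made up.

def work_per_joule(power_samples_w, interval_s, work_units):
    """power_samples_w: draw in watts, one reading every interval_s seconds.
    Energy is approximated as the sum of the samples times the interval."""
    energy_j = sum(power_samples_w) * interval_s
    return work_units / energy_j

trace = [220, 245, 250, 240, 230]   # watts, sampled once per second
ops = 1.0e9                         # operations completed during the run
print(f"{work_per_joule(trace, 1.0, ops):,.0f} ops/joule")
```

The interesting part of GreenLight is running the same workload across different fabrics (CPUs, GPUs, FPGAs) and comparing exactly this number.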

This is a big win for Sun Microsystems and their containers.

“Using the Sun Modular Datacenter as a core technology and making all measurements  available as open data will form a unique, Internet-accessible resource that will have a dramatic impact on academic, government and private-sector computing,” said Emil J. Sarpa, Director of External Research at Sun Microsystems, Inc. “By placing experimental hardware configurations alongside traditional rack-mounted servers and then running a variety of computational loads on this infrastructure, GreenLight will enable a new level of insight and inference about real power consumption and energy savings.”

According to DeFanti, the project decided to build the GreenLight Instrument around the Sun Modular Datacenter because, “it’s the fastest way to construct a controlled experimental facility for energy research purposes.” The modular structure also means the GreenLight Instrument can be cloned – unlike bricks-and-mortar computer rooms that cannot be ordered through purchasing.

Photo of Sun Modular Datacenter

Interior of the Sun Modular Datacenter prior to deployment of up to 280 servers and other equipment that will turn the shipping container into the GreenLight Instrument.

And to make things a bit sexy, they plan on using a virtual environment to visualize the inside of the containers.

Rather than give scientists physical access to the GreenLight Instrument, OptIPortal tiled display systems will serve as visual termination points – allowing researchers to “see” inside the instrument. Users will also be able to query and visualize all sensor data in real time and correlate it interactively and collaboratively in this immersive, multi-user environment.

Once a virtual environment of the system has been created, scientists will be able to walk into a 360-degree virtual reality version in Calit2’s StarCAVE. Users will be able to zoom into the racks of clusters as well as see and hear the power and heat, from whole clusters of computers down to the smallest instrumented components, such as computer processing and graphics processing chips.

Read more

Sun's CIO writes Forbes Commentary, provides Green Data Center ideas

Forbes has a commentary by Sun's CIO, Bob Worrall.

Commentary
A Green Budget Line
Bob Worrall 07.28.08, 6:00 AM ET


While the headlines are focused on nearly $5 gas and its impact on consumers, skyrocketing energy prices also are biting into information technology organizations as the costs of powering and cooling data centers reach all-time highs. In many companies, the cost to run data centers is now the second-largest expense after people. And the cost of powering data centers worldwide could grow from $18.5 billion in 2005 to $250 billion by 2012. There is no better time than right now to focus investment on more energy-efficient data centers.

Making the data center greener has been a hard sell. Most decision makers find the goal of reducing carbon emissions laudable but are held back by concerns that it will cut into profits or performance. When energy costs were relatively low, arguing that a current investment would reduce future costs was an uphill battle. But today's environment might provide the harsh dollars-and-cents context that's needed to move this issue from a decision about being green to one about being fiscally responsible.

Bob offers good tips in the rest of the article.

Check the vintage of the systems in your server racks, and replace energy and space hogs. Unlike fine wine, computer hardware rarely ages gracefully. Rooting out old systems is often the easiest way to make a data center more efficient. Old hardware almost always consumes more space and power than new systems. Older systems are also usually more difficult to cool efficiently. Switching to more modern systems often allows for impressive consolidation ratios ranging from 2:1 to 10:1. You gain space and save energy even as you increase your computing power.
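The arithmetic behind those consolidation ratios is worth seeing. A quick sketch, with a 5:1 ratio from inside the article's 2:1-10:1 range; the 400 W and 350 W per-server draws are assumed figures, not from the article:

```python
# Assumed per-server wattages; only the 2:1-10:1 ratio range comes
# from the article.
old_servers, ratio = 40, 5        # replace every 5 old boxes with 1 new one
old_w, new_w = 400, 350           # assumed per-server power draw
before_w = old_servers * old_w
after_w = (old_servers // ratio) * new_w
saved = 1 - after_w / before_w
print(f"{before_w} W -> {after_w} W, {saved:.0%} saved")
```

Even with the newer boxes drawing almost as much each as the old ones, collapsing five machines into one cuts the total draw by more than three quarters, before counting the cooling load that disappears with it.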

Tie IT decision makers to their facilities counterparts. Regardless of how much electricity is being consumed by the data center, chief information officers aren't usually the ones writing the utility checks. This disconnect often fuels an unnecessary debate about the importance of compute power over cost savings. IT and facilities organizations need to collaborate to make sure both understand how energy-efficient computing will help address both. If you take the improved energy efficiency, the reduced space utilization and the rebates on servers offered by many municipal power providers, the cost to go green begins to approach negligible.

Look at virtualization and container technologies. Using the right container technologies as part of your virtualization approach isolates applications and services by using flexible, software-defined boundaries. Taking this direction can allow you to achieve even greater compression in both servers and storage. For example, one customer I worked with implemented a new storage system with built-in virtualization and eliminated 40 terabytes of physical capacity, reducing storage power and cooling costs by 60%.

Turn on the meter at the rack level. Legacy measurement of watts per square foot in a data center may show the room is running fine on average, but in reality you have hot spots throughout that are damaging equipment. Most data centers measure load at the perimeter of the data center, which predictably makes things unpredictable. It is important to take metering one step further. It needs to be measured at the rack level (watts per rack) to truly enable energy efficiency. This enables you to reduce power consumption by pinpointing attention on certain areas in a data center instead of using the traditional, scattershot approach of cranking up the fans when a particular area in the data center starts to run hot.
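A minimal sketch of the rack-level approach — compare each rack's draw to the room average and flag the outliers, rather than trusting a room-wide watts-per-square-foot number. The rack names, readings, and 25% threshold are all illustrative assumptions:

```python
# Illustrative readings; real deployments would pull these from
# metered PDUs, one figure per rack.
rack_watts = {"A1": 4200, "A2": 4100, "B1": 7900, "B2": 4300}
avg_w = sum(rack_watts.values()) / len(rack_watts)
# Flag any rack drawing 25% or more above the room average as a hot spot.
hot = sorted(r for r, w in rack_watts.items() if w >= 1.25 * avg_w)
print(f"room average {avg_w:.0f} W/rack, hot spots: {hot}")
```

A room-wide average over these four racks looks healthy, while the per-rack view immediately isolates the one rack that needs attention — which is exactly the point of metering at the rack.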

Note: the container technologies mentioned are software containers of the type described in this Sun white paper, not the shipping containers used for data center equipment.

Read more