InfiniBand networking saves energy

If you are not familiar with the InfiniBand standard, think of it as a way to save energy.

InfiniBand™ is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage, and embedded systems. InfiniBand is a true fabric architecture that leverages switched, point-to-point channels, with data transfers today at up to 120 gigabits per second, both in chassis backplane applications and through external copper and optical fiber connections.

InfiniBand™ has a robust roadmap defining increasing speeds through 2011, and 40 Gb/s InfiniBand™ products are shipping today. The roadmap shows projected increased market demand for InfiniBand™ 1x EDR, 4x EDR, 8x EDR and 12x EDR beyond 2011, which translates to bandwidths nearing 1,000 Gb/s in the next three years.

InfiniBand is a pervasive, low-latency, high-bandwidth interconnect that requires low processing overhead and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded applications, scaling from two nodes up to single clusters interconnecting thousands of nodes.

How good is InfiniBand? It is used in 18 of the top 20 green supercomputers.

The latest TOP500 list showed InfiniBand rising, connecting 182 systems (36 percent of the TOP500), and it clearly dominated the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand, and although the new #1 system (Jaguar, from ORNL) is a Cray, it’s important to note that InfiniBand is being used as the storage interconnect connecting Jaguar to the “Spider” storage systems.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 out of the 20 most efficient systems on the list used InfiniBand, and InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent more efficient than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network and spending dollars on the processors with an unbalanced architecture.
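
For context, the efficiency the TOP500 reports is the Linpack ratio Rmax/Rpeak, the fraction of theoretical peak performance a system actually achieves. Here is a minimal sketch of that arithmetic; the two clusters below are hypothetical stand-ins, not actual list entries:

```python
# Sketch: TOP500-style efficiency is Linpack Rmax divided by theoretical Rpeak.
# The two systems below are hypothetical examples, not actual TOP500 entries.

def linpack_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Return the achieved fraction of theoretical peak performance."""
    return rmax_tflops / rpeak_tflops

# Hypothetical clusters: same processors, different interconnects.
ib_cluster = linpack_efficiency(rmax_tflops=96.0, rpeak_tflops=100.0)    # ~96%
gige_cluster = linpack_efficiency(rmax_tflops=63.0, rpeak_tflops=100.0)  # ~63%

print(f"InfiniBand cluster: {ib_cluster:.0%} of peak")
print(f"GigE cluster:       {gige_cluster:.0%} of peak")
print(f"Relative gain:      {(ib_cluster / gige_cluster - 1):.0%}")
```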

If you don’t have a supercomputer, virtualized I/O is another area where InfiniBand pays off.

The StorageMojo take
Good to see Iband used as a big cheap pipe. Its low latency, cheap switch ports and high bandwidth make it the best choice for this application.

VMware and Hyper-V have serious I/O problems. Xsigo helps manage them.

Read more

Part 2 on Microsoft’s Azure container: looking closer, there are server fans

I wrote the earlier post on Microsoft’s Azure container referencing a Bob Muglia quote from InformationWeek.

Bob Muglia: In Chicago, we used the previous generation of container. The one on the show floor incorporates advances coming out of Microsoft Research. Our servers have no fans in them

Thanks to a smart reader’s ears: they point out that there are server fans after all.

If you listen to this video, you can hear the all-too-familiar server fans. So InformationWeek’s statement about fanless servers must be a misquote.

If I wanted to, I could reach out to a friend at Microsoft to take a closer, better picture of the back of the servers and capture the fans. But I am not going to call in a favor for something that you can all hear by listening to the video.

Read more

Evolution of the Data Center Idea – Remove people from installation and service of servers

I had a conversation with John G. Miner at a Santa Fe Institute Business Network event. John wrote the Intel paper on air economizers to reduce data center cost.

One of the ideas we discussed that makes total sense for cloud computing and big data center companies is automated handling systems for installing and servicing servers. At some point, if not already, it is going to make economic sense to use automated handling equipment to deliver servers to the rack location.

This would make current racks and form factors obsolete, and special connectors would need to be developed for power and cooling. Imagine servers being loaded automatically into shipping to be sent out for service, and the reverse on receiving.

How hot can you make a data center with no people? And how much could you increase densities or change airflow with no thought for people access?

If anybody can do this, I would guess Google and Amazon. Google has complete control over its data center systems. Amazon has experience with automated handling systems in its warehouses and knows the ROI of these devices.
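
To see why the economics could work, here is a back-of-the-envelope break-even sketch. Every number in it is a hypothetical placeholder, not an Amazon, Google, or vendor figure:

```python
# Back-of-the-envelope break-even for automated server handling.
# All numbers are hypothetical placeholders for illustration only.

automation_capex = 2_000_000        # robots, rails, custom rack hardware ($)
servers_touched_per_year = 50_000   # installs plus swaps across the fleet
manual_cost_per_touch = 40          # labor, scheduling, human error ($)
automated_cost_per_touch = 5        # power, maintenance, amortized wear ($)

annual_savings = servers_touched_per_year * (
    manual_cost_per_touch - automated_cost_per_touch
)
breakeven_years = automation_capex / annual_savings

print(f"Annual savings: ${annual_savings:,}")
print(f"Break-even in:  {breakeven_years:.1f} years")
```

With placeholder numbers like these, the gear pays for itself in about a year; the real question is how the per-touch costs scale at fleet size.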

Read more

HP’s acquisition of 3Com allows energy-efficient networking integration

There is news all over about HP’s acquisition of 3Com; the WSJ is one example.

H-P to Acquire 3Com for $2.7 Billion

By JUSTIN SCHECK

Hewlett-Packard Co. said it agreed to buy networking-gear maker 3Com Corp. for $2.7 billion in cash, the latest move by H-P to bulk up its product line amid a broader push by the few remaining technology giants to turn themselves into one-stop shops for corporate customers.

The Palo Alto, Calif., company—the world's largest tech company by revenue—also preannounced positive fiscal fourth quarter results. It posted an 11% jump in operating earnings and an 8% decline in revenue from a year ago, beating analyst estimates. In a sign the tech industry is leaving the recession behind, H-P also raised its revenue forecast for the new fiscal year.

By buying 3Com, a onetime leading tech company that has fallen on tougher times this decade, H-P is aiming to goose its growth. The move also puts H-P more squarely on the turf of Cisco Systems Inc.

David Donatelli, H-P's vice president in charge of the corporate-computer division, said 3Com has a better set of networking products for large corporate clients than H-P currently sells and a market share of more than 30% in the China networking market. With the deal, Mr. Donatelli said, "we get industry-leading products."

Drilling into the actual HP press release, I found the green, energy-efficiency angle most miss. The following is a quote from Bob Mao, CEO of 3Com:

3Com’s networking products are based on a modern architecture which has been designed to offer better performance, require less power and eliminate administrative complexity when compared against current network offerings

HP is going to deploy the 3Com solution company-wide, and you can bet that, if the results are good, they will appear in a future HP case study.

“We are confident that we can run our entire global business of 300,000-plus employees, including our next-generation data centers, entirely on the new HP networking solutions,” said Randy Mott, executive vice president and chief information officer, HP. “Based on our experience and extensive testing of 3Com’s products, we are planning to undertake a global rollout within HP as soon as possible after the completion of the acquisition.”

Read more

Push for energy-efficient computing

EDN (Electronics Design, Strategy, News) has an article on energy-efficient computing.

Industry standards lead push toward energy-efficient computing

Environmental concerns and rising energy costs are spurring industry and government groups to develop requirements for high-efficiency AC and DC power conversion, leading to energy-efficient servers. Meeting the newest specifications will demand knowledge of competing power-conversion topologies, components, and design.

By Lee Harrison, Peritus Power -- EDN, 11/12/2009

AT A GLANCE

  • The ac/dc power-conversion step in the overall power chain for server farms can yield some of the most significant gains in power efficiency.
  • To meet industry standards, manufacturers have taken different approaches, including using interleaved PFC (power-factor control), bridgeless PFC, and resonant topologies.
  • Thanks to its zero-voltage (0V) switching, which all but eliminates switching losses and allows higher switching frequencies and smaller footprints, resonant-converter topology may be able to achieve Energy Star Platinum standards.
  • For the near future, silicon will remain the dominant switching-semiconductor material, and gallium nitride will start to make inroads over the next year.

In addition to environmental concerns, the increasing cost of electricity is driving data-center managers to more energy-efficient installations. As utility bills become the primary expense for data centers, electricity costs now outweigh real-estate costs, with power consumption per data center ranging from 2 to 22 MW. In 2007, the Internet accounted for 9.4% of total US electricity consumption and 5.3% of global electricity consumption. Networking equipment, such as modems, routers, hubs, and switches, accounted for about 25% of the electricity demand in an average office. If the computers and servers in an infrastructure require 200 kW, then the networking components in that infrastructure need 50 kW. In addition, 45% of the power a data center consumes is for air-conditioning and cooling. In modern data centers, performance per watt has become more critical than performance per processor.
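
Those ratios are easy to sanity-check, and they show why the ac/dc conversion step called out above matters so much: every watt lost in conversion is multiplied by the cooling overhead. A quick sketch of the arithmetic; the 200 kW IT load and 25 percent networking share come from the article, while the treatment of cooling as 45 percent of total facility power and both PSU efficiency figures are my own illustrative assumptions:

```python
# Sanity check of the EDN article's ratios, plus the leverage of the
# ac/dc conversion step. The 200 kW IT load and 25% networking share are
# the article's numbers; reading the 45% cooling figure as a share of
# total facility power, and both PSU efficiencies, are my assumptions.

it_load_kw = 200.0                 # computers and servers (article example)
networking_kw = 0.25 * it_load_kw  # networking ~25% of IT load -> 50 kW
cooling_share = 0.45               # cooling as a share of total facility power

def facility_power(psu_efficiency: float) -> float:
    """Total facility power for a given ac/dc conversion efficiency."""
    # Wall power drawn by IT and networking gear through their supplies.
    wall_kw = (it_load_kw + networking_kw) / psu_efficiency
    # Cooling scales with everything else: total = wall / (1 - cooling_share).
    return wall_kw / (1 - cooling_share)

baseline = facility_power(psu_efficiency=0.85)  # legacy supplies (assumed)
improved = facility_power(psu_efficiency=0.94)  # high-efficiency (assumed)

print(f"Facility power at 85% PSU efficiency: {baseline:.0f} kW")
print(f"Facility power at 94% PSU efficiency: {improved:.0f} kW")
print(f"Savings from the conversion step alone: {baseline - improved:.0f} kW")
```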

If you are a power-supply/conversion geek, read the rest of the article. Here is more background about the author.

Lee Harrison is director of Peritus Power. Previously, he worked at Sun Microsystems as a power-system architect and technical leader for the Sparc and x86 platforms, developing the technology and strategy for power conversion with Emerson Network Power, Delta, Lineage, Power One, and FDK from 2000 to 2009. He provides input to the Environmental Protection Agency on power-related issues and has been a voting member of the Climate Savers ac/dc work group for the last four years. Before joining Sun, Harrison was an engineering manager for a UK-based defense power-supply company, specializing in high-density, low-profile dc/dc and ac/dc power conversion and nuclear-protected electronics.

Peritus Power brings up interesting challenges in making systems more energy-efficient.

The Problems Attaining High Efficiency.

Higher efficiency sounds like an easy task, but it is littered with problems for the unwary. Taking a legacy design and modifying it is not always possible; new designs quite often require starting from scratch. Higher efficiency means faster switching edges, which means precautions must be taken to ensure radiated EMI is not a problem. More noise in the PSU can cause I2C communication issues, and of course power-sapping snubbing is not an option. Cost is an issue if you are not careful throughout the entire design.

I’ve spent many hours working with the EMI testing team while at Apple, and it makes total sense that the faster switching edges of higher-efficiency power conversion have an effect on EMI radiation.
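
A common EMI rule of thumb makes the connection concrete: the significant spectral content of a switching edge extends to roughly 0.35 divided by the 10-90 percent rise time. A quick sketch with illustrative rise times (none of these numbers come from the article):

```python
# Rule of thumb: the significant spectral content of a switching edge
# extends to roughly BW ~ 0.35 / t_rise (10-90% rise time).
# The rise times below are illustrative, not from any specific PSU design.

def emi_bandwidth_mhz(rise_time_ns: float) -> float:
    """Approximate highest significant frequency content of an edge, in MHz."""
    return 0.35 / (rise_time_ns * 1e-9) / 1e6

for t_rise in (100.0, 20.0, 5.0):  # ns: slow legacy edge -> fast modern edge
    print(f"{t_rise:5.0f} ns edge -> spectral content to "
          f"~{emi_bandwidth_mhz(t_rise):.1f} MHz")
```

Cutting the rise time from 100 ns to 5 ns pushes the edge’s energy from a few MHz up to tens of MHz, which is exactly where radiated-emissions limits start to bite.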

The last thing you want is a high-efficiency system that interferes with the operation of other IT equipment.

Read more