More News on Microsoft’s Container Data Center, But What is Inside?

News.com has an article on Microsoft’s Container Data Centers.

Microsoft's data centers growing by the truckload

Posted by Ina Fried

Once upon a time, Microsoft used to fill its data centers one server at a time. Then it bought them by the rack. Now it's preparing to load up servers by the shipping container.

Starting with a Chicago-area facility due to open later this year, Microsoft will use an approach in which servers arrive at the data center in a sealed container, already networked together and ready to go. The container itself is then hooked up to power, networking and air conditioning.

"The trucks back'em in, rack'em and stack'em," Chief Software Architect Ray Ozzie told CNET News. And the containers remain sealed, Ozzie said. Once a certain number of servers in the container have failed, it will be pulled out and sent back to the manufacturer and a new container loaded in.

It's just one way that Microsoft is trying to cope in a world where it adds roughly 10,000 servers a month.

For all the talk about the containers, one thing we still do not know is what is inside them. Here are hints that Microsoft is pushing for higher-efficiency server hardware.

Gone are the days in which Microsoft settled for off-the-shelf hardware to fill its server farms. These days, Microsoft is looking for servers designed to its exact needs. It's not just that Microsoft doesn't want servers that have keyboard or USB ports--it wants motherboards that don't even have the added wiring necessary to support those things that it will never use. Such moves eliminate cost, space and power consumption.

"We are not physically building our servers, but there is very deep engagement (with the computer makers)," Josefsberg said.

Even a 1 percent or 2 percent reduction in power consumption makes a big difference, Josefsberg said. As it is, Microsoft is trying to cram a whole lot of gear in a small space. While server racks at a Web hosting facility might have power densities of 70 watts to 100 watts per square foot, things are packed far more tightly in the containers, which might be consuming thousands of watts of power per square foot.
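
To put those percentages in perspective, here is a back-of-the-envelope sketch of what a 1 or 2 percent power reduction could be worth at Microsoft's scale. The server count comes from the article; the per-server wattage and electricity rate are my own assumptions, not Microsoft's numbers:

```python
# Back-of-the-envelope estimate, not Microsoft fleet data.
SERVERS_PER_MONTH = 10_000   # from the CNET article
WATTS_PER_SERVER = 250       # assumed average draw per server
HOURS_PER_YEAR = 24 * 365
DOLLARS_PER_KWH = 0.07       # assumed industrial electricity rate

# Power draw of one year's worth of new servers alone
fleet_watts = SERVERS_PER_MONTH * 12 * WATTS_PER_SERVER
annual_kwh = fleet_watts * HOURS_PER_YEAR / 1_000
annual_cost = annual_kwh * DOLLARS_PER_KWH

for saving in (0.01, 0.02):
    print(f"{saving:.0%} power reduction: ~${annual_cost * saving:,.0f} saved per year")
```

Even with conservative assumptions, that works out to hundreds of thousands of dollars a year on a single year's worth of new servers, which is why Microsoft sweats details like unused USB wiring on motherboards.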

Watch for the server OEMs making noise about new server SKUs. I wrote about how Microsoft is influencing the industry with its purchasing power.

Given that purchasing by Microsoft's data center properties (search, Hotmail, maps, etc.) is now driving server OEMs with custom RFPs like the CBlox RFP, the OEMs are building exactly what Microsoft wants to run a more efficient data center. And, unlike Google's model of requiring exclusive designs no one else in the industry can purchase, the Microsoft SKUs spill over into the rest of the market.

People can argue the benefits of containers, but listening to Mike Manos and Christian Belady, part of what the containers give Microsoft is a method to determine the compute-per-watt efficiency of what is in the container.

Read more

Microsoft Presents Container Data Center Details at Intel Developer Forum

Microsoft Data Center Research Manager Daniel Costello presented Microsoft's CBlox (container) solution at the Intel Developer Forum. The slides provide a good set of issues to look at when evaluating containers.

There are six slides in total; here are the three that provide the most information.

[Three slides from the presentation]

The full slide deck can be accessed here under Container Slides.

Unfortunately, I couldn’t find the presentation posted, as this was a Day 0 presentation.

Read more

Oct '08, Same Month for Microsoft's Container Data Center and PDC, Convenient Timing?

The Franklin Park Herald of Northlake has an article about Microsoft's Container Data Center being completed in the first half of October.

Microsoft building slated for fall completion

July 23, 2008


By MARK LAWTON mlawton@pioneerlocal.com

Construction on Microsoft's data center in Northlake is on schedule to tentatively be finished in the first half of October.

The 550,000-square-foot building at 601 Northwest Avenue will house thousands of servers used to control Microsoft's online services.


The Microsoft data center in Northlake will contain equipment that supplies power, water and networking capabilities.

The outer shell of the building is complete. A "significant portion" of the building electrical and mechanical work has also been completed, said Mike Manos, general manager of data center services.

The building will be a "container data center" and constructed in a manner that resembles an erector set or Lego toys.

"Literally, a truck will put up with a container on its back," Manos said. "The container has the computer equipment and cooling and electrical systems. It plugs into what we refer to as the spine, which gives it power and water (for cooling the equipment) and networking capabilities. Then the truck pulls away."

When the electrical and mechanical work is done, Microsoft will start the commissioning process. That's when they start testing out all the computer network servers -- "tens to hundreds of thousands," Manos said.

That process will take one to two months. Then the center will start being used.

Microsoft's PDC (Professional Developers Conference) is scheduled for Oct. 28-30, when we will see whether the rumors of Microsoft's cloud services are true.

Microsoft Mum On 'Red Dog' Cloud Computing

The Windows-compatible platform can be seen as Microsoft's counterpart to Amazon's Elastic Compute Cloud service, known as EC2, and to the Google App Engine.

By Richard Martin
InformationWeek
July 23, 2008 12:00 PM

Attempting to respond to cloud computing initiatives from Google and Amazon (NSDQ: AMZN), Microsoft (NSDQ: MSFT) is apparently in the process of preparing a cloud-based platform for Windows that is codenamed "Red Dog."

Though reports of the project have been circulating in the blogosphere for months, Microsoft has not publicly described the service. Microsoft did not respond to repeated requests for comments for this story.

Providing developers with flexible, pay-as-you-go storage and hosting resources on Microsoft infrastructure, Red Dog can be seen as Microsoft's counterpart to Amazon's Elastic Compute Cloud service, known as EC2, and to the Google App Engine, according to Reuven Cohen, founder and chief technologist for Toronto-based Enomaly, which offers its own open-source cloud platform.

"It seems that Microsoft is working on a project codenamed 'Red Dog' which is said to be an 'EC2 for Windows' cloud offering," Cohen wrote on his blog, Elastic Vapor, last week. "The details are sketchy, but the word on the street is that it will launch in October during Microsoft's PDC2008 developers conference in Los Angeles."

Convenient timing? Or planned execution?

Read more

$2,600,000 UC San Diego Energy Efficient Computing Project, Heavy Instrumentation & Monitoring to Calculate Performance/Watt

UC San Diego has an article about their new Energy Efficient Computing project, GreenLight.

UC San Diego’s GreenLight Project to Improve Energy Efficiency of Computing

July 28, 2008

By Doug Ramsey

The information technology industry consumes as much energy and has roughly the same carbon “footprint” as the airline industry. Now scientists and engineers at the University of California, San Diego are building an instrument to test the energy efficiency of computing systems under real-world conditions – with the ultimate goal of getting computer designers and users in the scientific community to re-think the way they do their jobs.

[Photo of Sun Modular Datacenter]

This Sun Modular Datacenter deployed on the UC San Diego campus will be instrumented for the GreenLight project to offer full-scale processing and storage in order to test how to make computing more energy-efficient.

The National Science Foundation will provide $2 million over three years from its Major Research Instrumentation program for UC San Diego’s GreenLight project. An additional $600,000 in matching funds will come from the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2) and the university’s Administrative Computing and Telecommunications (ACT) group.

The GreenLight project gets its name from its plan to connect scientists and their labs to more energy-efficient ‘green’ computer processing and storage systems using photonics – light over optical fiber.

The goal of GreenLight is to understand computational performance per watt.

The GreenLight Instrument will enable an experienced team of computer-science researchers to make deep and quantitative explorations in advanced computer technologies, including graphics processors, solid-state disks, photonic networking, and field-programmable gate arrays (FPGAs). Jacobs School of Engineering computer science professor Rajesh Gupta and his team will explore alternative computing fabrics from array processors to custom FPGAs and their respective models of computation to devise architectural strategies for efficient computing systems.

“Computing today is characterized by a very large variation in the amount of effective work delivered per watt, depending upon the choice of the architecture and organization of functional blocks,” said Gupta. “The project seeks to discover fundamental limits of computing efficiency and devise organizing principles that will enable future system builders to architect machines that are orders of magnitude more efficient than modern-day machines, from embedded systems to high-performance supercomputers.”

The computing and systems research will yield new quantitative data to support engineering judgments on comparative “computational work per watt” across full-scale applications running on full-scale computing platforms.
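
For readers wondering what "computational work per watt" looks like as an actual calculation, here is a minimal sketch. The metric itself is just throughput divided by power draw; the two platform figures below are invented for illustration, not GreenLight measurements:

```python
def performance_per_watt(work_units: float, elapsed_s: float, avg_watts: float) -> float:
    """Work completed per second, per watt of measured power draw."""
    return (work_units / elapsed_s) / avg_watts

# Hypothetical comparison of two platforms running the same job
cpu_cluster = performance_per_watt(work_units=1_000_000, elapsed_s=600, avg_watts=4_000)
fpga_node = performance_per_watt(work_units=1_000_000, elapsed_s=900, avg_watts=400)

print(f"CPU cluster: {cpu_cluster:.2f} units/s/W")
print(f"FPGA node:   {fpga_node:.2f} units/s/W")  # slower, but far more work per watt
```

The point of instrumenting the container is to get trustworthy numbers for the denominator: without measured power at the component level, the metric is guesswork.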

This is a big win for Sun Microsystems and their containers.

“Using the Sun Modular Datacenter as a core technology and making all measurements available as open data will form a unique, Internet-accessible resource that will have a dramatic impact on academic, government and private-sector computing,” said Emil J. Sarpa, Director of External Research at Sun Microsystems, Inc. “By placing experimental hardware configurations alongside traditional rack-mounted servers and then running a variety of computational loads on this infrastructure, GreenLight will enable a new level of insight and inference about real power consumption and energy savings.”

According to DeFanti, the project decided to build the GreenLight Instrument around the Sun Modular Datacenter because, “it’s the fastest way to construct a controlled experimental facility for energy research purposes.” The modular structure also means the GreenLight Instrument can be cloned – unlike bricks-and-mortar computer rooms that cannot be ordered through purchasing.

[Photo of Sun Modular Datacenter interior]

Interior of the Sun Modular Datacenter prior to deployment of up to 280 servers and other equipment that will turn the shipping container into the GreenLight Instrument.

And to make things a bit sexy, they plan on using a virtual environment to visualize the inside of the containers.

Rather than give scientists physical access to the GreenLight Instrument, OptIPortal tiled display systems will serve as visual termination points – allowing researchers to “see” inside the instrument. Users will also be able to query and visualize all sensor data in real time and correlate it interactively and collaboratively in this immersive, multi-user environment.

Once a virtual environment of the system has been created, scientists will be able to walk into a 360-degree virtual reality version in Calit2’s StarCAVE. Users will be able to zoom into the racks of clusters as well as see and hear the power and heat, from whole clusters of computers down to the smallest instrumented components, such as computer processing and graphics processing chips.

Read more

Sun's CIO writes Forbes Commentary, provides Green Data Center ideas

Forbes has a commentary by Sun's CIO, Bob Worrall.

Commentary
A Green Budget Line
Bob Worrall 07.28.08, 6:00 AM ET


While the headlines are focused on nearly $5 gas and its impact on consumers, skyrocketing energy prices also are biting into information technology organizations as the costs of powering and cooling data centers reach all-time highs. In many companies, the cost to run data centers is now the second-largest expense after people. And the cost of powering data centers worldwide could grow from $18.5 billion in 2005 to $250 billion by 2012. There is no better time than right now to focus investment on more energy-efficient data centers.

Making the data center greener has been a hard sell. Most decision makers find the goal of reducing carbon emissions laudable but are held back by concerns that it will cut into profits or performance. When energy costs were relatively low, arguing that a current investment would reduce future costs was an uphill battle. But today's environment might provide the harsh dollars-and-cents context that's needed to move this issue from a decision about being green to one about being fiscally responsible.

Bob offers good tips in the rest of the article.

Check the vintage of the systems in your server racks, and replace energy and space hogs. Unlike fine wine, computer hardware rarely ages gracefully. Rooting out old systems is often the easiest way to make a data center more efficient. Old hardware almost always consumes more space and power than new systems. Older systems are also usually more difficult to cool efficiently. Switching to more modern systems often allows for impressive consolidation ratios ranging from 2:1 to 10:1. You gain space and save energy even as you increase your computing power.
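
To see what those consolidation ratios mean in power terms, here is a rough sketch. The wattages are my assumptions for illustration, not figures from Bob's article:

```python
# Rough consolidation math using the 2:1 to 10:1 ratios Worrall cites.
OLD_SERVER_WATTS = 400   # assumed draw of an aging server
NEW_SERVER_WATTS = 300   # assumed draw of a modern replacement
OLD_SERVER_COUNT = 100

for ratio in (2, 5, 10):
    new_count = OLD_SERVER_COUNT // ratio
    old_kw = OLD_SERVER_COUNT * OLD_SERVER_WATTS / 1_000
    new_kw = new_count * NEW_SERVER_WATTS / 1_000
    print(f"{ratio}:1 consolidation: {old_kw:.0f} kW -> {new_kw:.1f} kW "
          f"({1 - new_kw / old_kw:.0%} less power)")
```

Even at the modest 2:1 end, the power savings outrun the raw server-count reduction, because the replacement machines also draw less per box.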

Tie IT decision makers to their facilities counterparts. Regardless of how much electricity is being consumed by the data center, chief information officers aren't usually the ones writing the utility checks. This disconnect often fuels an unnecessary debate about the importance of compute power over cost savings. IT and facilities organizations need to collaborate to make sure both understand how energy-efficient computing will help address both concerns. If you factor in the improved energy efficiency, the reduced space utilization and the rebates on servers offered by many municipal power providers, the cost to go green begins to approach negligible.

Look at virtualization and container technologies. Using the right container technologies as part of your virtualization approach isolates applications and services by using flexible, software-defined boundaries. Taking this direction can allow you to achieve even greater compression in both servers and storage. For example, one customer I worked with implemented a new storage system with built-in virtualization and eliminated 40 terabytes of physical capacity, reducing storage power and cooling costs by 60%.

Turn on the meter at the rack level. Legacy measurement of watts per square foot in a data center may show the room is running fine on average, but in reality you have hot spots throughout that are damaging equipment. Most data centers measure load at the perimeter of the data center, which predictably makes things unpredictable. It is important to take metering one step further. It needs to be measured at the rack level (watts per rack) to truly enable energy efficiency. This enables you to reduce power consumption by pinpointing attention on certain areas in a data center instead of using the traditional, scattershot approach of cranking up the fans when a particular area in the data center starts to run hot.
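
Bob's rack-level metering point is easy to illustrate. A quick sketch with invented readings shows how a healthy room average can hide a rack running dangerously hot:

```python
# Per-rack power readings in watts -- invented numbers for illustration.
rack_watts = {
    "A1": 4_200, "A2": 4_100, "A3": 9_800,   # A3 is the hot spot
    "B1": 3_900, "B2": 4_300, "B3": 4_000,
}

room_avg = sum(rack_watts.values()) / len(rack_watts)
print(f"Room average: {room_avg:,.0f} W per rack -- looks fine on paper")

# Flag racks drawing well above the average instead of cooling the whole room harder
for rack, watts in rack_watts.items():
    if watts > 1.5 * room_avg:
        print(f"Hot spot: rack {rack} at {watts:,} W ({watts / room_avg:.1f}x average)")
```

This is exactly the pinpointing Bob describes: attention goes to rack A3, not to cranking up the fans for the whole room.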

Note: the container technologies mentioned here are software containers of the type described in this Sun White Paper, not the physical containers used for data center equipment.

Read more