Dell Servers inside Windows Azure Cloud Containers at PDC 09

Here is a YouTube video of the Windows Azure container at PDC 09.

With Dell inside.


Steve Clayton has an image.


DataCenterKnowledge has a post as well.

Optimized for Outdoors?
Optimized for Outdoors?
The Generation 4 container on display at PDC looks to be completely optimized for outdoor use, with a design that relies upon fresh air ("free cooling") rather than air conditioning. While we're not on-site at PDC and haven't been able to inspect the container, it features louvers on the exterior of the container to draw fresh air into the cold aisle and expel hot air from the rear of the hot aisle. Here's a look at a video of the container shot by a PDC attendee:

The container features the branding for Windows Azure, Microsoft’s developer-focused cloud computing platform. Windows Azure will run at facilities in Chicago, San Antonio, Dublin, Amsterdam, Singapore and Hong Kong.

Read more

Microsoft’s Daniel Costello and Christian Belady Container Data Centers Video

CNET News has a video interview of Daniel Costello and Christian Belady.

Many of you recognize Christian.  Daniel is not as well known, but he is the brains behind the 4th generation Microsoft data center.

But Microsoft has indicated how the next generation of data center will improve upon the Chicago design.

Moving to containers allows Microsoft to bring in computing capacity as needed, but still requires the company to build the physical building, power and cooling systems well ahead of time. The company's next generation of data center will allow those things to be built in a modular fashion as well.

Daniel gave an interview to PCWorld that gives you some idea of his thinking.

"The idea of modular, portable data centers is key to the industry's future," said Daniel Costello, Microsoft director of data center research, in a presentation at GigaOM's Structure 08 conference in San Francisco. "That's why I'm here to talk about data centers, not just for Microsoft but for our customers as well."

Buying these boxes from server vendors can be more energy-efficient and cost-effective than building a new, traditional data center, he said, and Microsoft sees them as more than just a way to add extra computing capacity at short notice. "We see them as a primary packaging unit," he said.

Using shipping containers is part of an effort by Microsoft to radically rethink its data centers, as it tries to add more computing capacity in a way that is cost effective and power efficient. "At Microsoft, we're questioning every component in the data center, up to and including the roof," Costello said. That includes "eliminating concrete from our data center bills."

"The definition of a datacenter has changed. It's not just bricks and mortar any more, and moving forward, we think it can be a lot more energy efficient," he said.

But vendors building portable data centers today are filling them with equipment that was designed for traditional data centers. "Moving forward, we need to design systems specifically for this form factor. If we look at the containers, that form factor will change over time as well."

Microsoft has approached every major server vendor about providing it with equipment, Costello said. He said he thinks "all major vendors" will offer portable data centers within the next two years. Vendors offering them today include Sun Microsystems, Verari Systems, Rackable Systems and American Power Conversion.

The cost benefits come partly from economies of scale. Shipping 2,000 servers in a container is more cost-effective than shipping and installing separate racks, and portable data centers don't require raised floors or as much wiring.

They can offer a better power usage effectiveness (PUE) ratio than traditional data centers, he said. PUE is a measure of a data center's power efficiency: if a server demands 500 watts and the PUE of a data center is 3.0, the power drawn from the grid to run the server will be 1,500 watts, according to a definition from the Green Grid industry consortium.

"We've seen PUE at a peak of 1.3" in modular data centers, Costello said, compared with between 1.6 and 2.0 for a traditional data center.

The containers can accommodate 1,000 watts per square foot, allowing companies to power a lot more servers in a given area, he said. Many companies are unable to add more equipment to their data centers because power supplies and cooling equipment are at maximum capacity. The portable data centers are an alternative to building new facilities or extending old ones.
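To put the 1,000 watts per square foot figure in perspective, here is a rough back-of-the-envelope sketch (the container dimensions and per-server wattages below are my assumptions for illustration, not figures from the article):

```python
def max_servers(floor_sqft: float, watts_per_sqft: float,
                server_watts: float) -> int:
    """Upper bound on how many servers a footprint can power
    at a given power density."""
    return int(floor_sqft * watts_per_sqft // server_watts)

# Assumed: a standard 40 ft x 8 ft shipping container,
# roughly 320 sq ft of floor, at 1,000 W/sq ft.
print(max_servers(320, 1000, 500))  # 640 servers at 500 W each
print(max_servers(320, 1000, 150))  # 2133 servers at 150 W each
```

At lower per-server wattages the count approaches the 2,000+ servers per container mentioned elsewhere in this post; few traditional raised-floor rooms can deliver anywhere near that density.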

Daniel mentions some of the downsides of containers.

There are some drawbacks and plenty of questions to be answered, he said. Some of the cons include a higher cost of failure if the power to a container is cut off, as well as new risks in terms of regulatory compliance. In addition, portable data centers offered today can't accommodate servers from multiple vendors, he said.

Read more

Mike Manos’s Perspective on Containers in the Data Center – Part 1

Mike has a new post on a practical guide to data center containers. 

Bottom Line: Because containers encapsulate a large amount of IT capacity along with power and cooling infrastructure, the decision to use containers requires a multi-discipline view of their benefits and impacts.  Google and Microsoft have done this exercise, and anyone thinking about containers should consider that both of these companies have figured out where they make sense.

It is one of Mike's long posts, so I'll share excerpts: a Part 1 to get started, with a Part 2 to follow.

I will do my best to try and balance this view across four different axes: the Technology, Real Estate, Financial and Operational Considerations.  A sort of 'Executive's View' of this technology. I do this because containers as a technology cannot and should not be looked at from a technology perspective alone.  To do so is complete folly and you are asking for some very costly problems down the road if you ignore the other factors.  Many love to focus on the interesting technology characteristics or the benefits in efficiency that this technology can bring to bear for an organization, but to implement this technology (like any technology really) you need to have a holistic view of the problem you are really trying to solve.

Mike contrasts Moore’s law vs. the glacial pace of innovation in the data center.

Isn’t it interesting, then, that the places where all of this wondrous growth and technological wizardry has manifested itself, the data center, computer room, or data hall, have been moving along at a near evolutionary standstill?  In fact, if one truly looks at the technologies present in most modern data center designs, one would ultimately find small differences from the very first special-purpose data room built by IBM over 40 years ago.

Mike goes on to discuss modularization in the data center.  The NSA is listening to advice like this and has made modularization a requirement of its $1.5 billion data center.

Data Centers themselves have a corollary to the beginning of the industrial revolution.   In fact I am positive that Moore’s observations would have held true as civilization transitioned from an agriculture-based economy to an industrialized one.   In fact one might say that the current modularization approach to data centers is really just the industrialization of the data center itself.

In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans.  Each is a one-of-a-kind tool built to solve a set of problems.  Think of the eco-system that has developed around building these modern-day castles: architects, engineering firms, construction firms, specialized mechanical industries, and a host of others that all come together to create each and every masterpiece.    So too did those who built plows and hammers, clocks and sextants, and the tools of the previous era specialize in making each item, one by one.   That is, of course, until the industrial revolution.

Data Centers are not buildings they are information factories.

The data center modularization movement is not limited to containers, and there is some incredibly ingenious stuff happening in this space today outside of containers, but one can easily see the industrial benefits of mass-producing such technology.  This approach simply creates more value, reduces cost and complexity, makes technology cheaper and simplifies the whole.  No longer are companies limited to working with the arcane forces of data center design and construction; many of these components are being pre-packaged, pre-manufactured and becoming more aggregated, reducing the complexity of the past.

And why shouldn’t it?   Data Centers live at the intersection of Information and Real Estate.   They are more like machines than buildings but share common elements of both buildings and technology.   All one has to do is look at it from a financial perspective to see how true this is.   In terms of construction, the costs of data centers break down to a simple format: roughly 85% of the total cost to build the facility is made up of the components, labor, and technology to deal with the distribution and cooling of the electrical consumption.

Data Centers are mostly built out of sync with the pace of technology change.

From a technology perspective, a key driver behind this change is the fact that most existing data centers are not designed or instrumented to handle the demands of the changing technology requirements occurring within the data center today.

Data Center managers are being faced with increasingly varied redundancy and resiliency requirements within the footprints that they manage.   They continue to support environments that heavily rely upon the infrastructure to provide robust reliability to ensure that key applications do not fail.  But applications are changing.  Increasingly there are applications that do not require the same level of infrastructure to be deployed, because the application is built in such a way that it is either geo-diverse or server-diverse. Perhaps the internal business units have deployed some test servers or lab / R&D environments that do not need this level of infrastructure. With the number of RFPs out there demanding more diversity from software and application developers, to solve the redundancy issue in software rather than through large capital spend on behalf of the enterprise, this is a trend likely to continue for some time.  Regardless of the reason, the variability challenge that data center managers face is greater than ever before.

Mike brings up the problem that occurs as people build costly custom buildings.

From a business / finance perspective companies are faced with some interesting challenges as well.  The first is that the global inventory of data center space (from a leasing or purchase perspective) is sparse at best.    This results from the glut of capacity left over after the dotcom era having been absorbed by the land grab that occurred after 9/11, with the Finance industry chewing up much of the good inventory.    Additive to this is the fact that there is a real reluctance to build these costly facilities speculatively.   This is a combination of how the market was burned in the dotcom days, and the general lack of availability of, and access to, large sums of capital.   Both of these factors are driving data center space to be a tight resource.

In my opinion the biggest problem across every company I have encountered is that of capacity planning.  Most organizations cannot accurately predict how much data center capacity they will need in the next year, let alone 3 or 5 years from now.   It's a challenge that I have invested a lot of time trying to solve, and it's just not that easy.   But this lack of predictability exacerbates the problems for most companies.  By the time they realize they are running out of capacity or need additional capacity, it becomes a time-to-market problem.   Given the inventory challenge I mentioned above, this can position a company in a very uncomfortable place.   Especially if you take the all-in industry average of building a traditional data center yourself: a timeline somewhere between 106 and 152 weeks.

to be continued…

Read more

Dell’s mini Container Data Center fits in a suitcase

GigaOM has a video interview of Dell’s Jimmy Pike, director of system architecture in Dell’s Data Center Solutions group.

Pike has crammed two servers running dual-core, 2.5 GHz Intel processors (Harpertown), 32 GB of memory, 4 TB of disk space for storage, a power supply, a 5-port Gigabit Ethernet switch and even some solid-state drives into a metal box. The box consumes about 325 watts, is relatively portable and provides enough performance to act as a DNS server or a data center for a small business (although since it's relatively portable, data theft becomes a distinct possibility). Pike uses it to test ideas and software for clients of Dell's Data Center Solutions group, which sells custom-built servers to hyperscale computing clients such as Facebook.

Read more

Microsoft Container Data Center Video

Here is the video of Microsoft’s server containers being installed in the new Chicago data center.  Below are screen captures from the video.  These screen shots are only interesting because we know there are 2,000+ servers in the containers; if you don’t know what’s inside, there isn’t much to see, which is probably another reason why this isn’t mass tech media news.

[screen captures from the video]


Microsoft Chicago Data Center Container Bay

It’s too bad the container data center isn’t getting better press.

The problem is directly related to the contrast between this one blog entry http://blogs.technet.com/msdatacenters/archive/2009/09/28/microsoft-celebrates-chicago-data-center-grand-opening.aspx and the Microsoft EMEA press release http://www.microsoft.com/emea/presscentre/pressreleases/DublinDataCentrePR_240909.mspx.

Keep this example in mind if you want good media coverage.  Many times I am in the role of a media guy, and hang out with media people.  They need help to tell good stories.

BTW, the best quotes I’ve seen are from InformationWeek.

Microsoft isn't done pushing this modular approach, says Daniel Costello, director of the company's data center research and engineering. Its researchers are working on ways to deliver air conditioning and heating as modular units as well, since they're a huge part of a data center's fixed equipment costs.

And there's a wide open space in the middle of the Chicago data center, where there are no yellow parking space lines painted. The next generation of modular units won't be shipping containers, Costello says, though he's not yet ready to say what form they will take.

Read more