HP Joins the Container Data Center Market


The Register writes on HP's container offering.

Hewlett-Packard has finally found its way into the data center trailer park.

It took a while, but the hardware vendor is introducing its own scheme for selling chunks of data centers in pre-packaged containers. HP joins the likes of Sun, IBM, Rackable, and Verari with similar White Trash Data Center programs.

HP calls its offering a "Performance Optimized Data Center" or POD. HP joins IBM by providing the option of filling containers not just with the vendor's own kit, but also a wide variety of third-party metal.

"We engineered our PODs to be the most flexible infrastructure in the industry," said Paul Miller, HP's marketing chief of enterprise gear. "If it can fit into a 19-inch rack, we can pretty much fit it into our POD."

Miller said customers are all about standardized hardware in the container arena. For example, the standards approach lets customers start with half a POD and then move their existing equipment into the container when they need the extra space.

Most other vendors have chosen a ground-up approach for the job — fitting their containers with specialized gear made for life in a 40-foot unit. HP, however, doesn't have new hardware for its container. But at least it's managed to cram a lot of what it currently has in there.

HP claims the shipping containers will support more than 3,500 compute nodes, or 12,000 large form factor hard drives. The company estimates that's equivalent to 4,000-plus square feet of typical data center capacity. It also promises delivery within six weeks of the customer's order.

Here is HP’s POD web site.

HP’s slick video is here.


How Strong is a Shipping Container? A Fireworks Test Demonstration

For the 4th of July, it seemed appropriate to have fireworks.  For those of you looking at container-based data centers, you are probably wondering how strong containers are.  Check out this video of a test by the 321 Fireworks crew, shown on a PBS show.

The folks at Pyro Works have a geeky version of the test, providing more details and repeating the 12” firework shell test.

If you search YouTube for “container test” you’ll find a bunch of different tests, like a container full of fireworks.  So, for those of you who are concerned about the strength of a container data center, you can compare it to conventional data center construction.

And there is a video for those of you who want to launch large fireworks shells as a test.

Happy 4th of July!


Microsoft Presentation on Containers at GigaOM's Structure 08 - a PUE of 1.3!

PCWorld reports on a container presentation at GigaOM's Structure 08 by Microsoft's Daniel Costello, director of data center research.

"The idea of modular, portable data centers is key to the industry's future," said Daniel Costello, Microsoft director of data center research, in a presentation at GigaOM's Structure 08 conference in San Francisco. "That's why I'm here to talk about data centers, not just for Microsoft but for our customers as well."

Buying these boxes from server vendors can be more energy-efficient and cost-effective than building a new, traditional data center, he said, and Microsoft sees them as more than just a way to add extra computing capacity at short notice. "We see them as a primary packaging unit," he said.

Using shipping containers is part of an effort by Microsoft to radically rethink its data centers, as it tries to add more computing capacity in a way that is cost effective and power efficient. "At Microsoft, we're questioning every component in the data center, up to and including the roof," Costello said. That includes "eliminating concrete from our data center bills."

"The definition of a datacenter has changed. It's not just bricks and mortar any more, and moving forward, we think it can be a lot more energy efficient," he said.

But vendors building portable data centers today are filling them with equipment that was designed for traditional data centers. "Moving forward, we need to design systems specifically for this form factor. If we look at the containers, that form factor will change over time as well."

Daniel covers the efficiency gains in terms of PUE.

They can offer a better power usage effectiveness (PUE) ratio than traditional data centers, he said. PUE is a measure of a data center's power efficiency: if a server demands 500 watts and the PUE of the data center is 3.0, the power from the grid needed to run the server will be 1,500 watts, according to the definition from the Green Grid industry consortium.

"We've seen PUE at a peak of 1.3" in modular data centers, Costello said, compared with between 1.6 and 2.0 for a traditional data center.

The containers can accommodate 1,000 watts per square foot, allowing companies to power a lot more servers in a given area, he said. Many companies are unable to add more equipment to their data centers because power supplies and cooling equipment are at maximum capacity. The portable data centers are an alternative to building new facilities or extending old ones.

And, on the future of the portable data center:

But he thinks portable data centers will be deployed widely to provide services to end users. "We used to talk about a PC on every desk," he said. "But how about a data center in every town?"

Which reminds me of a conversation I had with Christian Belady at an Uptime Institute event.  Christian said he thinks power for data centers is becoming scarce. I told him I think what will happen in the future is that Microsoft guys will drive around a town looking for buildings and businesses that already have power infrastructure in place, buy the ones with good power and networking infrastructure, level the building, and put a container data center in the space within months.


More Details on Dell's XS23 server used in Cloud Computing, HPC, & Containers, designed for high energy efficiency

Dell has more details on its XS23 Cloud Server.  Seems like this is the kind of server that's good for containers as well.

XS23 Cloud Server

Tue. May 13, 2008

There has been some recent press around some of the equipment we’ve developed in our cloud computing group. The core of our business is essentially a consulting and design service and developing new products for customers is a big part of the fun. Because these aren’t mainstream PowerEdge systems, we don’t get the chance to show them off as much as we’d like. Our group has been talking for some time about “optimized designs” for cloud and hyperscale computing without showing what that can really mean, so it’s time to unveil something that’s come out of the lab.  Pictured here is one of our favorites: the XS23.

[Image: XS23 front – twelve 3.5” SAS or SATA drives; 3 per server]

This product was designed for a customer that needed maximum compute density, a healthy amount of local disk and, of course, lowest power draw possible. Our architecture team threw all that in the blender and out came a 2U standard rack mount chassis that houses four dual-socket servers and twelve 3.5” hot plug drives.


The energy-efficient design is driven by customer requirements. Yippeee!!! Cloud computing and HPC customers are driving the requirements for greener hardware designs.

This was expressly designed for an environment with high node failure tolerance - a cloud application. By designing out a lot of the capabilities that weren’t required (like redundant power) we were able to deliver the performance and power profile required. Efficiencies are gained by shared resources - as seen in a lot of general purpose designs available today. We think the key to designing the perfect cloud server is knowing where to stop and also what not to build in. This is a function of each customer’s unique design goals.


Mike Manos Provides Reasons for Microsoft's use of Containers, writes Response to ComputerWorld Article

For those of you who want to know more about why Microsoft (MSFT) is using Containers, Mike Manos just posted a response to the ComputerWorld article, "6 reasons why Microsoft's container-based approach to data centers won't work."

Mike's post is titled "Stirring Anthills ... A response to the recent Computerworld Article."

Stirring anthills is a good description of how fired up Mike is in his response to accusations that Microsoft is not listening to the industry.

Again, I highly suggest you read his post, and here are some nuggets.

 


When one inserts the stick of challenge and change into the anthill of conventional and dogmatic thinking they are bound to stir up a commotion.

That is exactly what I thought when I read the recent Computerworld article by Eric Lai on containers as a data center technology.  The article found here, outlines six reasons why containers won't work and asks if Microsoft is listening.   Personally, it was an intensely humorous article, albeit not really unexpected.  My first response was "only six"?  You only found six reasons why it won't work?  Internally we thought of a whole lot more than that when the concept first appeared on our drawing boards. 

My Research and Engineering team is challenged with vetting technologies for applicability, efficiency, flexibility, longevity, and perhaps most importantly -- fiscal viability.   You see, as a business, we are not into investing in solutions that are going to have a net effect of adding cost for costs sake.    Every idea is painstakingly researched, prototyped, and piloted.  I can tell you one thing, the internal push-backs on the idea numbered much more than six and the biggest opponent (my team will tell you) was me!

...

Those who know me best know that I enjoy a good tussle and it probably has to do with growing up on the south side of Chicago.  My team calls me ornery, I prefer "critical thought combatant."   So I decided I would try and take on the "experts" and the points in the article myself with a small rebuttal posted here:

...

The economics of cost and use in containers (depending upon application, size, etc.) can be as high as 20% over conventional data centers.   These same metrics and savings have been discovered by others in the industry.  The larger question is if containers are a right-fit for you.  Some can answer yes, others no. After intensive research and investigation, the answer was yes for Microsoft.

...

However, I can say that regardless of the infrastructure technology the point made about thousands of machines going dark at one time could happen.  Although our facilities have been designed around our "Fail Small Design" created by my Research and Engineering group, outages can always happen.  As a result, and being a software company, we have been able to build our applications in such a way where the loss of server/compute capacity never takes the application completely offline.  It's called application geo-diversity.  Our applications live in and across our data center footprint. By putting redundancy in the applications, physical redundancy is not needed.  This is an important point, and one that scares many "experts."   Today, there is a huge need for experts who understand the interplay of electrical and mechanical systems.  Folks who make a good living by driving Business Continuity and Disaster Recovery efforts at the infrastructure level.   If your applications could survive whole facility outages would you invest in that kind of redundancy?  If your applications were naturally geo-diversified would you need a specific DR/BCP Plan?   Now not all of our properties are there yet, but you can rest assured we have achieved that across a majority of our footprint.  This kind of thing is bound to make some people nervous.   But fear not IT and DC warriors, these challenges are being tested and worked out in the cloud computing space, and it still has some time before it makes its way into the applications present in a traditional enterprise data center.

As a result we don't need to put many of our applications and infrastructure on generator backup. 

...

In my first address internally at Microsoft I put forth my own challenge to the team.   In effect, I outlined how data centers were the factories of the 21st century and that like it or not we were all modern day equivalents of those who experienced the industrial revolution.  Much like factories (bit factories I called them), our goal was to automate everything we do...in effect bring in the robots to continue the analogy.  If the assembled team felt their value was in wrench turning they would have a limited career growth within the group, if they up-leveled themselves and put an eye towards automating the tasks their value would be compounded.  In that time some people have left for precisely that reason.   Deploying tens of thousands of machines per month is not sustainable to do with humans in the traditional way.  Both in the front of the house (servers,network gear, etc) and the back of the house (facilities).   It's a tough message but one I won't shy away from.  I have one of the finest teams on the planet in running our facilities.   It's a fact, automation is key. 

...

The main point that everyone seems to overlook is the container is a scale unit for us.  Not a technology solution for incremental capacity, or providing capacity necessarily in remote regions.   If I deploy 10 containers in a data center, and each container holds 2000 servers, that's 20,000 servers.  When those servers are end of life, I remove 10 containers and replace them with 10 more.   Maybe those new models have 3000 servers per container due to continuing energy efficiency gains.   What's the alternative?  How people intensive do you think un-racking 20000 servers would be followed by racking 20000 more?   Bottom line here is that containers are our scale unit, not an end technology solution.

And, Mike closes with

I can assure you that outside of my metrics and reporting tool developers, I have absolutely no software developers working for me.   I own IT and facilities operations.   We understand the problems, we understand the physics, we understand quite a bit. Our staff has expertise with backgrounds as far ranging as running facilities on nuclear submarines to facilities systems for space going systems.  We have more than a bit of expertise here. With regards to the comment that we are unable to maintain a staff that is competent, the folks responsible for managing the facility have had a zero percent attrition rate over the last four years.  I would easily put my team up against anyone in the industry. 

I get quite touchy when people start talking negatively about my team and their skill-sets, especially when they make blind assumptions.  The fact of the matter is that due to the increasing visibility around data centers the IT and the Facilities sides of the house better start working together to solve the larger challenges in this space.  I see it and hear it at every industry event.  The us vs. them between IT and facilities; neither realizing that this approach spells doom for them both.  It’s about time somebody challenged something in this industry.  We have already seen that left to its own devices technological advancement in data centers has by and large stood still for the last two decades.  As Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them."

Ultimately, containers are but the first step in a journey which we intend to shake the industry up with.  If the thought process around containers scares you then, the innovations, technology advances and challenges currently in various states of thought, pilot and implementation will be downright terrifying.  I guess in short, you should prepare for a vigorous stirring of the anthill.

Now if we can get someone to get Mike fired up like this once a month, we'll learn a lot more.  Enjoy Mike's post.
