eBay (first Windows Azure platform appliance customer) uses the Windows Azure cloud to host its Apple iPad listings

eBay and Microsoft both posted press releases on eBay becoming the first customer for the Windows Azure platform appliance, Microsoft's private cloud offering.

eBay and Microsoft Announce Cloud Computing Agreement

Microsoft unveils new Windows Azure platform appliance for cloud computing; eBay signs up as early customer.

WASHINGTON — July 12, 2010 — Microsoft Corp. and eBay [NASDAQ: EBAY] today announced that eBay will be one of the first customers of Microsoft’s new Windows Azure platform appliance for cloud computing. The partnership is a significant joint engineering effort that will couple the innovation and power of the Windows Azure platform appliance with the technical excellence of eBay’s platform — to deliver an automated, scalable, cost-effective capacity management solution.

Microsoft also announced the limited production release of the Windows Azure platform appliance, the first turnkey cloud platform for large service providers and enterprises to deploy in their own datacenters. eBay will incorporate the Windows Azure platform appliance into two of its datacenters to further optimize its platform and achieve greater strategic agility and datacenter efficiency.

Someone made an interesting decision to test Windows Azure by selling Apple iPads.

This partnership follows a successful pilot deployment by eBay of Microsoft’s public Windows Azure platform, which offers eBay the flexibility to deploy certain applications on a public cloud while maintaining the reliability and availability of eBay.com. eBay’s page for iPad listings — http://ipad.ebay.com — is hosted on the public Windows Azure platform.


Here is the cross-company executive quote exchange.

“Microsoft’s focus on and investment in the Windows Azure platform appliance shows it is committed to world-class cloud computing solutions. eBay has the right blueprint for next-generation software-as-a-service-based applications with our platform’s architecture, scale and reliability,” said James Barrese, eBay vice president of technology. “Joint engineering on the Windows Azure platform appliance with eBay’s massive, high-volume systems allows Microsoft to demonstrate its leadership in this space and helps eBay improve our user experience through a flexible, scalable and cost-effective solution.”

The Windows Azure platform appliance consists of Windows Azure, Microsoft SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. The appliance is optimized for scale-out applications and datacenter efficiency across hundreds to tens of thousands of servers.

“eBay has one of the world’s largest ecommerce platforms, and we believe the Windows Azure platform appliance provides the scalability, automation, datacenter efficiency and low cost of operations that eBay requires to meet the needs of its customers worldwide,” said Bob Muglia, president of Server and Tools Business, Microsoft.

HP and Microsoft also announced their Windows Azure partnership for deploying private clouds. HP includes its POD containers and HP networking to provide a complete HP-owned solution.

HP and Microsoft to Partner on Windows Azure Built on HP Converged Infrastructure

Collaboration to help transition customers to cloud computing

PALO ALTO, Calif., and REDMOND, Wash. — July 12, 2010 — HP and Microsoft today announced their intention to work together on a Microsoft Windows Azure platform appliance that will enable large enterprise customers to confidently and rapidly adopt cloud-based applications as business needs change and grow.

The companies will work together to deliver a complete hardware, software, services and sourcing solution that will accelerate customers’ transition to the Windows Azure platform. Customers will be able to manage the appliance with HP Converged Infrastructure on-premises or choose HP data center hosting services.

Enterprise customers adopting cloud services need a comprehensive approach, including application modernization support, an optimized infrastructure platform and flexible sourcing options. With the new Windows Azure platform appliance, HP and Microsoft will help customers rapidly scale applications, deliver new online services, and migrate Windows and .NET-based applications to the cloud. This latest collaboration extends the $250 million Infrastructure-to-Application initiatives HP and Microsoft announced in January and will result in HP delivering these offerings:

Data center hosting services. HP Enterprise Services will combine deep systems management expertise, standardized processes and world-class secure data centers to manage the Windows Azure platform appliances for HP customers. HP and Microsoft plan to release a limited production Windows Azure platform appliance for deployment in HP data centers by the end of the year.

Converged infrastructure for Windows Azure. HP’s current position as a primary infrastructure provider for the Windows Azure platform, coupled with HP and Microsoft’s ongoing efforts to optimize Microsoft applications for HP’s Converged Infrastructure through extensive joint engineering and development, will allow HP to deliver an industry-leading cloud deployment experience for its customers. The Converged Infrastructure for the Windows Azure platform appliance will include the following:

HP Networking, which delivers to customers a flexible network fabric that is simpler, higher-performing and more secure, at up to 65 percent lower cost of ownership than competitive solutions.*

HP ProLiant servers, which are highly dense, highly scalable computing platforms that help customers speed application delivery, better utilize IT resources and achieve strong returns on investments.

The appliance can be deployed in HP Performance-Optimized Datacenters (PODs), which deliver improved power and data center capacity as well as rapid data center expansion. HP PODs allow customers to increase capacity without the capital expense of brick-and-mortar construction. They will be used in addition to traditional data center deployments.

Application modernization, migration and integration services for Windows applications. HP’s expertise in complex environments, specific industries, frameworks, processes and resources will help customers modernize, migrate and integrate their applications while balancing costs and speed when adopting the Windows Azure platform.

Read more

RackForce builds a Green Data Center Stack with Cisco UCS Servers

I had the pleasure of spending an hour and a half chatting with Brian Fry, VP of Sales and Marketing at RackForce, and Kash Shaikh, senior marketing manager for Cisco's data center switching business. There is no way I can capture all we talked about in one blog entry, so let's start with an overall approach that was refreshing and logical to see.

I asked Brian Fry what led him to pick the Cisco UCS solution. The simple answer Brian gave is that he wanted to support the compute with the least number of people, using the least amount of power. Now if that isn't a path to a Green Data Center, I don't know what is. Yet few take this approach.

If you want the least number of people and the least power to provide compute, where do you start? At the beginning of the conversation, Brian explained that RackForce is on its 4th generation of data centers since 2001, and over that time RackForce has hired its own power and cooling expert to design and run its data centers.

So, a funny part I can't skip is the Cisco UCS stack itself:

  • Cisco UCS blade center
  • Connected with Cisco Nexus switches
  • Running in an IBM rack
  • IBM XIV SAN
  • A modular data center design that can roll out power and cooling infrastructure incrementally, up to 10 MW, to fill the 30,000 square feet of data center space

All of this together creates a Green Data Center stack, from the hydroelectric power, through the power and cooling systems, racks, network, and servers, up to a virtualization layer ready for an OS install.

I am going to write more about RackForce; I need to digest what they are doing so I can integrate it with other ideas.

Selling the Green Data Center to the CFO is one area I've been thinking about, and Brian provided some other good data points.

Read more

Microsoft's Bob Muglia says Cloud Computing provides early feedback

Cloud Computing has many benefits, but here is one you don't hear often: running the cloud for early validation. CNET has an interview with Microsoft's Bob Muglia.

You mentioned that Microsoft is pretty much doing everything for the cloud first. Does that mean that over time on-premises customers are actually going to be getting technology that's somewhat older, for better and for worse?
Muglia: Well, I think the way to look at it is that we're able to use the cloud to do a lot more of our early validation than we've ever been able to do before. You know, you see us with labs, you know, Live Labs and things like that, being able to take ideas and put them up in the cloud. More and more what you'll see is the beginning of our beta processes will be run for new things up in the cloud, because our ability to get feedback from customers is so much more rapid if customers don't have to deploy the infrastructure themselves. So, there's a set of things that we can do, which will help to reduce our cycle time, and bringing new features to market.

Could Microsoft provide a cloud environment as part of enterprise sales agreements?

I mean, in general our products run on two- to three-year cycles, and it very often takes customers at least that long to deploy them. I actually think the cloud will expedite customers' ability to get our software and our innovations, even if they run it themselves, because it will shorten our cycle for delivery, and also I think customers as they see these things available in the cloud will have a better understanding of the advantages they can get if they deploy it themselves. So, I actually don't think it slows down things at all for our customers that choose on-premises.

Or help customers run their own private clouds.

We hear a lot about this term, private cloud, meaning taking a cloud-like infrastructure and deploying it in one's own data center, taking the idea of a public cloud and having a completely private version of that replicated in someone else's data center. I guess I'm kind of curious what are you hearing the most demand from customers for when they say private cloud.
Muglia: Well, you know, one of the things we've learned is that customers have different views of the term private cloud. And so what we've been talking about is customers' ability to build their own clouds in their own data centers or for partners to be able to build clouds.

But fundamentally we do see a great deal of demand for that, because customers have some very reasonable concerns about their ability to control the environment, and they often have security concerns. So, for many circumstances having a customer build their own cloud is what absolutely makes sense for them, and we're supplying them with the tools and products they need in the form of Windows Server, System Center, and SQL Server to build their own clouds.

The business models for cloud computing are where the new opportunities are. Most focus on an AWS type of model. But it is interesting to think down the path Bob Muglia suggests: a cloud run as part of product development.

BTW, one of the problems Microsoft has is that Microsoft Update, which started in the Office team and then moved to Windows, is almost always turned off in the Server product. So Microsoft gets very little product crash data from Server. In Azure, though, they can get all of that data for known environments. That in itself is a big help for Microsoft's Server business.

Read more

Building your first data center: learn some lessons from Microsoft, who say they can build for 50% less

Building your first data center can be a challenge.  Many have tackled this task over the past few years - Microsoft, Yahoo, Intuit, ask.com, eBay, Apple, and Facebook.  Building your first is an opportunity to consolidate your IT loads and reduce costs.  Given the difficulty of getting all the ducks lined up to get the project going, the budget for the first data center can be over $250 million.

DataCenterKnowledge reports on Microsoft's latest Quincy data center.

The new data center is being built next to Microsoft’s existing 470,000 square foot data center in Quincy, which was built in 2007 and is now one of the largest data centers in the world. But the new facility will be dramatically different in both its cost and design. After years of investing up to $500 million in each data center project, Microsoft plans to spend about $250 million or less on each data center going forward.

One trap I have seen many fall into is building a big data center as the first. Why? Well, part of what drives this is that data centers are the highest-profit-margin business for the construction industry, and there are plenty of people who will tell you bigger is better. The analysts will help you justify that a $250 million data center is the sweet spot for ROI.

But a different way of thinking about this problem is to build ten $25 million data centers instead of one. The first may cost a bit more than $25 million, but you can cut costs on the next, and the next. After your third, you realize, "Hey, there is a different way we can be doing this. Let's change the design." You build three more, then you go, "Wow, we learned a lot. Let's really push for something innovative." The last three now cost $12.5 million instead of $25 million each, as sketched in the rough numbers below.
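To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The batch sizes and the $20 million middle-batch price are my own illustrative assumptions, not Microsoft's or anyone else's actual figures.

```python
# Illustrative comparison: one large build vs. an iterative series of smaller builds.
# All figures are hypothetical, taken from the rough numbers in the paragraph above.
ONE_BIG_BUILD = 250.0  # $M for a single large first data center

batches = [
    (3, 25.0),   # first three builds at ~$25M each (the very first may run a bit over)
    (3, 20.0),   # next three after a design change, assumed somewhat cheaper
    (3, 12.5),   # last three at $12.5M each, as suggested above
]

iterative_total = sum(count * cost for count, cost in batches)
facilities = sum(count for count, _ in batches)
print(f"One big build:    ${ONE_BIG_BUILD:.1f}M for 1 facility")
print(f"Iterative builds: ${iterative_total:.1f}M for {facilities} facilities")
# => about $172.5M for nine smaller facilities vs. $250M for one, plus the
#    option to change the design (or the location) every few builds.
```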

This is what Microsoft has done, but spending $500 million per data center. They built Quincy 1, San Antonio, Dublin (air-side economizer), and Chicago (containers). And the 4th generation data center is next.


One additional benefit of building a $25 million data center is that you don't end up with consultants, designers, and construction companies swarming to get your business. If you choose an incremental data center design, you'll learn a lot about what is real and what is hype. Google, Microsoft, and Amazon can do this; why can't you?

BTW, another thing Microsoft has done is figure out how to build the 4th generation data center faster than the 1st generation. Part of the reason the first data center is so big is that it was so hard to get the project going. Speed is important in addition to capabilities.

I've discussed these ideas with a few data center designers, and we have used the metaphor that data centers are designed like battle tanks. But not all businesses are the same, so not all data centers should be the same; and if you have geo-redundant software like Google, Amazon, and Microsoft do, it can be more cost effective to build different data center types, for the same reason there are light and heavy tanks.

Which brings up another benefit of Microsoft's 4th generation data center: the design is not a concrete bunker, which means it could be moved much more easily if need be.

This next-generation design allows Microsoft to forego the concrete bunker exterior seen in the original Quincy facility in favor of a steel and aluminum structure built around a central power spine. The data centers will have no side walls, a decision guided by a research project in which the company housed servers in a tent for eight months.

What happens if you focus on building iterative data centers with a range of capabilities that adapt to business needs and can be moved if business or power conditions change in a location? Doesn't this sound like a better way to spend $250 million? But the data center ecosystem is not going to promote this idea, as it changes their profits and business models.

Microsoft, Google, and Amazon's battle for cloud computing is going to continue to drive some of the most innovative thinking.  And you don't have to wait to start thinking like they do.

Read more

Two different ways to implement geo-redundant data storage - Infineta and EMC VPLEX

James Hamilton discusses inter-datacenter replication and geo-redundancy, which was quite easy for me to get my mind wrapped around, as the issues discussed have a lot in common with work I did on Microsoft's Branch Office Infrastructure Solution (BOIS) and its WAN constraints.

Inter-Datacenter Replication & Geo-Redundancy

Wide area network costs and bandwidth shortage are the single most common reason why many enterprise applications run in a single data center. Single data center failure modes are common. There are many external threats to single data center deployments including utility power loss, tornado strikes, facility fire, network connectivity loss, earthquake, break in, and many others I’ve not yet been “lucky” enough to have seen. And, inside a single facility, there are simply too many ways to shoot one’s own foot. All it takes is one well intentioned networking engineer to black hole the entire facility’s networking traffic. Even very high quality power distribution systems can have redundant paths taken out by fires in central switch gear or cascading failure modes. And, even with very highly redundant systems, if the redundant paths aren’t tested often, they won’t work. Even with incredible redundancy, just having the redundant components in the same room means that a catastrophic failure of one system could possibly eliminate the second. It’s very hard to engineer redundancy with high independence and physical separation of all components in a single datacenter.

With incredible redundancy, comes incredible cost. Even with incredible costs, failure modes remain that can eliminate the facility entirely. The only cost effective solution is to run redundantly across multiple data centers. Redundancy without physical separation is not sufficient and making a single facility bullet proof has expenses asymptotically heading towards infinity with only tiny increases in availability as the expense goes up. The only way to get the next nine is have redundancy between two data centers. This approach is both more available and considerably more cost effective.
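To put rough numbers on "the next nine," here is a minimal sketch of the availability arithmetic, assuming site failures are independent and a single site runs at 99.9% availability. Both assumptions are illustrative, not figures from James's post.

```python
# Redundant, independent sites: the service is down only when every site
# is down at the same time, so the unavailability terms multiply.
def combined_availability(site_availability: float, sites: int) -> float:
    return 1 - (1 - site_availability) ** sites

single = 0.999                              # example: one facility at "three nines"
print(combined_availability(single, 1))     # 0.999
print(combined_availability(single, 2))     # 0.999999, "six nines" if truly independent
# Real-world failures are never perfectly independent (software bugs, operator
# error, correlated events), so the gain is smaller in practice, but a second
# site still buys far more availability than further hardening a single facility.
```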

The solution James references is from Infineta.

Last week I ran across a company targeting latency sensitive cross-datacenter replication traffic. Infineta Systems announced this morning a solution targeting this problem: Infineta Unveils Breakthrough Acceleration Technology for Enterprise Data Centers. The Infineta Velocity engine is a dedupe appliance that operates at 10Gbps line rate with latencies under 100 microseconds per network packet. Their solution aims to get the bulk of the advantages of the systems I described above at much lower overhead and latency. They achieve their speed-up four ways: 1) hardware implementation based upon FPGA, 2) fixed-sized, full packet block size, 3) bounded index exploiting locality, and 4) heuristic signatures.
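For readers who have not run into dedupe appliances, here is a toy sketch of fixed-size-block deduplication with a bounded, locality-friendly index. It illustrates the general technique only; Infineta's actual engine is implemented in FPGA hardware with its own block format and heuristic signatures.

```python
import hashlib
from collections import OrderedDict

BLOCK_SIZE = 1500          # assume one packet-sized chunk per dedupe block
INDEX_CAPACITY = 100_000   # bounded index: remember only recently seen blocks

seen = OrderedDict()       # LRU-ordered set of block fingerprints

def encode(block: bytes):
    """Emit a short reference if the block was seen recently, otherwise the raw block."""
    fp = hashlib.sha1(block).digest()
    if fp in seen:
        seen.move_to_end(fp)        # exploit locality: keep hot fingerprints indexed
        return ("ref", fp)          # ~20 bytes cross the WAN instead of 1500
    seen[fp] = None
    if len(seen) > INDEX_CAPACITY:
        seen.popitem(last=False)    # evict the least recently used fingerprint
    return ("raw", block)

# The same 1500-byte block sent twice: the second send collapses to a reference.
data = b"x" * BLOCK_SIZE
print(encode(data)[0])  # raw
print(encode(data)[0])  # ref
```

A real appliance keeps a mirrored index on the receiving side so references can be expanded back into the original bytes; doing this at 10Gbps line rate with bounded latency is what pushes the design into hardware.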

Infineta provides a technology overview

Technology Overview

Infineta Systems delivers solutions based on several technologies that significantly reduce the amount of traffic running across today’s data center WAN interconnect. Our groundbreaking innovation centers on the patent-pending Velocity Dedupe Engine™, the industry’s first-ever hardware deduplication (“dedupe”) engine. Unlike alternatives, the Velocity Dedupe Engine enables our solutions to maintain the highest levels of data reduction at multi-gigabit speeds while guaranteeing port to port latencies in the few 10s of microseconds. As a result, Infineta’s solutions enable customers to accelerate all data center applications (such as replication and backup) – including ones that are highly latency sensitive. It does so while reducing overall costs incurred by this growing, bandwidth-hungry traffic.

Technology Architecture

Distributed System Architecture

Unlike traditional acceleration solutions that are assembled around monolithic processing environments, Infineta’s solutions are designed from the ground-up around a distributed processing framework. Each core feature set is implemented in a dedicated hardware complex, and they are all fused together with high-speed fabric, guaranteeing wire speed acceleration for business-critical traffic.

Hardware-based Data Reduction

Data reduction is carried out purely in hardware in a pipelined manner, allowing the system to reduce enterprise WAN traffic by as much as 80-90 percent.

Fabric-based Switching

Infineta’s solutions are built on a massive scale, fully non-blocking switch fabric that can make precise switching decisions in the face of sustained traffic bursts.

Multi-gig Transport Optimization

Infineta’s solutions employ multi-core network processors to carry out transport-level acceleration at multi-gigabit speeds. By making key resource decisions at wire-speed, the system is able to maintain up to 10Gbps traffic throughput while working around detrimental WAN characteristics, such as packet loss.
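As a rough illustration of why transport-level acceleration matters at these speeds, the well-known Mathis approximation bounds a single standard TCP flow by segment size, round-trip time, and packet loss. The numbers below are generic examples, not Infineta measurements.

```python
import math

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput <= (MSS / RTT) * (C / sqrt(p)), C ~ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Example: 1460-byte segments, 50 ms cross-country RTT, 0.1% packet loss.
print(tcp_throughput_bps(1460, 0.050, 0.001) / 1e6, "Mbps")
# => roughly 9 Mbps per flow, orders of magnitude below a 10 Gbps link,
#    which is why WAN optimizers step in at the transport layer.
```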

And EMC has VPLEX for virtualized storage across multiple data centers.


News Summary:

  • EMC advances Virtual Storage with industry’s first distributed storage federation capabilities, eliminating boundaries of physical storage and allowing information resources to be transparently pooled and shared over distance for new levels of efficiency, control and choice.
  • Groundbreaking EMC VPLEX Local and VPLEX Metro products address fundamental challenges of rapidly relocating applications and large amounts of information on-demand within and across data centers which is a key enabler of the private cloud.
  • Future EMC VPLEX versions will add cross-continental and global capabilities, allowing multiple data centers to be pooled over extended distances, providing dramatic new distributed compute and service-provider models and private clouds.
Read more