Predicting the future data center by taking Seven Steps back before leaping Seven Steps forward 2006 - 2020

Imagining what data centers will be like in 2020 is hard.  Here is a blog post by LSI’s Rob Ober that looks back 7 years, then predicts 7 years ahead.

Here is a snippet of the look 7 years into the past.

And 7 years ago, our forefathers…
It was a very different world. Facebook barely existed, and had just barely passed the “university only” membership. Google was using Velcro, Amazon didn’t have its services, cloud was a non-existent term. In fact DAS (direct attach storage) was on the decline because everyone was moving to SAN/NAS. 10GE networking was in the future (1GE was still in growth mode). Linux was not nearly as widely accepted in enterprise – Amazon was in the vanguard of making it usable at scale (with Werner Vogels saying “it’s terrible, but it’s free, as in free beer”). Servers were individual – no “PODs,” and VMware was not standard practice yet. SATA drives were nowhere in datacenters.

An enterprise disk drive topped out at around 200GB in capacity. Nobody used the term petabyte. People, including me, were just starting to think about flash in datacenters, and it was several years later that solutions became available. Big data did not even exist. Not as a term or as a technology, definitely not Hadoop or graph search. In fact, Google’s seminal paper on MapReduce had just been published, and it would become the inspiration for Hadoop – something that would take many years before Yahoo picked it up and helped make it real.

Which then nicely sets up 7 years out.

7 years from now
So – 7 years from now? That’s hard to predict, so take this with a grain of salt… There are many ways things could play out, especially when global legal, privacy, energy, hazardous waste recycling, and data retention requirements come into play, not to mention random chaos and invention along the way.

Enjoy the post; it should get you thinking about what could be.

230 people attend 7x24 Exchange Oregon 1st User Group meeting

Many data center conferences are finding it harder to get end users to attend their events.  At the same time there is pent-up demand for data center users to socialize with their peers.  What to do?

One option is a 7x24 Exchange chapter meeting, but those events tend to be small.  Then some of my Portland friends said they had 200+ people registered for their event on Feb 27, 2014 at Intel’s Jones Farm Campus.


I’ve been to the campus and it has a great facility to host 7x24 Exchange.  

Here are a couple of pictures from the event, which ended up with 230 people attending.

[Photos from the event]

If the future of Containers is Carbon Fiber, what could be done in a data center with Carbon Fiber

The Economist has an article on the possibility of carbon fiber shipping containers.

One idea Dr Lechner proposed is to make containers out of carbon-fibre composites. Such containers would be easier to use, because they would be lighter and also—if designed appropriately—might be folded flat when empty, saving space. Dr Lechner reckons a carbon-fibre container would need to travel only 120,000km (three times around the Earth) to prove cheaper than its steel equivalent. It would also be more secure, because it would be easier to scan without being opened.

I wonder what kind of data center could be built using a carbon fiber container.  It would be lighter, though drilling holes in carbon fiber would be much more difficult.

Nothing jumps out as a reason carbon fiber would make sense in a data center, unless you carry it through the whole rack and the server components; then you could probably shave 30 - 50% of the weight, which would make shipping a container’s worth of gear much easier.  Carbon fiber could also make sense in military scenarios, for planes and other areas where weight is a big issue.
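To put rough numbers on that back-of-the-envelope claim, here is a small sketch.  The container and IT gear weights are assumptions I picked for illustration, not figures from the Economist article or any vendor.

```python
# Rough, illustrative arithmetic on shipping-weight savings.
# All weights are assumptions for the example, not figures from
# the Economist article or any vendor.

STEEL_CONTAINER_KG = 3_750   # assumed empty 40 ft steel container
CF_CONTAINER_KG = 2_600      # assumed carbon fiber equivalent
IT_GEAR_KG = 18_000          # assumed racks, servers, power and cooling gear

def shipped_weight(container_kg: float, gear_kg: float, gear_savings: float) -> float:
    """Total shipped weight when carbon fiber shaves `gear_savings` off the gear."""
    return container_kg + gear_kg * (1.0 - gear_savings)

baseline = shipped_weight(STEEL_CONTAINER_KG, IT_GEAR_KG, 0.0)
for savings in (0.30, 0.50):
    lighter = shipped_weight(CF_CONTAINER_KG, IT_GEAR_KG, savings)
    print(f"{savings:.0%} gear savings: {lighter:,.0f} kg vs {baseline:,.0f} kg "
          f"({1 - lighter / baseline:.0%} lighter overall)")
```

With these assumed numbers the shipped weight drops by roughly a third to a half, which is where the logistics savings would come from.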

A peek into Amazon's approach to servers comes from LSI

Here is a blog post by Silicon Angle with Robert Ober.  In this post Robert discusses some of Amazon’s approaches to IT hardware.

“The evolutionary direction we’re going in the data center, you can call it many things – you can call it pooling, you can call it disaggregation – but at a large scale, at a rack or multiple racks or a hyperscale data center, you wanna start pulling apart the parts,” he remarks.

Optimizing infrastructure down to the component level has many benefits, both architectural and operational, but Ober considers the improvements in thermal management to be the most notable. The reason, he details, is that processors, DRAM, flash and mechanical disk all have different temperature thresholds that have to be sub-optimally balanced in traditional configurations.
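A toy way to see Ober’s point: in a converged server one shared airflow has to satisfy the most temperature-sensitive component, while disaggregated pools can each be cooled to their own limit.  The threshold values below are illustrative assumptions, not LSI or Amazon numbers.

```python
# Illustrative only: the inlet-temperature limits below are rough assumptions,
# not LSI, Amazon or vendor specifications.
THRESHOLDS_C = {
    "cpu": 45,     # processors tolerate warmer inlet air
    "dram": 40,
    "flash": 40,
    "hdd": 35,     # mechanical disks are usually the most sensitive
}

# Converged server: one shared airflow must satisfy the strictest component.
converged_setpoint = min(THRESHOLDS_C.values())
print(f"Converged chassis: cool everything to {converged_setpoint} C")

# Disaggregated pools: each resource pool runs at its own threshold.
for part, limit in THRESHOLDS_C.items():
    headroom = limit - converged_setpoint
    print(f"  {part:>5} pool can run at {limit} C (+{headroom} C headroom)")
```

The headroom on the warmer pools is what a disaggregated design can trade for less cooling energy.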

Here is a video you can watch with Robert discussing some of the ideas.  The interview was done at OCP Summit V.

Develop Future Communication Systems in AWS, Popular and Disruptive

Network Function Virtualization is a hot topic at Mobile World Congress this week.

Given that the move to NFV is about creating a Cloud to service networking needs, one of the things you can do is run an IP Multimedia Subsystem in AWS using the Clearwater project.

The standard Clearwater distribution is designed for fast deployment on Amazon Web Services.  You can stand up a large-scale Clearwater deployment on AWS in a couple of hours using the scripts included in the distribution.  Once you’re comfortable with how Clearwater works on AWS, you can adapt it for your own private cloud environment – or you could offer production services from a Clearwater deployment on AWS.

...

Clearwater is IMS in the Cloud.  IMS (the IP Multimedia Subsystem) is the standards-based architecture that has been adopted by most large telcos as the basis of their IP-based voice, video and messaging services, replacing legacy circuit-switched systems and previous generation VoIP systems based on softswitching.  Clearwater follows IMS architectural principles and supports all of the key standardized interfaces expected of an IMS core network.  But unlike traditional implementations of IMS, Clearwater was designed from the ground up for the Cloud.  By incorporating design patterns and open source software components that have been proven in many global Web applications, Clearwater achieves an unprecedented combination of massive scalability and exceptional cost-effectiveness. Project Clearwater is sponsored by Metaswitch Networks.
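Project Clearwater ships its own AWS deployment scripts, so the sketch below is only a rough illustration of the general shape of that workflow: one EC2 instance per Clearwater role, launched with boto3.  The AMI ID, key pair and instance type are placeholders I made up, not values from the Clearwater distribution, and the real scripts handle configuration, DNS and clustering that this sketch ignores.

```python
# Hedged sketch only: Project Clearwater ships its own AWS deployment scripts,
# and this is not them.  The AMI ID, key pair and instance type are placeholders.
import boto3

ROLES = ["ellis", "bono", "sprout", "homestead", "homer", "ralf"]  # Clearwater nodes
AMI_ID = "ami-00000000"        # placeholder: image with Clearwater packages installed
KEY_NAME = "my-keypair"        # placeholder EC2 key pair
INSTANCE_TYPE = "m3.medium"    # placeholder sizing

ec2 = boto3.resource("ec2", region_name="us-east-1")

for role in ROLES:
    instance = ec2.create_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"clearwater-{role}"}],
        }],
    )[0]
    print(f"Launched {role}: {instance.id}")
```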

There is another Cloud NFV effort here.


And others are ready with their own NFV open frameworks.

Broadcom Announces Open Network Function Virtualization Platform

Helps Accelerate Deployment of Applications and Cost Benefits of NFV

 

IRVINE, Calif., Feb. 20, 2014 /PRNewswire/ -- Broadcom Corporation (NASDAQ: BRCM), a global innovation leader in semiconductor solutions for wired and wireless communications, today announced its Open Network Function Virtualization (NFV) platform. This platform is designed to accelerate NFV adoption by allowing implementation of applications across multiple system-on-a-chip (SoC) processor solutions based on diverse Instruction Set Architectures (ISA). Broadcom will showcase its mobile innovations at Mobile World Congress in Barcelona, February 24 - 27.  For more news, visit Broadcom's Newsroom.

...

Dell Supports Telecommunications Industry Transformation With Industry Partnerships, Leadership in Network Functions Virtualization, New Data Center Offerings, and Customer Success

  • Dell extends collaboration with Red Hat to co-engineer OpenStack-based Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) solutions specifically for the telecommunications industry and teams with Calsoft Labs to deliver NFV and SDN solutions to telecom operators worldwide
  • Dell takes leadership role in CloudNFV consortium to demonstrate and implement an open, cloud-based NFV model