Changing the Behavior of the Data Center System: Modular Construction

At Google’s data center event, I introduced Amazon’s James Hamilton to Skanska’s Jakob Carnemark, as I knew the two would have an interesting conversation about data center construction.

One of the ideas we all connected on was modular data center construction. The question: if I pick the right modular infrastructure deployment, does it change the behavior of the data center system? A typical data center construction project forecasts 5 to 10 years out to specify the required capacity, then takes 2 to 3 years to design and build. But who in data centers can forecast 5 to 10 years out accurately, let alone 1 year?
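To make the forecasting problem concrete, here is a toy model (a sketch with invented numbers, not real project figures) of the stranded capacity a fortress build carries versus a modular build that adds capacity as demand shows up:

```python
# Toy comparison of fortress-style vs modular capacity builds.
# All figures are illustrative assumptions, not real project data.

def stranded_capacity(built_mw, demand_mw):
    """Capacity paid for but not yet used, in MW."""
    return max(built_mw - demand_mw, 0.0)

# Assumed demand ramp: 1 MW of new load per year over 5 years.
demand_by_year = [1.0, 2.0, 3.0, 4.0, 5.0]

# Fortress: build the full 10 MW forecast up front.
fortress_stranded = [stranded_capacity(10.0, d) for d in demand_by_year]

# Modular: add 1 MW modules each year, staying one module ahead of demand.
modular_stranded = [stranded_capacity(d + 1.0, d) for d in demand_by_year]

print("fortress stranded MW by year:", fortress_stranded)
print("modular stranded MW by year:", modular_stranded)
```

The fortress carries 9 MW of idle infrastructure in year one, while the modular build never strands more than one module. A real comparison would also price in the per-module cost premium, which this sketch ignores.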

Too many people build data centers like fortresses.

Building a Stone Fortress

Text copyright © by Lise Hull
Photographs Copyright © by Jeffrey L. Thomas

Above: the West Gatehouse at Rhuddlan Castle

Even today, centuries after they were active in British history, castles demonstrate the majesty, power and wealth of their noble builders. By the end of the 12th century, stone castles became more elaborate, the obsession of several powerful personalities who felt pressure to prove their own value by constructing these towering piles. While Edward I used the stone fortress as an effective means of dominating a rebellious Welsh populace, and gave us several of the most impressive structures in the world, his fortresses also reinforced his status as a wealthy and privileged ruler. The Angevin kings, Henry II, Richard I, John and Henry III, collectively spent tens of thousands of pounds on their castles, in pursuit of reputations as men of incomparable authority, prosperity and quality. It is incredible that the monarchy could afford such building projects, for the financial coffers were limited; the kings were not individuals of unbounded wealth, as they wanted their subjects to believe.

Many data center builders show the same human behavior as the fortress builders. I use the fortress analogy for how a paranoid, defense-focused mindset can drive people to protect their territory.

the obsession of several powerful personalities who felt pressure to prove their own value by constructing these towering piles

What happens if you sacrifice the fortress mindset for a modular data center build strategy?

You build capacity in 6 months, in smaller power increments.

One good behavior change: application developers and enterprise architects will see a CapEx bill attached to their major projects as data center capacity is added to meet their needs.

It is easy to join the groupthink and build a fortress to protect your IT silo, especially when someone else pays the bill.

Could you imagine the look on software architects’ and developers’ faces when they see the CapEx bill for one 25 kW rack of data center infrastructure?

What is your CapEx for a kW of infrastructure?
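As a back-of-the-envelope sketch (the cost-per-kW figure is an assumption for illustration; published estimates for conventional data centers vary widely), the bill for that 25 kW rack works out to:

```python
# Back-of-the-envelope CapEx for one rack of data center infrastructure.
# The cost-per-kW figure is an illustrative assumption, not a quoted price.

capex_per_kw = 10_000   # assumed $ per kW of built-out infrastructure
rack_kw = 25            # power draw of one high-density rack

rack_capex = capex_per_kw * rack_kw
print(f"CapEx attributed to one {rack_kw} kW rack: ${rack_capex:,}")
# -> CapEx attributed to one 25 kW rack: $250,000
```

A quarter-million dollars for a single rack is the kind of number that changes the conversation when the team requesting capacity actually sees it.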

Read more

Part 2 – Architect’s Perspective, Microsoft Generation 4 Data Center

David Gauthier and Christian Belady have a new blog post about their architect’s perspective.

Microsoft's Generation 4 Data Center Vision - the Architects' Perspective

By David Gauthier, Data Center Infrastructure Architect and Christian Belady, Principal Power and Cooling Architect, Microsoft Corp.

On Tuesday, December 2, our Global Foundation Services team went public with our Generation 4 Modular Data Center Vision and over the past week a lot of great discussions and questions have been posed from our industry colleagues. Today, we wanted to address some of those questions and share more insight on our Gen 4 plan via a video interview we did with Adam Bomb, a Technical Evangelist at Microsoft's TechNet Edge. 

Some people got the impression that this announcement was solely about a containerized server room rather than a re-thinking of the entire infrastructure. The goal of Gen 4 is to modularize not only the server and storage components, which a number of companies are already doing, but also to modularize the infrastructure, namely the electrical and mechanical systems.  The real innovation is around the commonality, manufacturing, supply chain and integration of these modules to provide a plug-and-play infrastructure along with modularized server environments.  In addition, it is focused on scaling the infrastructure with the business demands, smoothing capital investment, and driving costs down as shown by the following chart.

What are they after?

While we expect these modular innovations to reduce capital investments by 20%-40% or more depending on class, we also expect considerable reductions in operating expenses related to electricity and water consumption. Designing from the start for environmental sustainability has allowed us to focus on using less construction material up front, less energy and water during operation, and also allows us to recycle and reuse components at the end of their useful life. No longer will we be governed by the initial decisions made when constructing the facility. We will have almost unlimited use and re-use of the facility and site. We will also be able to use power in an ultra-fluid fashion moving load from critical to non-critical as use and capacity requirements dictate.
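To see why the operating-expense side matters, here is a rough sketch of annual electricity cost as a function of PUE (the load, PUE values, and electricity rate are all assumed for illustration):

```python
# Rough annual electricity cost for a data center at different PUE levels.
# Load, PUE values, and electricity rate are illustrative assumptions.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw, pue, usd_per_kwh):
    """Total facility energy cost: IT load scaled by PUE, over a year."""
    return it_load_kw * pue * HOURS_PER_YEAR * usd_per_kwh

it_load_kw = 1_000   # 1 MW of IT load
rate = 0.07          # assumed $/kWh

for pue in (2.0, 1.5, 1.2):
    cost = annual_energy_cost(it_load_kw, pue, rate)
    print(f"PUE {pue}: ${cost:,.0f} per year")
```

At these assumed rates, moving a 1 MW facility from PUE 2.0 to 1.2 saves nearly $500,000 a year in electricity alone, before counting water or construction material.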

Sounds like they are reinventing what it means to have a Green Data Center.

They have a new video as well.

Read more

Architecture Principles Behind Microsoft’s Gen 4 Data Center – Compartmentalized Flexibility

I was doing some research this weekend on architects and visionaries, browsing content on TED (Ideas Worth Spreading). One presentation I found interesting, being in the Seattle area, is architect Joshua Prince-Ramus’s talk on the design of the Seattle City Library.

Part of Joshua’s talk is the rationality of the approach, and how the design came out naturally. What I found coincidental is that I could envision Microsoft’s Mike Manos giving similar reasoning for how his team came up with the design of the Generation 4 Data Center.

Here are some interesting connections from Joshua’s presentation in regards to how the Seattle City Library was designed.

Modularization.

Instead of its current ambiguous flexibility, the library could cultivate a more refined approach by organizing itself into spatial compartments, each dedicated to, and equipped for, specific duties. Tailored flexibility remains possible within each compartment, but without the threat of one section hindering the others.

image

Our first operation was to “comb” and consolidate the library’s apparently ungovernable proliferation of programs and media. By combining like with like, we identified programmatic clusters: five of stability and four of instability.

Efficiency & optimization

image

 

Each platform is a programmatic cluster that is architecturally defined and equipped for maximum, dedicated performance. Because each platform is designed for a unique purpose, their size, flexibility, circulation, palette, structure, and MEP vary.
The spaces in between the platforms function as trading floors where librarians inform and stimulate, where the interface between the different platforms is organized—spaces for work, interaction, and play.

Flexibility and Scalability

image

By genetically modifying the superposition of floors in the typical American high rise, a building emerges that is at the same time sensitive (the geometry provides shade or unusual quantities of daylight where desirable), contextual (each side reacts differently to specific urban conditions or desired views), iconic.

Breaking the rules to change the experience.

image

The traditional library presents the visitor with an infernal matrix of materials, technologies, “specialists.” It is an often demoralizing process: a trail of tears through dead-end sections, ghost departments, and unexplained absences.
The Book Spiral liberates the librarians from the burden of managing ever-increasing masses of material. Newly freed, they reunite in a circle of concentrated expertise. The Mixing Chamber is an area of maximum librarian–patron interaction, a trading floor for information orchestrated to fulfill an essential (now neglected) need for expert interdisciplinary help.
The Mixing Chamber consolidates the library’s cumulative human and technological intelligence: the visitor is surrounded by information sources.

Seems like the architecture design principles are a good match. Here are Microsoft’s criteria for the Generation 4 Data Center:

  • Scalable
  • Plug-and-play spine infrastructure
  • Factory pre-assembled: Pre-Assembled Containers (PACs) & Pre-Manufactured Buildings (PMBs)
  • Rapid deployment
  • De-mountable
  • Reduce TTM
  • Reduced construction
  • Sustainable measures
  • Map applications to DC Class

If you have never been to the Seattle Library, here is a tour.

image 

Mike Manos is a Chicago native, so I know he appreciates good architecture.  I wonder if he knew these facts about the Seattle City Library, and how closely he followed the patterns of Joshua’s architecture firm REX.

We design collaborations rather than dictate solutions.
The media sells simple, catchy ideas; it reduces teams to individuals and their collaborative work to genius sketches. The proliferation of this false notion of "starchitecture" diminishes the real teamwork that drives celebrated architecture. REX believes architects should guide collaboration rather than impose solutions. We replace the traditional notion of authorship: "I created this object," with a new one: "We nurtured this process."
We embrace responsibility in order to implement vision.
The implementation of good ideas demands as much, if not more, creativity than their conceptualization. Increasingly reluctant to assume liability, architects have retreated from the accountability (and productivity) of Master Builders to the safety (and impotence) of stylists. To execute vision and retain the insight that facilitates architectural invention, REX re-engages responsibility. Processes, including contractual relationships, project schedules, and procurement strategies, are the stuff with which we design.
We don't rush to architectural conclusions.
The largest obstacle facing clients and architects is their failure to speak a common language. By taking adequate time to think with our clients before commencing the traditional design process, it is our proven experience that we can provide solutions of greater clarity and quality. With our clients, we identify the core questions they face, and establish shared positions from which we collectively evaluate the architectural proposals that follow.

One other piece of trivia: how did Joshua get involved in the Seattle City Library project?

Answer: his mother, per a Seattle PI article.

On his mother (Marcie Ruskin) being the unsung hero of the Seattle Central Library's design:

Yes, that's true. She was reading the newspaper on the day before there was a mandatory meeting on the library project in Seattle. She called me, informed me about the library competition and told me about the meeting. Rem was in Korea then, so I went to the airport and flew from Amsterdam to Seattle. I came to the mandatory meeting the next day. ... If OMA had not signed up at the meeting, we would not have been eligible to receive info about the project and continue in the process. The flip side of that is I spent five years worrying my mother would get stoned for involving us in the library project. Now, it seems she's safe (laughs).

Read more

CDN, Mathematica, MATLAB – Easier to Set up in Amazon Web Services than Corporate Data Center

Here is a blog entry by Mike Culver about the use of Amazon’s new CDN service.  Mike makes a good point that CDNs are often a pain because of the sales channel.  Given the bad economic times, it will be interesting to see the long-term effects on sales processes.

Content Delivery Service Flying High

Airbus 380 out of Heathrow

It’s fun to look at buzz and activity right after a new Amazon Web Service gets launched – in this case the service I’m thinking about is Amazon CloudFront, which is our new Content Delivery Service. Jeff Barr blogged about CloudFront’s features and benefits when the service launched last week.

What prompted this particular blog post was a Twitter message (“tweet”) that Jeff saw and forwarded to me. “Thanks to Amazon CloudFront, small websites can take advantage of a CDN. I don't think Photos.aero will spend $10 ‘til November 30.” The post was about www.photos.aero, which is an aircraft enthusiasts’ site. (I’m a pilot, so Jeff knew that I’d be interested.)

That is indeed amazing! Until Amazon CloudFront came along, setting up content distribution was a real pain, in my opinion. You had to contact the service provider, do the whole “sales cycle” dance, and then wonder if in fact your prices were market price, or whether you signed up to pay a premium. The AWS approach is very egalitarian, and while I am certain that sales folks are nice people, it’s not a scalable approach for the vendor and the fact of the matter is that many technical folks don’t want to put a process between them and deployment.
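The "under $10" tweet is easy to sanity-check. With pay-as-you-go pricing there is no minimum commit, so a small site's bill is just traffic times the per-unit rates (the rates and traffic volumes below are assumptions for illustration, not CloudFront's actual price list):

```python
# Sanity check of a small site's pay-as-you-go CDN bill.
# Per-GB rate, request rate, and traffic volumes are illustrative assumptions.

rate_per_gb = 0.17       # assumed $ per GB transferred
rate_per_10k_reqs = 0.01 # assumed $ per 10,000 requests

monthly_gb = 40          # assumed traffic for a small enthusiast site
monthly_requests = 500_000

bill = monthly_gb * rate_per_gb + (monthly_requests / 10_000) * rate_per_10k_reqs
print(f"estimated monthly bill: ${bill:.2f}")
```

At these assumed rates, the month comes in well under $10, with no sales cycle in the way. There is no contract minimum to amortize, which is exactly why the model works for sites this small.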

Joining the Amazon Web Services effort is Mathematica.

Wolfram Research announced last week that they will be embracing the Cloud and providing a "Cloud Computing Service" with the help of Nimbis Services, Inc.

The Mathematica cloud computing service will provide flexible and scalable access to HPC from within Mathematica, simplifying the transition from desktop technical computing to HPC. "The two largest challenges in using HPC are programming the HPC application itself and ensuring that you can get enough computing power to do the job," says Tom Wickham-Jones, Wolfram Research Executive Director of Kernel Technology. "Mathematica answers the programming challenge by providing an integrated technical computing platform, enabling computation, visualization, and data access. Cloud computing offers consistent access to large-scale computing capabilities."

A screenshot from a recent demonstration at SC08:

Mathematica

And MATLAB as well.

Ec2

MathWorks released a whitepaper on how to run MATLAB parallel computing products (Parallel Computing Toolbox and MATLAB Distributed Computing Server) on Amazon EC2. This step-by-step guide walks you through installation, configuration, and setting up clustered environments using these licensed products from MathWorks on Amazon EC2. It shows how you can create an AMI with MATLAB products bundled in and run them in the cloud.

The whitepaper is available free on the MathWorks website:

MATLAB users will learn about the key aspects of using the EC2 service from their desktop MATLAB session and using Parallel Computing Toolbox to send parallel MATLAB computations to the EC2 service.

System administrators will learn the key technical details required for setting up MATLAB Distributed Computing Server on the EC2 service, including licensing and network setup. They will also learn how to configure their users’ desktops to enable the use of the EC2 service for MATLAB computations.
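The whitepaper is MATLAB-specific, but the payoff of renting a cluster can be sketched with simple scaling arithmetic. This Python snippet applies Amdahl's law (the serial fraction of the workload is an assumed value) to estimate speedup as worker nodes are added:

```python
# Amdahl's-law estimate of speedup from adding rented worker nodes.
# The serial fraction of the workload is an illustrative assumption.

def amdahl_speedup(serial_fraction, workers):
    """Ideal speedup when only the parallel part scales with workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

serial_fraction = 0.05  # assume 5% of the job cannot be parallelized

for workers in (1, 8, 32, 128):
    print(f"{workers:4d} workers -> {amdahl_speedup(serial_fraction, workers):5.2f}x")
```

Past a point, adding workers buys little. The appeal of the cloud model is that you can rent exactly the cluster size where the curve flattens, for only the duration of the job, instead of buying it.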

What do all of these have in common?  They are all easier to get started with than going through your own IT department.  So AWS is cheaper and easier to use.

I am very excited because this is going to open up powerful MATLAB tools to any developer for not only research but also production applications. Students might be able to do their lab exercises without a lab and impress their professors by turning in assignments ahead of time. Professors will be able to teach courses using MATLAB by "turning on" a switch that creates their "Instant Labs" for the duration of the course without even contacting the college IT department for resources. Enterprises might be able to crunch complex BI data over the weekend for a Monday morning meeting.

Amazon is probably one of the few data center operators growing faster than expected while others are slowing down.

Read more

Security + Green IT = ?

Accenture has a post using the example of security being a change in behaviors, not just new equipment.

Security: The Green Glue

Posted at Oct. 16, 2008 04:10 PM CST

By Andrew Skinner, UK Data Center Technology & Operations

Last week I was lucky enough to be invited to present Accenture's point of view on Green IT to an audience at the European Symantec Vision event. The key concept is that green IT is not simply about buying new hardware but should actually be about changing the way in which IT is used within the business. I introduced the topic with the example of a merger (clearly a very topical issue in the financial sector at present) to explain what I mean. The new organization would have two e-mail systems running on two sets of server hardware. Should the organization refresh this hardware to make it more efficient—or should it begin by rationalizing its applications?

Andrew is hopeful.

What became clear was that this move toward the virtual organization will challenge many organizations both technically and socially. The movement of data to processing centers and the accessing of data by a mobile workforce will send palpitations through the security teams within the organization, challenging how security is enabled. For too long we have used the physical walls of the facilities in place of appropriately implemented policies that control and monitor data movement—but the green agenda will challenge this. On one hand we are talking to our clients about supporting remote working, but on the other, organizations do not yet have the ability to control this mobile workforce, what data they are able to access and how much they can see.

Security solutions will become the "green glue" over the coming years, with the solutions to reduce the impact of IT on the environment reliant on properly implemented security solutions at a corporate level.

But will Security + Green IT = Green Glue or a New Conflict?

Many security groups have grown in size and budget over the last ten years. Green is an effort to be more efficient, eliminating waste and reducing the use of resources, while Security has been able to keep adding hardware, software, and new processes.

As I just posted, Security has many Sacred Cows.

How about if the Security Group Manager also picks up responsibility for Green IT?  Security is one of those services that cuts across many groups, requiring collaboration to create better security.  The challenge is to be secure and greener.

I wrote on the Green + Security + Virtualization topic previously.

Read more