A day of intense meetings asking what the future of data centers & evolutionary economics is, and a view out of the window

Today was a long day. I was up at 6:15a to catch a bus from Lake Oswego, OR to downtown Portland, then to Portland Airport to meet a cloud computing operations director and introduce him to some ideas from evolutionary economics that make it easier to adapt to change.

Evolutionary economics studies the processes that transform the economy (its firms, institutions, industries, employment, production, trade and growth) through the actions of diverse agents acting on experience and interaction, using an evolutionary methodology.

The data center is ready for a transformation.  Cloud computing is helping to push things in a direction, but there is much more beyond cloud computing.

The thought experiment we went through was: what happens if the data center industry adopts an information-sharing approach instead of information hoarding? Sharing accelerates change in the industry and forces tough questions about which problems a data center should actually be solving. Being open to new ways of looking at those problems, and to asking new questions, drives more innovation.

Here is a bit more explanation of evolutionary economics.

Ideas are articulated in language and thus transported into the social domain. Generic ideas, in particular, can bring about cognitive and behavioral processes, and in this respect they are practical and associated with the notion of ‘productive knowledge’. It is generic ideas that evolve and form causal powers underlying the change. Evolutionary economics is essentially about changes in generic knowledge, and involves transition between actualized generic ideas. Actual phenomena, being manifestations of ideas, are seen as ‘carriers of knowledge’.

Three analytical concepts corresponding to ontological axiomatics are thus:

  • (1) carriers of knowledge,
  • (2) generic ideas as components of a process, and
  • (3) evolutionary-formative causality.

After a long day of intense thinking, I am riding the train back to Seattle.  So glad I didn't drive, so I can get some rest, look out the window, and take some time to reflect.

I'll wait for the cloud computing operations director to write his own blog entry, but that may take a while, as his head is probably just as tired as mine.



Open Leadership - can it work in the data center where control leadership is the norm?

I was having an interesting e-mail conversation with a data center operator about how well a local monthly data center event is working to build better relationships within the community of data center operators.  The openness of the discussion flies in the face of the secrecy typical of data centers.  Data center events try to stimulate open conversation, but given how irregularly events happen, can you build a relationship just by attending?


Open Leadership is a new book being released by Charlene Li, co-author of Groundswell.  Josh Bernoff, the other co-author of Groundswell, writes:

I found some parts of the book a lot more useful or interesting than others. Here are three good parts.

  1. Sandbox covenants. These are the rules organizations set up to determine what sorts of limits and conventions there are on openness. The book includes a link to social media policies of a bunch of corporations, not yet live, but I am looking forward to seeing that. This discussion, in Chapter 5, goes a long way to helping bridge the gap between social media backers within companies and corporate policymakers.
  2. Organizational models for openness. Charlene describes three types of organization: organic, centralized, and coordinated, and shows when each one makes sense. Given all the questions I get these days about organization for social, this is quite relevant.
  3. Leadership mindsets and traits. Chapter 7 classifies leaders according to whether they are optimistic or pessimistic, and whether they are independent or collaborative. Anyone who has ever had a boss will find this instructive. This is a fascinating way to look at leadership.

There are people who are realizing an open approach is powerful, but it is difficult for most.  The book will be available on May 24.

As Li explains, openness requires more—not less—rigor and effort than being in control. Open Leadership reveals step-by-step, with illustrative case studies and examples from a wide range of industries and countries, how to bring the precision of this new openness both inside and outside the organization. The author includes suggestions that will help an organization determine an open strategy, weigh the benefits against the risk, and have a clear understanding of the implications of being open. The book also contains guidelines, policies, and procedures that successful companies have implemented to manage openness and ensure that business objectives are at the center of their openness strategy.

I'll post later on the data center social event that is taking an open approach to data center networking.  And I plan on making a trip to an event to see for myself how it works.


Will a Google Tablet be the iPad competitor or a Netbook? Maybe both - targeting Apple and Microsoft with one device

There is lots of news on Google's Tablet with Verizon.

Verizon, Google Developing iPad Rival

By NIRAJ SHETH

Verizon Wireless is working with Google Inc. on a tablet computer, the carrier's chief executive, Lowell McAdam, said Tuesday, as the company endeavors to catch up with iPad host AT&T Inc. in devices that connect to wireless networks.

The work is part of a deepening relationship between the largest U.S. wireless carrier by subscribers and Google, which has carved out a space in mobile devices with its Android operating system. Verizon Wireless last year heavily promoted the Motorola Droid, which runs Google's software.

"What do we think the next big wave of opportunities are?" Mr. McAdam said in an interview with The Wall Street Journal. "We're working on tablets together, for example. We're looking at all the things Google has in its archives that we could put on a tablet to make it a great experience."

These devices are all part of a shift toward consumer devices that use less energy while staying connected to data centers.

I am amazed at the number of people who think they can get an iPad and leave their laptop at home.  Google sees this as an opportunity to create the always-connected laptop replacement.

The device may not be perfect, but no laptop is either.  What trade-offs will Google and Verizon make in the device?

Once someone gets the right device category defined, watch the growth of data centers continue as hyper-connected laptop replacements fuel new usage scenarios that play into Google's hands.


Two different ways to implement geo-redundant data storage - Infineta and EMC VPLEX

James Hamilton discusses inter-datacenter replication and geo-redundancy, which was quite easy for me to get my mind wrapped around, as the issues discussed have a lot in common with work I did on Microsoft's Branch Office Infrastructure Solution (BOIS) and its WAN issues.

Inter-Datacenter Replication & Geo-Redundancy

Wide area network costs and bandwidth shortages are the single most common reason why many enterprise applications run in a single data center. Single data center failure modes are common. There are many external threats to single data center deployments, including utility power loss, tornado strikes, facility fire, network connectivity loss, earthquake, break-in, and many others I’ve not yet been “lucky” enough to have seen. And, inside a single facility, there are simply too many ways to shoot one’s own foot. All it takes is one well-intentioned networking engineer to black-hole the entire facility’s networking traffic. Even very high quality power distribution systems can have redundant paths taken out by fires in central switch gear or cascading failure modes. And, even with very highly redundant systems, if the redundant paths aren’t tested often, they won’t work. Even with incredible redundancy, just having the redundant components in the same room means that a catastrophic failure of one system could possibly eliminate the second. It’s very hard to engineer redundancy with high independence and physical separation of all components in a single datacenter.

With incredible redundancy comes incredible cost. Even with incredible costs, failure modes remain that can eliminate the facility entirely. The only cost-effective solution is to run redundantly across multiple data centers. Redundancy without physical separation is not sufficient, and making a single facility bullet-proof has expenses asymptotically heading towards infinity with only tiny increases in availability as the expense goes up. The only way to get the next nine is to have redundancy between two data centers. This approach is both more available and considerably more cost-effective.
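To make that "next nine" point concrete, here is a quick back-of-the-envelope calculation of my own, not from Hamilton's post, assuming the two facilities fail independently (real deployments only approximate that):

```python
# Rough availability math for one facility vs. two independent facilities.
# Assumes independent failures; the availability figures are illustrative.

def two_site_availability(single_site: float) -> float:
    """Service stays up as long as at least one of two independent sites is up."""
    p_both_down = (1.0 - single_site) ** 2
    return 1.0 - p_both_down

for a in (0.99, 0.999, 0.9999):
    print(f"single site {a:.2%}  ->  two sites {two_site_availability(a):.6%}")
```

Going from 99.9% to roughly 99.9999% this way is far cheaper than trying to squeeze the same nines out of one bullet-proof building.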

The solution James references is from Infineta.

Last week I ran across a company targeting latency-sensitive cross-datacenter replication traffic. Infineta Systems announced this morning a solution targeting this problem: Infineta Unveils Breakthrough Acceleration Technology for Enterprise Data Centers. The Infineta Velocity engine is a dedupe appliance that operates at 10Gbps line rate with latencies under 100 microseconds per network packet. Their solution aims to get the bulk of the advantages of the systems I described above at much lower overhead and latency. They achieve their speed-up four ways: 1) hardware implementation based upon FPGA, 2) fixed-sized, full-packet block size, 3) bounded index exploiting locality, and 4) heuristic signatures.
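To visualize what a fixed-block, bounded-index dedupe pipeline does, here is a minimal software sketch. It is purely illustrative: Infineta's Velocity engine is an FPGA hardware implementation, and the block size, hash function, and eviction policy below are my assumptions, not theirs.

```python
# Minimal sketch of fixed-size-block dedupe with a bounded index (illustrative
# assumptions only; Infineta's actual engine is implemented in FPGA hardware).
import hashlib
from collections import OrderedDict

BLOCK_SIZE = 1500              # assumption: one block per full-size packet payload
MAX_INDEX_ENTRIES = 1_000_000  # bounded index: cap memory by evicting cold entries

index = OrderedDict()          # digest -> block, ordered by recency of use

def blocks(payload: bytes):
    """Split a payload into fixed-size blocks (the granularity the index keys on)."""
    for i in range(0, len(payload), BLOCK_SIZE):
        yield payload[i:i + BLOCK_SIZE]

def dedupe_block(block: bytes):
    """Return ('ref', digest) if the block was seen recently, else ('raw', block)."""
    digest = hashlib.sha1(block).hexdigest()  # stand-in for a heuristic signature
    if digest in index:
        index.move_to_end(digest)             # exploit locality: keep hot entries
        return ("ref", digest)                # only a short reference crosses the WAN
    index[digest] = block
    if len(index) > MAX_INDEX_ENTRIES:
        index.popitem(last=False)             # drop the least recently used entry
    return ("raw", block)                     # the literal data crosses the WAN once
```

In this sketch the receiving end would keep a mirrored index and expand references back into data, so repeated blocks cross the link as a few dozen bytes instead of a full 1,500.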

Infineta provides a technology overview

Technology Overview

Infineta Systems delivers solutions based on several technologies that significantly reduce the amount of traffic running across today’s data center WAN interconnect. Our groundbreaking innovation centers on the patent-pending Velocity Dedupe Engine™, the industry’s first-ever hardware deduplication (“dedupe”) engine. Unlike alternatives, the Velocity Dedupe Engine enables our solutions to maintain the highest levels of data reduction at multi-gigabit speeds while guaranteeing port to port latencies in the few 10s of microseconds. As a result, Infineta’s solutions enable customers to accelerate all data center applications (such as replication and backup) – including ones that are highly latency sensitive. It does so while reducing overall costs incurred by this growing, bandwidth-hungry traffic.

Technology Architecture

Distributed System Architecture

Unlike traditional acceleration solutions that are assembled around monolithic processing environments, Infineta’s solutions are designed from the ground-up around a distributed processing framework. Each core feature set is implemented in a dedicated hardware complex, and they are all fused together with high-speed fabric, guaranteeing wire speed acceleration for business-critical traffic.

Hardware-based Data Reduction

Data reduction is carried out purely in hardware in a pipelined manner, allowing the system to reduce enterprise WAN traffic by as much as 80-90 percent.
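To put that claim in concrete terms (my arithmetic, not a vendor figure), an 80-90 percent reduction on a 10Gbps replication stream looks like this:

```python
# What an 80-90% reduction means for a 10 Gbps replication stream (illustrative).
link_gbps = 10.0
for reduction in (0.80, 0.90):
    wan_gbps = link_gbps * (1 - reduction)
    print(f"{reduction:.0%} reduction -> {wan_gbps:.0f} Gbps actually crosses the WAN")
```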

Fabric-based Switching

Infineta’s solutions are built on a massive scale, fully non-blocking switch fabric that can make precise switching decisions in the face of sustained traffic bursts.

Multi-gig Transport Optimization

Infineta’s solutions employ multi-core network processors to carry out transport-level acceleration at multi-gigabit speeds. By making key resource decisions at wire-speed, the system is able to maintain up to 10Gbps traffic throughput while working around detrimental WAN characteristics, such as packet loss.

And EMC has VPLEX for virtualized storage across multiple data centers.


News Summary:

  • EMC advances Virtual Storage with industry’s first distributed storage federation capabilities, eliminating boundaries of physical storage and allowing information resources to be transparently pooled and shared over distance for new levels of efficiency, control and choice.
  • Groundbreaking EMC VPLEX Local and VPLEX Metro products address fundamental challenges of rapidly relocating applications and large amounts of information on-demand within and across data centers which is a key enabler of the private cloud.
  • Future EMC VPLEX versions will add cross-continental and global capabilities, allowing multiple data centers to be pooled over extended distances, providing dramatic new distributed compute and service-provider models and private clouds.

With Cheap Plentiful Natural Gas, Should Data Centers be sited near Natural Gas supply for lower cost and carbon footprint?

WSJ has an article on Shale Gas.

Shale Gas Will Rock the World

Huge discoveries of natural gas promise to shake up the energy markets and geopolitics. And that's just for starters.

By AMY MYERS JAFFE

There's an energy revolution brewing right under our feet.

Over the past decade, a wave of drilling around the world has uncovered giant supplies of natural gas in shale rock. By some estimates, there's 1,000 trillion cubic feet recoverable in North America alone—enough to supply the nation's natural-gas needs for the next 45 years. Europe may have nearly 200 trillion cubic feet of its own.

The author is convinced this will change the energy industry, of which data centers are a part.

I have been studying the energy markets for 30 years, and I am convinced that shale gas will revolutionize the industry—and change the world—in the coming decades. It will prevent the rise of any new cartels. It will alter geopolitics. And it will slow the transition to renewable energy.

The author also touches on renewable energy.

Then, I think we still need to invest in renewables—but smartly. States with renewable-energy potential, such as windy Texas or sunny California, should keep their mandates that a fixed percentage of electricity must be generated by alternative sources. That will give companies incentives and opportunities to bring renewables to market and lower costs over time through experience and innovation. Yes, renewables may seem relatively more expensive in those states as shale gas hits the market. And, yes, that may mean getting more help from government subsidies. But I don't think the cost would be prohibitive, and the long-term benefits are worth it.

Still, I don't believe we should set national mandates—which would get prohibitively expensive in states without abundant renewable resources. Instead of pouring money into subsidies to make such a plan work, the federal government should invest in R&D to make renewables competitive down the road without big subsidies.

Is natural gas supply a new criterion for data center sites running cogeneration plants or fuel cells?
