How Network Functions Virtualization (NFV) Greens the Data Center, by using standard Server Power Management

Part of the point of NFV is lower power use compared to the current state of telecom equipment.  How is this done?

In the first paper on NFV, here is the part that explains how the power savings will be achieved.

Reduced energy consumption by exploiting power management features in standard servers
and storage, as well as workload consolidation and location optimisation. For example,
relying on virtualisation techniques it would be possible to concentrate the workload on a
smaller number of servers during off-peak hours (e.g. overnight) so that all the other servers
can be switched off or put into an energy saving mode.
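The consolidation idea in the quote can be sketched in a few lines: given the current load, keep just enough servers awake and put the rest into a low-power state. This is a hypothetical illustration; the VM-per-server capacity and the notion of "sleeping" servers are assumptions, not any real orchestrator's API.

```python
import math

def consolidate(load_vms: int, vms_per_server: int, total_servers: int):
    """Return (servers_on, servers_sleeping) for the current load."""
    needed = math.ceil(load_vms / vms_per_server)
    # Keep at least one server awake, never more than we own.
    active = min(max(needed, 1), total_servers)
    return active, total_servers - active

# Daytime: 120 VMs of load; overnight it drops to 30 VMs.
print(consolidate(120, 10, 20))  # (12, 8)  -> 8 servers can sleep
print(consolidate(30, 10, 20))   # (3, 17)  -> 17 servers can sleep
```

The overnight case is the whole point: the same fleet that needs 12 active servers at peak can idle 17 of 20 off-peak.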

The amount of telecom equipment that goes into a lower power mode today is probably amazingly low.

Besides saving power, this same capability makes maintenance operations easier.

Option to temporarily repair failures by automated re-configuration and moving 
network workloads onto spare capacity using IT orchestration mechanisms. This 
could be used to reduce the cost of 24/7 operations by mitigating failures 
automatically.
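The "temporarily repair failures by automated re-configuration" idea amounts to this: when a host fails, an orchestrator reassigns its workloads onto spare capacity instead of paging an operator at 3 a.m. A minimal sketch, with illustrative names and data structures that are not from any particular orchestration product:

```python
def migrate_on_failure(placements: dict, spares: list, failed_host: str) -> dict:
    """Reassign every workload on a failed host to a spare host."""
    new_placements = dict(placements)
    pool = list(spares)
    for workload, host in placements.items():
        if host == failed_host:
            if not pool:
                raise RuntimeError("no spare capacity left")
            new_placements[workload] = pool.pop(0)
    return new_placements

placements = {"firewall": "host-a", "load-balancer": "host-a", "dns": "host-b"}
after = migrate_on_failure(placements, ["host-c", "host-d"], "host-a")
# firewall moves to host-c, load-balancer to host-d, dns stays put
```

The cost argument follows directly: if the mitigation is automatic, the permanent repair can wait for business hours.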


A New Option for moving out of AWS, Forsythe's Data Center in 1,000 sq ft increments

Moving out of AWS can save a lot of money. Huh?  Yes, here is one public disclosure from Moz’s CEO.

Building our private cloud

We spent part of 2012 and all of 2013 building a private cloud in Virginia, Washington, and mostly Texas.

This was a big bet with over $4 million in capital lease obligations on the line, and the good news is that it's starting to pay off. On a cash basis, we spent $6.2 million at Amazon Web Services, and a mere $2.8 million on our own data centers. The business impact is profound. We're spending less and have improved reliability and efficiency.

Our gross profit margin had eroded to ~64%, and as of December, it's approaching 74%. We're shooting for 80+%, and I know we'll get there in 2014.
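It is worth checking the arithmetic in the Moz disclosure. The cash comparison is straight from the quote; the revenue figure below is purely illustrative (not from the post) to show what ten points of gross margin are worth.

```python
# Cash spend, from the Moz disclosure quoted above.
aws_spend = 6.2e6          # spent at Amazon Web Services
own_dc_spend = 2.8e6       # spent on their own data centers
savings = aws_spend - own_dc_spend
print(f"cash savings: ${savings / 1e6:.1f}M")  # $3.4M

# Gross margin moved from ~64% to ~74%: ten points of improvement.
margin_gain = 0.74 - 0.64
revenue = 30e6             # illustrative revenue figure, an assumption
print(f"extra gross profit on ${revenue/1e6:.0f}M revenue: "
      f"${margin_gain * revenue / 1e6:.1f}M")
```

The point: at any meaningful revenue, a ten-point margin swing dwarfs the $4 million capital lease bet.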


So you want to move out of AWS, but you dread the task of finding something the right size.  A cage in a colocation facility?  Seems too old school.  A wholesale pod?  Too big, and you aren’t ready to jump into managing your own electrical and mechanical infrastructure.

How about 1,000 sq ft of data center space configured exactly the way you want?  Need more?  Get another 1,000 sq ft.  This is what Forsythe Data Centers has announced with its latest data center, offering the solution in the middle of this table.

[Table: data center options, with Forsythe’s 1,000 sq ft suites in the middle between retail colocation and wholesale]



“Forsythe’s facility offers the flexibility and agility of the retail data center market, in terms of size and shorter contract length, with the privacy, control and density of large-scale, wholesale data centers,” said Albert Weiss, president of Forsythe Data Centers, Inc., the new subsidiary managing the center. He is also Forsythe Technology’s executive vice president and chief financial officer.

I got a chance to talk to Steve Harris, and the flexibility for customers to have multiple suites designed exactly to support their gear is a dream come true for anyone who knows that one size fits all usually means you are wasting money somewhere.  You could have one suite just for storage, tape backup, and other gear that is more sensitive to heat.  The high-temperature gear could be right next to the storage suite.  You could have a higher level of redundancy for some equipment in one suite and less for others in another.

And just like the cloud, adding capacity is so much easier than “I need to move to a bigger cage.”  Just add another suite.

How much power do you want per rack?  What’s a suite look like?

[Image: suite layout and power-per-rack options]
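To make the per-rack power question concrete, here is a hypothetical capacity-planning sketch for a 1,000 sq ft suite. The footprint-per-rack and kW-per-rack numbers are illustrative assumptions, not Forsythe’s published specifications.

```python
def suite_estimate(suite_sqft: int = 1000,
                   sqft_per_rack: int = 25,   # rack plus aisle allowance
                   kw_per_rack: int = 8):
    """Estimate rack count and total power budget for one suite."""
    racks = suite_sqft // sqft_per_rack
    return racks, racks * kw_per_rack

racks, total_kw = suite_estimate()
print(racks, total_kw)  # 40 racks, 320 kW at 8 kW/rack
```

Double the per-rack density and the suite's rack count stays fixed while the electrical and mechanical load doubles, which is exactly why the suite (not the cage) is the right unit to design around.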

Oh yeah, and the data center is green too.

The facility is being designed to comply with U.S. Green Building Council LEED certification standards for data centers and to obtain Tier III certification from the Uptime Institute, a certification currently held by approximately 60 data centers in the U.S. and 360 worldwide, few of which are colocation facilities.

The transformation from Hardware to Software based Operations, AT&T's Network Transformation

I started my career in manufacturing and distribution logistics, then moved to hardware, and eventually operating systems and other software.  Most of what drove the changes in what I do is that I got bored and was looking to learn new things.  But most people don’t like change; they like the predictability of knowing what needs to be done. 

In AT&T’s Domain 2.0 document is a long list of transitions they plan on making, going from a hardware approach to a software approach.  

I don’t know about you, but I like the right side of the list much better than the left side.  The left side is easier from a micromanagement perspective, but it misses the customer focus that dominates the right side.  

[Table: AT&T Domain 2.0 transitions from a hardware approach to a software approach]

AT&T announces its Embracing Cloud Principles for its Network

AT&T announced its User-Defined Network Cloud, which is a kind of puzzling name.  So the current network is a non-user-defined, specialized-equipment environment where people (mostly men) picked their favorite equipment from self-serving perspectives, thinking of their own jobs, and users should trust these people to deliver the experience they wanted?  That was the old way, but technology is moving too fast and users’ expectations are growing.  Here is a graphic that illustrates the change AT&T is making.

[Image: the change AT&T is making with its User-Defined Network Cloud]

NFV aims to address these problems by evolving standard IT virtualization technology to consolidate
many network equipment types onto industry standard high volume servers, switches and storage that
can be located in data centers, network PoPs or on customer premises. As shown in Figure 2, this
involves the implementation of network functions in software, called VNFs, that can run on a range of
general purpose hardware, and that can be moved to, or instantiated in, various locations in the
network as required, without the need for installation of new equipment.
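The VNF idea in the quote is worth pinning down: a network function becomes software, so "deploying" it means instantiating it on whichever general-purpose server has room, in a data center, a PoP, or on customer premises. A minimal sketch; the class, site names, and capacity model are illustrative assumptions, not the ETSI architecture.

```python
class Site:
    """A pool of general-purpose servers at one location."""
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity   # how many VNFs this site can host
        self.vnfs: list[str] = []

    def instantiate(self, vnf: str) -> bool:
        if len(self.vnfs) < self.capacity:
            self.vnfs.append(vnf)
            return True
        return False

sites = [Site("datacenter-east", 2), Site("pop-chicago", 1)]

def deploy(vnf: str) -> str:
    """Place a VNF on the first site with spare capacity -- no truck roll."""
    for site in sites:
        if site.instantiate(vnf):
            return site.name
    raise RuntimeError("no capacity; add servers, not appliances")

print(deploy("vFirewall"))  # datacenter-east
print(deploy("vEPC"))       # datacenter-east
print(deploy("vRouter"))    # pop-chicago
```

Contrast that with the hardware model, where adding a firewall at a PoP means shipping and installing a box.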

The document that has this graphic is here.

Here is another graphic that shows the change.

[Image: another view of the hardware-to-software change]

Here is the blog post from AT&T’s John Donovan.  I think if John had added these simple graphics to his blog post, it would have communicated much more clearly what AT&T is doing.

I found this information thanks to Kevin Fitchard’s post on GigaOm.

Software is eating the mobile network, too, as AT&T begins its journey into the cloud

SUMMARY:

AT&T is taking the first steps toward transforming its network into a data center. It’s not touching the cellular network — at least not yet — but it will start virtualizing its mobile core and application infrastructure.

Cloud is not a Panacea, Yin and Yang, Public Cloud and Private

The way some people talk about the Cloud, you would think it is a Panacea.

a remedy for all ills or difficulties: cure-all

Many people have built marketing initiatives and customers are ready to buy the Cloud believing it is the Panacea for their IT issues.  

Myself, I have tried to argue that the Cloud has limits, while others will say no, they can point to customers who have been successful in the Cloud.

My latest attempt is to try and discuss the Yin and Yang concept.

In Chinese philosophy, the concept of yin-yang (simplified Chinese: 阴阳; traditional Chinese: 陰陽; pinyin: yīnyáng), which is often called "yin and yang", is used to describe how opposite or contrary forces are interconnected and interdependent in the natural world, and how they give rise to each other as they interrelate to one another. Many natural dualities (such as light and dark, high and low, hot and cold, fire and water, life and death, and so on) are thought of as physical manifestations of the yin-yang concept.

Some may think their data center was the dark days and the solution in AWS is the light.

[Image: yin-yang symbol]

The Yin and Yang is drawn to show that even in the light there is a bit of dark, and in the dark a bit of light.

Even when you look at AWS, which is the epitome of the Public Cloud, it has bits of private in it.  The data centers are built or leased by Amazon.com, and there are no public disclosures on those data centers.  The equipment in the data centers is a guarded secret.  The BIOS, processors, RAM, hard drives, network, and storage systems are all private.

What is the change management process for APIs?  Is it under the control of the public, or does Amazon.com decide when it will make changes?

The strength of the Public Cloud is following a retail model to address consumer needs.  Where IT has gone to the dark side is where they think they can dictate to users what their needs are.

If the internal IT group is customer driven and customers have options, then there is not as much of a reason to go to the Public Cloud.