Comparing Microsoft's VP of Cloud Infrastructure to Google's VP of Data Centers via LinkedIn Profiles

Microsoft has put a new VP in charge of its Cloud Infrastructure group, retiring the role of VP of Global Foundation Services.  GFS's logo looked like this.

[Image: Global Foundation Services (GFS) logo]

Global Foundation Services (GFS) is the engine that powers Microsoft's cloud services.

When I run a Google search for "Microsoft Global Foundation Services," what shows up is Microsoft Cloud Platform, with little trace of Global Foundation Services; the words "Global Foundation Services (GFS)" are gone.

[Image: Google search results for "Microsoft Global Foundation Services"]

So the changes have started in Microsoft's data center group.  What changes are coming in the future?

One way to look at what the future will be like is to compare the new Microsoft VP's public profile against a competitor's.  I could pick Amazon as the competitor, but Google is bigger in terms of data center presence.  So let's look at Microsoft's Suresh Kumar, VP of Cloud Infrastructure and Operations, vs. Google's Joe Kava, VP of Data Centers.  The below is from their LinkedIn profiles as of Oct 21, 8:30p.  I am referencing the date and time of this post as things may change as profiles get modified.  Two days ago Kumar's picture was this.

[Image: Suresh Kumar, via LinkedIn]


Now Suresh's photo on LinkedIn is the one below.

[Image: Suresh Kumar's current LinkedIn photo]

Both Suresh and Joe have 500+ connections.

On Suresh's profile, his top skill, with 27 endorsements, is E-Commerce.  Joe's top skill, with 117 endorsements, is Strategy.

Joe has 66 endorsements for Data Centers.  Suresh has 0.

Here are Suresh's top 10 skills.

[Image: Suresh Kumar's top 10 LinkedIn skills]

Here are Joe's top 10 skills.

[Image: Joe Kava's top 10 LinkedIn skills]

The one area where Suresh and Joe are close is Cloud Computing, at 11 and 14 endorsements respectively.

[Image: Suresh's Cloud Computing endorsements]

[Image: Joe's Cloud Computing endorsements]

When you look at the above numbers, who would you choose to build your cloud/data center infrastructure?  This has been an interesting way to look at two different executives using LinkedIn profiles.  With fresh eyes, I went and looked at the skills listed on my own LinkedIn profile.  You may want to do the same and think about how your skills are listed.

Oh, the other area where Suresh and Joe are equal: it looks like both of them now have photos that their corporate PR groups say are OK to have on a public-facing site.

[Image: Joe Kava, via LinkedIn]


Two Ways to Save Server Power - Google (Tune to Latency) vs. Facebook (Efficient Load Balancing)

Saving energy in the data center is about more than a low PUE.  Using 100% renewable power while wasting energy is not a good practice.  I've been meaning to post for a while on what Google and Facebook have done in these areas and have been staring at the open browser tabs.

First, in June 2014, Google shared its method of turning down the power consumption of a server as low as it can go while still meeting its performance latency target.  The Register covered this method.

Google has worked out how to save as much as 20 percent of its data-center electricity bill by reaching deep into the guts of its infrastructure and fiddling with the feverish silicon brains of its chips.

In a paper to be presented next week at the ISCA 2014 computer architecture conference entitled "Towards Energy Proportionality for Large-Scale Latency-Critical Workloads", researchers from Google and Stanford University discuss an experimental system named "PEGASUS" that may save Google vast sums of money by helping it cut its electricity consumption.


The Google paper is here.

We presented PEGASUS, a feedback-based controller that implements iso-latency power management policy for large-scale, latency-critical workloads: it adjusts the power-performance settings of servers in a fine-grain manner so that the overall workload barely meets its latency constraints for user queries at any load. We demonstrated PEGASUS on a Google search cluster. We showed that it preserves SLO latency guarantees and can achieve significant power savings during periods of low or medium utilization (20% to 40% savings). We also established that overall workload latency is a better control signal for power management compared to CPU utilization. Overall, iso-latency provides a significant step forward towards the goal of energy proportionality for one of the challenging classes of large-scale, low-latency workloads.
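To make the iso-latency idea concrete, here is a minimal sketch of that kind of feedback loop in Python.  This is not Google's PEGASUS code; the function names, thresholds, and the toy latency model in the demo are my own assumptions, purely for illustration.

```python
"""Minimal sketch of an iso-latency power controller (illustration only, not PEGASUS)."""
import random
import time


def iso_latency_controller(measure_tail_latency, set_power_cap, slo_ms,
                           min_watts, max_watts, step_watts=5,
                           period_s=1.0, cycles=20):
    """Lower per-server power caps while tail latency stays safely under the SLO.

    measure_tail_latency() -- returns the current tail latency in ms
    set_power_cap(watts)   -- applies a per-server power cap (a real system
                              might use something like RAPL limits)
    """
    cap = max_watts
    for _ in range(cycles):
        latency = measure_tail_latency()
        if latency > slo_ms:
            cap = max_watts                         # SLO at risk: restore full power
        elif latency < 0.8 * slo_ms:
            cap = max(min_watts, cap - step_watts)  # headroom: shave the cap a little
        set_power_cap(cap)
        time.sleep(period_s)


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs on its own: latency rises as the cap drops.
    state = {"cap": 100}

    def fake_latency():
        return 40 + (100 - state["cap"]) * 0.8 + random.uniform(-2, 2)

    def fake_set_cap(watts):
        state["cap"] = watts
        print(f"power cap set to {watts} W")

    iso_latency_controller(fake_latency, fake_set_cap, slo_ms=90,
                           min_watts=50, max_watts=100, period_s=0.01)
```

The point the paper makes is the choice of control signal: the loop watches end-to-end tail latency against the SLO rather than CPU utilization, and only gives power back when latency gets close to the limit.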

In August 2014, Facebook shared Autoscale, its method of using load balancing to reduce energy consumption.  Gigaom covered the idea.

The social networking giant found that when its web servers are idle and not taking user requests, they don’t need that much compute to function, thus they only require a relatively low amount of power. As the servers handle more networking traffic, they need to use more CPU resources, which means they also need to consume more energy.

Interestingly, Facebook found that during relatively quiet periods like midnight, while the servers consumed more energy than they would when left idle, the amount of wattage needed to keep them running was pretty close to what they need when processing a medium amount of traffic during busier hours. This means that it’s actually more efficient for Facebook to have its servers either inactive or running like they would during busier times; the servers just need to have network traffic streamed to them in such a way so that some can be left idle while the others are running at medium capacity.

Facebook's post on Autoscale is here.

Overall architecture

In each frontend cluster, Facebook uses custom load balancers to distribute workload to a pool of web servers. Following the implementation of Autoscale, the load balancer now uses an active, or “virtual,” pool of servers, which is essentially a subset of the physical server pool. Autoscale is designed to dynamically adjust the active pool size such that each active server will get at least medium-level CPU utilization regardless of the overall workload level. The servers that aren’t in the active pool don’t receive traffic.

Figure 1: Overall structure of Autoscale

We formulate this as a feedback loop control problem, as shown in Figure 1. The control loop starts with collecting utilization information (CPU, request queue, etc.) from all active servers. Based on this data, the Autoscale controller makes a decision on the optimal active pool size and passes the decision to our load balancers. The load balancers then distribute the workload evenly among the active servers. It repeats this process for the next control cycle.
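To illustrate the sizing decision inside that loop, here is a small sketch in Python.  It is not Facebook's Autoscale code; the target utilization, names, and numbers are assumptions chosen only to show the idea of shrinking the active pool until each active server sits at a medium load.

```python
"""Sketch of an Autoscale-style active-pool sizing decision (illustration only)."""
import math


def choose_active_pool_size(cpu_utilizations, pool_size,
                            target_util=0.55, min_active=1):
    """Decide how many servers stay in the active pool for the next control cycle.

    cpu_utilizations -- recent CPU utilization (0.0-1.0) of each active server
    pool_size        -- total number of physical servers behind the load balancer
    target_util      -- the "medium" utilization we want each active server near
    """
    total_work = sum(cpu_utilizations)            # current load in server-equivalents
    needed = math.ceil(total_work / target_util)  # servers needed at the target level
    return max(min_active, min(pool_size, needed))


if __name__ == "__main__":
    # A quiet period: 100 active servers each idling along at ~20% CPU.
    utils = [0.20] * 100
    print(choose_active_pool_size(utils, pool_size=100))  # -> 37
```

With traffic concentrated on 37 servers, each one runs near the medium target while the other 63 receive no requests at all, which is the idle-or-medium split the Gigaom piece describes as more efficient than spreading the load thinly across everything.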

Intelligent Cooling Algorithms, Google Data Center and Honda Manufacturing

Saving energy is something everyone knows is good when you can maintain environmental conditions for equipment.  Here are two different examples: one from Google in its data centers with its servers, and another from Honda in its painting process.

First, Google's Joe Kava explains what was done to improve cooling system performance.

Second, Honda explains what it did to cut energy consumption by 25% in the cooling system for its painting processes.

At the 1:21 mark, the Honda engineer explains the energy savings: http://youtu.be/bKLPXvZytIs?t=1m21s


This Honda video was so good it won an award.

Honda's Environmental Short Film Series Receives National Award Recognition

Telly Award recognizes outstanding "Green/Eco Friendly" video content

6/24/2013 12:00:00 PM


The first film in Honda's Environmental Short Film Series, Paint by Numbers, has been awarded two Telly Awards in the Green/Eco-Friendly and Social Responsibility categories. The Telly Awards, now in its 34th season, honors the best film and video productions, groundbreaking online video content, and outstanding local, regional, and cable TV commercials and programs.

Honda's Environmental Short Film Series highlights some remarkable initiatives - dreamed up and developed by Honda associates - that fulfill the company's vision for reducing its environmental impact and creating a sustainable future. Paint by Numbers, the first film in the series, tells the story of how Honda engineer Shubho Bhattacharya was inspired to develop technology to reduce the energy needed to operate the auto body painting system at Honda's manufacturing plant in Marysville, Ohio. Auto body painting accounts for the most energy use in Honda's production process. With the help of his fellow associates, Bhattacharya conceived Honda's Intelligent Paint Technology, which has cut Honda's North American manufacturing CO2 emissions by about 10,000 metric tons per year.

"Honda associates are always dreaming up innovative ways to reduce our environmental impact," said Marcos Frommer of Honda North America, Inc., one of the producers of the film series. "Short films are a great way to share our associates' sustainability initiatives, and we're honored to receive this recognition from the Telly Awards."

Google Expands to 5 Data Center Locations in Europe, 5 is a magic number

Google posted on data center expansion on Tuesday, Sept 23.  It is late Wednesday, Sept 24, so the announcement is old news.  There are two paragraphs that are interesting.

There will be 5 Google data centers operating in Europe.  5 is a minimum to have extremely high availability.  Some may think 2 or 3 is the right number, but when you think through operations, 5 is a much better number than 3.  The five are Dublin, Hamina, St. Ghislain, Eemshaven (rented), and Eemshaven (owned).
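As a back-of-envelope illustration of the operations argument (my own arithmetic, not Google's), assume equal-sized sites and a fixed regional load, then take one site down for planned maintenance while another fails unexpectedly:

```python
# Back-of-envelope: load on the surviving sites when one site is in planned
# maintenance and another fails. Assumes equal-sized sites and a fixed
# regional load of 100% (my assumption, for illustration only).
def per_site_load(total_sites, down_sites=2, regional_load_pct=100):
    normal = regional_load_pct / total_sites
    degraded = regional_load_pct / (total_sites - down_sites)
    return normal, degraded

for n in (3, 5):
    normal, degraded = per_site_load(n)
    print(f"{n} sites: {normal:.0f}% each normally -> {degraded:.0f}% each with 2 down")

# 3 sites: 33% each normally -> 100% each with 2 down
# 5 sites: 20% each normally -> 33% each with 2 down
```

With three sites, maintenance plus one failure leaves a single site carrying the entire region; with five, the three survivors only climb from 20% to about 33%.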

This will be Google’s fourth hyper efficient facility in Europe. Importantly, demand for Internet services remains so strong that the new building does not mean a reduction in expansion elsewhere. Our expansion will continue in Dublin in Ireland, in Hamina in Finland, and in St. Ghislain in Belgium. Our existing rented datacenter facility in Eemshaven also will continue to operate.

Google states that there is expansion at all of the owned facilities: Dublin, Hamina, and St. Ghislain.

The other paragraph says that Google has new data center designs it is building.

The new Dutch data centre will benefit from the latest designs in cooling and electrical technology. It will be free-cooled - taking advantage of natural assets like cool air and grey water to keep our servers cool. Our data centers use 50% less energy than a typical datacenter - and our intention is to run this new facility on renewable energy.

Intent is to run the facility on renewable energy.  Yippeee!!!  Go Green Data Centers

Any more details?  Nope.  Google is growing fast.  Is it growing faster than the rest?  I would say yes.