Google Announces LATAM Green Data Center Construction, Where Are the Other Two?

Google announced its first LATAM data center in Chile.

Quilicura, Chile

Located in a community just outside of Santiago, Chile, this will be Google's first data center in Latin America.

Read more about our Chile data center

Hello Chile!

In September 2012, we announced our plans to build a data center in the municipality of Quilicura, near Santiago, Chile. We’re now beginning construction, and we plan to start bringing the facility online by the end of 2013.

Building this data center in Chile is an exciting step for us. As Internet usage in Latin America grows, people are looking for information and entertainment, new business opportunities and better ways to connect with friends and family near and far. We’re building this data center to make sure that our users across Latin America and the world have the fastest and most reliable access possible to all of Google’s services.

We are also really excited about the facility itself. We estimate that our long-term investment in this data center will reach $150 million USD. The facility will be one of the most efficient and environmentally friendly in Latin America, built to the same high standards we use around the world. Once operational, the data center will employ up to 20 people in a variety of full-time and contractor roles, including computer technicians, engineers, and catering and security staff.


The news coverage is pretty good, but I haven't found anyone asking the question: where are the other two? What other two? Google, to build reliable Web 2.0 services, tries to have three sites for availability.

One of the more interesting pieces is emol.com's coverage. Google's Dan Costello was the data center spokesman and makes a good case for a green data center: energy efficient, air cooled, and using little water.

As Dan Costello, Google's director of data center operations, revealed, the company's growth means that each day it receives 3 billion searches, 10 years' worth of video is uploaded to YouTube, and 5 million businesses use Google Apps. The facility will give users in Chile and across Latin America better, "fast and reliable" access to all of the company's services and will promote the migration to cloud computing.

 

Costello noted that Chile's temperate climate is ideal for operating the center, since it allows cooling with outside air, and that the facility's water use will not affect Quilicura's drinking water system.

 

The Quilicura data center will join the eight centers already installed in the northern hemisphere (North America and Europe), plus four others currently under construction. It is expected to be ready by mid-2013.

So, where are the other two? Well, maybe three. I've known about this for a while but didn't have a good reason to blog it. How? Just look at the Google job postings.

 
Operations Program Manager, Geo Operations. Location: São Paulo. Team: Program Management. Apply now. This position is based in Sao Paulo, Brazil. ...
www.google.com/.../operations-program-manager-geo-operations-sao-paulo. html
 
Data Center Program Manager. Location: Belo Horizonte. Team: Program Management. Apply now. This position is based in Latin America. ...
www.google.com/.../data-center-program-manager-latin-america-2.html
 
Data Center Program Manager. Location: Lima. Team: Program Management. Apply now . This position is based in Latin America. ...
www.google.com/.../data-center-program-manager-latin-america-7.html

When you map these locations it looks like this.

[Map of the job posting locations]

Doesn't that make sense for a Google presence in LATAM vs. this?

[Map showing only the Quilicura, Chile location]

Note that the other LATAM locations do not necessarily mean Google will construct its own data centers there; the space could be leased wholesale data center space.

NASA uses Android phones for satellites, maybe machine control systems are next

NASA has figured out that using an Android phone is a cheaper way to build satellites.

 

Aug 27, 2012 - 10:30AM PT

 

Google in Space: NASA powers mini-satellites with Android phones


NASA is experimenting with new satellites that use off-the-shelf electronics to cut down on costs. At the heart of its new nanosatellite is a Google Nexus smartphone, which has both the processing power to run the orbiter and the sensors it needs to perform its mission.

NASA PhoneSat 1 testing

Today’s smartphone has many times the processing power of all the computers used during the Apollo moon landings. So why not use a smartphone to control a spacecraft? That’s the approach NASA is taking in its latest project, which uses off-the-shelf electronics, including a Nexus One Android phone, in the construction of a new nanosatellite.

As Android's growth continues, it seems likely that Android devices will soon find their way into machine control systems. Wouldn't it be nice if the same developers who work on mobile phones could develop for machine control systems? Maybe your data center in the future will run on Android.

Which do you like? Google's 5 data center best practices or Microsoft's list of 10

How you present information can make a big difference in how it is perceived. Here are two different ways of presenting what are fundamentally the same ideas, from Google and Microsoft.

It is fairly obvious which one is more user friendly.

Microsoft released its updated Top 10 Business Practices for Environmentally Sustainable Data Centers and posted a document here.

Google presents its Data Center Best Practices as a list of 5.

1. Measure PUE

You can’t manage what you don’t measure, so characterize your data center’s efficiency performance by measuring energy use. We use a ratio called PUE - Power Usage Effectiveness - to help us reduce energy used for non-computing functions, like cooling and power distribution. To effectively use PUE it’s important to measure often - we sample at least once per second. It’s even more important to capture energy data over the entire year - seasonal weather variations have a notable effect on PUE.
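Not part of Google's list, but to make the ratio concrete, here is a minimal sketch of how PUE could be computed from paired facility and IT power readings. The sample values and the short sampling window are invented for illustration only.

```python
# A minimal sketch (not Google's tooling) of the PUE calculation:
#   PUE = total facility energy / IT equipment energy.
# The readings below are invented; a real deployment would sample meters continuously
# (Google samples at least once per second) and aggregate over a full year.

facility_kw = [1250.0, 1310.0, 1275.0, 1400.0]  # total facility power (kW) at each sample time
it_kw       = [1000.0, 1040.0, 1010.0, 1080.0]  # IT equipment power (kW) at the same instants

# With evenly spaced samples, the ratio of the summed readings equals the energy ratio.
pue = sum(facility_kw) / sum(it_kw)
print(f"PUE over the sample window: {pue:.2f}")  # 1.0 is the ideal; everything above it is overhead
```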

2. Manage airflow

Good air flow management is fundamental to efficient data center operation. Start with minimizing hot and cold air mixing by using well-designed containment. Eliminate hot spots and be sure to use blanking plates for any unpopulated slots in your rack. We’ve found a little analysis can pay big dividends. For example, thermal modeling using computational fluid dynamics (CFD) can help you quickly characterize and optimize air flow for your facility without many disruptive reorganizations of your computing room. Also be sure to size your cooling load to your expected IT equipment, and if you are building extra capacity, be sure your cooling approach is energy proportional.

3. Adjust the thermostat

Raising the cold aisle temperature will reduce facility energy use. Don’t try to run your cold aisle at 70F; set the temperature at 80F or higher — virtually all equipment manufacturers allow this. For facilities using economizers (we strongly recommend it), running elevated cold aisle temperatures is critical as it enables more days of “free cooling” and more energy savings.

4. Use free cooling

“Free cooling” is removing heat from your facility without using the chiller. This is done by using low temperature ambient air, evaporating water, or using a large thermal reservoir. Chillers are the dominant energy using component of the cooling infrastructure; minimizing their use is typically the largest opportunity for savings. There is no one ‘right’ way to free cool - but water or air-side economizers are proven and readily available.
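As a back-of-the-envelope illustration (my own sketch, not Google's method), here is one way to estimate how many hours a year an air-side economizer could carry the load. The 80F cold aisle, 10F approach, and randomly generated temperatures are all assumptions standing in for a real weather file.

```python
# A rough, hypothetical estimate of air-side economizer ("free cooling") hours.
# Assumptions not taken from the post: an 80F cold aisle, and the economizer can hold
# that setpoint whenever outside air is at least 10F cooler than the cold aisle.
import random

random.seed(0)
cold_aisle_f = 80.0
approach_f = 10.0  # required margin between outside air and the cold aisle setpoint

# Stand-in for a year of hourly dry-bulb readings; use a real weather file in practice.
ambient_f = [55.0 + 25.0 * random.random() for _ in range(8760)]

free_hours = sum(1 for t in ambient_f if t <= cold_aisle_f - approach_f)
print(f"Estimated free-cooling hours: {free_hours} of {len(ambient_f)} "
      f"({100 * free_hours / len(ambient_f):.0f}%)")
```

Note that raising the cold aisle setpoint (practice 3) directly increases that count, which is why the two practices reinforce each other.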

5. Optimize power distribution

Minimize power distribution losses by eliminating as many power conversion steps as possible. For the conversion steps you must have, be sure to specify efficient equipment transformers and power distribution units (PDUs). One of the largest losses in data center power distribution is from the uninterruptible power supply (UPS); be sure to specify a high efficiency model. Also keep as high a voltage as close to the load as feasible to reduce line losses.
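To show why eliminating conversion steps matters, here is a hedged sketch that treats end-to-end distribution efficiency as the product of each step's efficiency. The step efficiencies and chain layouts are assumptions for illustration, not measured figures.

```python
# Hypothetical comparison of two power distribution chains. End-to-end efficiency is the
# product of each conversion step's efficiency, so eliminating a step or specifying a
# high-efficiency UPS shows up directly in the losses. All step efficiencies are illustrative.

def chain_efficiency(step_efficiencies):
    eff = 1.0
    for step in step_efficiencies:
        eff *= step
    return eff

legacy = [0.98, 0.90, 0.98, 0.97]  # transformer, older double-conversion UPS, PDU, step-down transformer
leaner = [0.98, 0.97, 0.98]        # transformer, high-efficiency UPS, PDU (one conversion step removed)

for name, steps in (("legacy chain", legacy), ("leaner chain", leaner)):
    eff = chain_efficiency(steps)
    print(f"{name}: {eff:.1%} delivered to the load, {1 - eff:.1%} lost in distribution")
```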

 
The Microsoft blog post is not as simple as Google’s.

Microsoft’s Top 10 Business Practices for Environmentally Sustainable Data Centers

Posted by Global Foundation Services in Data Centers, Efficiency and Sustainability

Dileep Bhandarkar, Ph.D., Distinguished Engineer, Global Foundation Services

Microsoft recognizes the tough challenges that data center and IT managers face today as they struggle to support their businesses in the face of rising costs and uncertainty about the future. But the fact is - being “lean and green” is good for both the business and the environment.

It isn’t always easy to know where to begin in moving to greener and more efficient operations. With that in mind, we are sharing our updated Top Ten Best Business Practices for Environmentally Sustainable Data Centers white paper. In this rapidly changing environment it is important that we all continually reassess and share our best practices with each other. For this reason, senior members of Microsoft’s Global Foundation Services (GFS) team have pooled their key learnings in this white paper.

As you’ll read in the list of best practices we’ve compiled, companies can make major gains by providing incentives for their teams to reduce energy consumption and drive greater efficiencies across the entire data center, and by employing a wide range of practices that collectively add up to significant gains. Microsoft has been using these practices for several years now and has found that in addition to helping improve environmental sustainability, they make the best use of our resources and help us stay tightly aligned with our core strategies and business goals.

Microsoft’s top ten best practices for creating sustainable data centers are based on some basic principles: 

Effective resource utilization matters. Energy efficiency is an important element in Microsoft business practices, but equally important is the effective use of resources deployed. We eliminate features that are not essential for operating the services. This principle drives our efforts to right size our servers based on application requirements. Virtualization also improves server utilization by consolidating multiple instances of an application on the same hardware. Our data center designs offer various levels of redundancy to meet the resiliency needs of the different applications.


Standardization reduces variability and improves agility and costs, while reducing errors. A major initiative in Microsoft data centers involves standardizing the platform. A high degree of variability in the infrastructure can increase costs. Standardizing on a small set of servers, network equipment and data center technologies can drive economies of scale and reduce support costs. Custom deployments are more error prone and expensive.

A holistic approach to total cost of ownership is essential. It is tempting to make purchase decisions based on acquisition costs, but often support and operating costs can be a dominant factor over the life of the equipment. The total cost of ownership should be evaluated against the value proposition of the equipment purchased. For example, consider the cost/performance of your servers instead of just performance. Make sure that reducing costs in one aspect of the operation does not increase cost somewhere else. Spending more on a higher efficiency power supply can reduce the total cost of ownership!


… 
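Microsoft's power supply example lends itself to a quick worked calculation. The sketch below is my own illustration with invented prices, loads, and efficiencies, not Microsoft's numbers, but it shows how a small premium for a more efficient power supply can pay back over the equipment's life.

```python
# A hypothetical lifetime-cost comparison to illustrate the power supply point above.
# Every number here is invented for the sake of the arithmetic, not taken from Microsoft.

server_load_w = 300.0            # average DC load per server
electricity_usd_per_kwh = 0.08
lifetime_hours = 4 * 8760        # assumed four-year service life

def lifetime_cost(psu_premium_usd, psu_efficiency):
    wall_kw = (server_load_w / psu_efficiency) / 1000.0  # AC draw at the wall, in kW
    energy_usd = wall_kw * lifetime_hours * electricity_usd_per_kwh
    return psu_premium_usd + energy_usd

standard = lifetime_cost(psu_premium_usd=0.0, psu_efficiency=0.82)
efficient = lifetime_cost(psu_premium_usd=30.0, psu_efficiency=0.92)
print(f"standard PSU: ${standard:,.0f} over the server's life")
print(f"high-efficiency PSU: ${efficient:,.0f} "
      f"(saves ${standard - efficient:,.0f} despite costing more up front)")
```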

Google's Data Center Infrastructure lives by the Gospel of Speed

It is interesting when some people say they have the best data center.  Some are trying to build the cheapest data center.  But best and cheapest don't necessarily drive the right behaviors.  What should you focus on?  What do the businesses need?  Do they care whether the data center is the best or cheapest around?  What they do notice, other than outages, is how fast things work every second of every day.

Google's Urs Hoelzle has a post on Think With Google titled The Google Gospel of Speed.

The Google Gospel of Speed

‘Fast is better than slow’ is a cornerstone of Google’s philosophy. Here, search guru and SVP of Infrastructure Urs Hoelzle explains why.


Pick a query, any query. ‘Weather, New York City.’ ‘Nineteenth-century Russian literature.’ ‘When is the 2012 Super Bowl?’ Now type it into a Google search box. As you type, we predict the rest of your query, comb through billions of web pages, rank the sites, images, videos, and products we find, and present you with the very best results. The entire process takes, in many cases, less than a tenth of a second – it’s practically instant.

If it isn’t, we’ll suffer. Our research shows that if search results are slowed by even a fraction of a second, people search less (seriously: A 400ms delay leads to a 0.44 percent drop in search volume, data fans). And this impatience isn’t just limited to search: Four out of five internet users will click away if a video stalls while loading. But even though the human attention span has become remarkably fickle, much of the web remains slow. The average web page takes 4.9 seconds to load – in a world where fractions of a second count, that’s an eternity.

Who wants the best or the cheapest if it is slow?
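To put that 0.44 percent figure in perspective, here is a quick back-of-the-envelope calculation using the 3 billion searches per day mentioned earlier in this post:

```python
# Back-of-the-envelope arithmetic combining the 0.44 percent figure above with the
# 3-billion-searches-per-day number Costello cited earlier in this post.

searches_per_day = 3_000_000_000
drop_fraction = 0.0044           # a 400 ms delay -> 0.44% fewer searches, per Google's research

lost_per_day = searches_per_day * drop_fraction
print(f"Searches lost per day to a 400 ms delay: {lost_per_day:,.0f}")
print(f"Searches lost per year: {lost_per_day * 365:,.0f}")
```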

‘Fast is better than slow’ has been a Google mantra since our early days, and it’s more important now than ever. The internet is the engine of growth and innovation, so we’re doing everything we can to make sure that it’s more Formula 1 than Soap Box Derby. Speed isn’t just a feature, it’s the feature.

One of the reasons I like the post is that it inspires the team.

“At Google, we don’t plan on stopping until the web is instant, so that when you click on a link the site loads immediately, or a video starts without delay. What amazing things could happen then?”

Are you inspired when executives say we want cheaper data centers?  Or we want the best?

It is easy for Google to compare their speed vs. Facebook, Amazon, Microsoft, Apple, Tencent, Baidu, Weibo, and others.  Don't you think Google has their competitors up on the dashboards as well?

It’s why we have live performance dashboards on big screens in many of our engineering offices, so that teams can see latency levels across our services. It’s why, a few years ago, when we failed to live up to our principles and things started to slow down, we called ‘Code Yellow!’ and directed engineers and product managers on major product teams to drop what they were doing and work on making stuff faster. Speed is simply part of our engineering culture.

pre-IPO Video gives hints of Facebook's strategy

I was talking to my neighbor, enjoying the evening breeze, and she asked what I thought of the future of Facebook.  I told her the number 1 issue is that Google is laser focused on beating Facebook, and that is Facebook's biggest challenge.  Why?  Because Facebook is Google's top competitor for ad dollars.

HBR has an interesting article on the right way to run an IPO road show and of course chooses to poke at Facebook.

The Right Way to Run an IPO Road Show

Over the past 17 years I've worked with hundreds of executives to raise billions of dollars — from private equity to hedge funds to IPOs. I've seen road shows done right, but I've also seen every mistake in the book.

One place where HBR digs at Facebook is the pre-IPO video.

Procrastinating creates not only a very stressful environment but ultimately a show that is not as well-conceived, customized to the audience, and polished as it must be. If you need proof just look at Facebook's stale video pitch, which was scrapped on the second day of its road show amidst widespread complaints from important institutional investors that it left them little time for their key questions, and was boring to boot.

I watched the video and found it was obvious why Facebook bought Instagram.

 

 

The data center related topic is brought up around the 27 minute mark.


I don't know about you, but when I watched the video, the easiest person to watch was Chris Cox, VP of Product.

Business Insider called Cox “a triple threat -- an engineer who can build company-defining products, an operator who can recruit and manage good people, and a long-term strategic thinker,” and named Cox number 2 on its list of 10 Rock Star Tech Execs You’ve Never Heard Of.[5] He is also known for his focus on bringing people and technology together. “Technology does not need to estrange us from one another,” Cox told Wired. “The physical reality comes alive with the human stories we have told there.”[6]

Cox envisions a future in which what your friends recommend on social networks plays a bigger role in what you buy, do, or watch on TV. He told The Wall Street Journal that he believes there will be a time “when you turn on the TV, and you see what your mom and friends are watching, and they can record stuff for you. Instead of 999 channels, you will see 999 recommendations from your friends."[1]