Google’s Next Strategic Data Center Purchase in NYC?

Google is rumored close to purchasing 111 8th Ave.


Google Near Purchase of NYC Landmark Building at 111 Eighth Ave.

By SAM GUSTIN | Posted 6:22 PM 10/27/10

Last month, we told you that the gargantuan 111 Eighth Ave. -- a building that occupies an entire city block in Chelsea and is home to Google's (GOOG) New York headquarters -- is for sale.
Now, it appears that the likely buyer is none other than Google itself. Rumored sale price? A cool $2 billion, according to the New York Post. 111 Eighth Ave. is the former Port Authority headquarters and one of the city's largest buildings, at nearly 3 million square feet.
It also happens to be one of the East Coast's key "telecom hotels" -- centralized locations where groups of communications and networking firms hook up their hardware. Google is already the largest tenant, leasing 500,000 square feet over three floors.

The NY Daily News says the price may go as high as $2.9 billion.

Google reportedly to pay fourfold increase of $2.9 billion for 111 Eighth Avenue building

BY NICOLE CARTER
DAILY NEWS STAFF WRITER

Wednesday, October 27th 2010, 5:22 PM

The building is reportedly the fourth largest office building in the City.

Google is apparently ogling Chelsea’s 111 Eighth Ave. building ... for a mind-blowing $2.9 billion.

Given its carrier hotel status this could be Google’s most expensive data center asset.

Google reportedly already rents 550,000 square feet of space in the building. Because the building is equipped for high-tech businesses, there are plenty of other interested buyers, including foreign sheiks and wealthy locals, the Observer reports.

Read more

Do you have an Elephant and Pig in your data center? Hadoop momentum continues

I am sure most of you have heard of Hadoop.

I've started studying Hadoop and its adoption in data centers.  Google started the effort with its MapReduce and Google File System.

Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers.

Why should you care about Hadoop? Look at who the users are - Amazon Web Services, Adobe, AOL, Baidu, eBay, Facebook, Google, Hulu, IBM, LinkedIn, Quantcast, Rackspace, Twitter, and Yahoo.

Yahoo! is proud of being the largest Hadoop user. Here are its 2009 numbers: 25,000 nodes.

image

And in 2010: 38,000 servers for 170 PB of storage.

image

Apache Pig is a platform for analyzing large data sets.

Pig

Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

At the present time, Pig's infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject). Pig's language layer currently consists of a textual language called Pig Latin, which has the following key properties:
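To make the MapReduce model that Hadoop and Pig compile down to more concrete, here is the classic word-count example sketched in plain Python. This is only an illustration of the map/shuffle/reduce phases, not Hadoop's actual Java API; in a real cluster the framework distributes these phases across thousands of nodes.

```python
# A minimal sketch of the MapReduce programming model, in plain Python.
# map emits (word, 1) pairs; shuffle groups pairs by key (which the
# Hadoop framework does between phases); reduce sums counts per word.
from collections import defaultdict

def map_phase(document):
    """Emit a (word, 1) pair for every word in the document."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group all emitted values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

documents = ["the elephant and the pig", "the pig"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
```

Pig Latin scripts express the same kind of pipeline (LOAD, GROUP, FOREACH ... GENERATE) at a higher level, and the Pig compiler turns them into sequences of jobs like the one above.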

Read more

Google has the most Internet Traffic and Data Centers and Servers

Arbor Networks reports on Google’s network traffic.

Google Sets New Internet Traffic Record

by Craig Labovitz

In their earnings call last week, Google announced a record 2010 third-quarter revenue of $7.29 billion (up 23% from last year). The market rejoiced and Google shares shot past $615 giving the company a market cap of more than $195 billion.

This month, Google broke an equally impressive Internet traffic record — gaining more than 1% of all Internet traffic share since January. If Google were an ISP, as of this month it would rank as the second largest carrier on the planet.

Only one global tier1 provider still carries more traffic than Google (and this ISP also provides a large portion of Google’s transit).

In the graph below, I show a weighted average percentage of Internet traffic contributed by the search / mobile OS / video / cloud giant. As in earlier posts, the Google data comes from 110+ ISPs around the world participating in ATLAS. The multiple shaded colors represent different Google ASN and reflect ongoing global traffic engineering strategies.

googletraffic

If you count caching, Google is even bigger.

Google now represents an average 6.4% of all Internet traffic around the world. This number grows even larger (to as much as 8-12%) if I include estimates of traffic offloaded by the increasingly common Google Global Cache (GGC) deployments and error in our data due to the extremely high degree of Google edge peering with consumer networks.

Google has more traffic, more data centers and servers than anyone else.

How high can Google go?

Read more

MacRumors speculates on Apple’s Data Center

MacRumors speculates on what Apple’s future data center plans are.


Apple's NC Data Center Plot Larger Than Originally Thought

Wednesday October 27, 2010 10:19 AM EST
Written by Eric Slivka

Ongoing investigations over at All Things Digital have revealed that Apple's new data center that is set to open "any day now" in Maiden, North Carolina may be the site of even grander plans than the potential doubling in size discovered late last week. According to that earlier research, Apple's initial proposal to representatives of Catawba County where the project is located included a schematic showing two adjacent data centers that would appear to total on the order of one million square feet, with only one of those buildings having been constructed so far.


Apple's 70-acre parcel across Startown Road from existing data center

New research from All Things Digital indicates, however, that Apple's plans may even extend beyond that planned one million square-foot facility on 183 acres, as the company also owns 70 acres across the street from that site.

The scuttlebutt around Maiden is that the company intends to use it for office space. But that seems unlikely.
A more plausible explanation is that this parcel, too, will be used for data center space.

Read more

Facebook posts on its Data Center Efficiency Project

Facebook data center engineering’s Jay Park posts on what Facebook presented at SVLG Data Center Efficiency Summit.

Optimizing Data Center Energy Usage

by Jay Park on Wednesday, October 20, 2010 at 8:32am

When it comes to optimizing data centers for energy usage, the minutest changes can have significant impact. Facebook’s growth over the years has expanded our data center footprint greatly, and we've learned many lessons and applied some of the industry’s best practices to make our data centers much more efficient, saving us money and using less energy. At the Silicon Valley Leadership Group's Data Center Efficiency Summit last week, we shared these lessons and the new strategies we've implemented with the data center community at large so they too can utilize these techniques, multiplying the energy savings and environmental protection across the infrastructure of many other companies.  

Based on this graph

 

A 9% improvement in IT load yielding 276 kW of savings means the IT critical capacity was about 3 megawatts. Assuming low-power servers at around 6,000 servers per megawatt, that puts roughly 18,000 servers in the environment.

Jay discussed saving 3 watts per server.

We discovered that the server fans were spinning faster than necessary, so we worked with the server manufacturers to optimize their fan speed control algorithm while keeping temperatures within the recommended range. For each server, this saves up to 3 watts and requires less air (up to 8 cubic feet per minute), which quickly adds up in a 56,000 square foot facility.

3 watts per server comes to 54,000 watts. With 56,000 sq ft and 3 MW of power, the density is only about 50 watts per sq ft, which fits with the low-density image below. Note the amount of open space.
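The back-of-envelope math can be checked in a few lines of Python. The 276 kW, 9%, 6,000 servers/MW, 3 W/server, and 56,000 sq ft figures all come from the post; everything else is derived arithmetic.

```python
# Sanity-checking the post's estimates. All inputs are from the text;
# derived values use the post's own rounding of IT capacity to 3 MW.
it_capacity_kw = 276 / 0.09          # 276 kW is 9% of IT load -> ~3,067 kW, i.e. ~3 MW
servers = 3 * 6000                   # 3 MW at ~6,000 servers/MW -> 18,000 servers
fan_savings_w = servers * 3          # 3 W saved per server -> 54,000 W
density_w_sqft = 3_000_000 / 56_000  # ~54 W/sq ft (the post rounds to 50)

print(f"{it_capacity_kw:.0f} kW capacity, {servers} servers, "
      f"{fan_savings_w} W fan savings, {density_w_sqft:.0f} W/sq ft")
```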

The inlet temperature is not mentioned in the post, but I recall Jay saying 68 to 72 degrees, which fits with the rise in return temperature.

In the end, we raised the temperature for each CRAH unit's return air to 81 degrees Fahrenheit from 72 degrees Fahrenheit.

The group I was sitting with during Facebook’s presentation wasn’t overly impressed, but at 50 watts per sq ft and a 3 megawatt IT load in a leased (not owned) facility, the Facebook engineering group most likely had a very short ROI payback and wanted to keep its capital investment to a minimum.

Read more