4 MW vs. 12 MW - CPU vs. hybrid CPU/GPU solution, which is greener?

Nvidia has a press announcement regarding the world's fastest supercomputer.


NVIDIA Tesla GPUs Power World's Fastest Supercomputer


Half the Size, Lower Power and 50% Faster Than World's Top Supercomputer

The Tianhe-1A Supercomputer, located at National Supercomputer Center, Tianjin

SANTA CLARA, CA -- (Marketwire) -- 10/28/2010 -- Tianhe-1A, a new supercomputer revealed today at HPC 2010 China, has set a new performance record of 2.507 petaflops, as measured by the LINPACK benchmark, making it the fastest system in China and in the world today(1).

Tianhe-1A epitomizes modern heterogeneous computing by coupling massively parallel GPUs with multi-core CPUs, enabling significant achievements in performance, size and power. The system uses 7,168 NVIDIA® Tesla™ M2050 GPUs and 14,336 CPUs; it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.

One of the points in the press release is the power comparison of an all-CPU vs. the hybrid CPU/GPU solution.

More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 megawatts. Thanks to the use of GPUs in a heterogeneous computing environment, Tianhe-1A consumes only 4.04 megawatts, making it 3 times more power efficient -- the difference in power consumption is enough to provide electricity to over 5000 homes for a year.
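The release's power claims are easy to sanity-check. A minimal Python sketch, assuming roughly 11 MWh of annual electricity use per home (a typical US figure, not stated in the release):

```python
# Back-of-the-envelope check of the press release's power claims.
cpu_only_mw = 12.0   # claimed power for an all-CPU 2.507 petaflop system
hybrid_mw = 4.04     # Tianhe-1A's actual power draw

savings_mw = cpu_only_mw - hybrid_mw          # ~7.96 MW saved
efficiency_ratio = cpu_only_mw / hybrid_mw    # ~2.97, i.e. roughly 3x

# Energy saved over a year, in megawatt-hours
savings_mwh_per_year = savings_mw * 24 * 365  # ~69,700 MWh

# Assumption: ~11 MWh per home per year (typical US household)
homes_powered = savings_mwh_per_year / 11.0

print(round(efficiency_ratio, 2), int(homes_powered))
```

With that household assumption the savings would power over 6,000 homes for a year, comfortably consistent with the release's "over 5000 homes" claim.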

Read more

Google’s next Strategic Data Center Purchase in NYC?

Google is rumored close to purchasing 111 8th Ave.


Google Near Purchase of NYC Landmark Building at 111 Eighth Ave.

By SAM GUSTIN | Posted 6:22 PM 10/27/10

Last month, we told you that the gargantuan 111 Eighth Ave., a building which occupies an entire city block in Chelsea and is home to Google's (GOOG) New York headquarters, is for sale.
Now, it appears that the likely buyer is none other than Google itself. Rumored sale price? A cool $2 billion, according to the New York Post. 111 Eighth Ave. is the former Port Authority headquarters and one of the city's largest buildings, at nearly 3 million square feet.
It also happens to be one of the East Coast's key "telecom hotels" -- centralized locations where groups of communications and networking firms hook up their hardware. Google is already the largest tenant, leasing 500,000 square feet over three floors.

The NY Daily News says the price may go as high as $2.9 billion.

Google reportedly to pay four fold increase on $2.9 billion 111 Eighth Avenue building

BY NICOLE CARTER
DAILY NEWS STAFF WRITER

Wednesday, October 27th 2010, 5:22 PM

The building is reportedly the fourth largest office building in the City.


Google is apparently ogling Chelsea’s 111 Eighth Ave. building ... for a mind-blowing $2.9 billion.

Given its carrier hotel status, this could be Google's most expensive data center asset.

Google reportedly already rents 550,000 square feet of space in the building. Because the building is equipped for high-tech businesses, other interested buyers are plentiful and include foreign sheiks and wealthy locals, the Observer reports.

Read more

Do you have an Elephant and Pig in your data center? Hadoop momentum continues

I am sure most of you have heard of Hadoop.

I've started studying Hadoop and its adoption in data centers.  Google started the effort with its MapReduce and Google File System.

Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license.[1] It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers.
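The MapReduce model that Hadoop borrows from Google's papers is simple to sketch. Below is a toy single-process word count in Python showing the map, shuffle, and reduce phases; Hadoop's value is running this same dataflow across thousands of nodes, and the documents here are made up for illustration:

```python
from collections import defaultdict

def map_phase(document):
    # The mapper emits (key, value) pairs: one (word, 1) per word.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # The framework groups all values by key before reducing.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # The reducer folds each key's values into a final result.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["Hadoop scales out", "Hadoop stores petabytes"]
pairs = [p for doc in docs for p in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"])  # 2
```

In a real Hadoop job the map and reduce functions run on separate nodes over HDFS blocks, and the shuffle happens over the network.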

Why should you care about Hadoop? Look at who the users are - Amazon Web Services, Adobe, AOL, Baidu, eBay, Facebook, Google, Hulu, IBM, LinkedIn, Quantcast, Rackspace, Twitter, and Yahoo.

Yahoo! is proud of being the largest Hadoop user. Here are their 2009 numbers: 25,000 nodes.


And their 2010 numbers: 38,000 servers with 170 PB of storage.


Apache Pig is a platform for analyzing large data sets.

Pig

Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

At the present time, Pig's infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject). Pig's language layer currently consists of a textual language called Pig Latin, which has the following key properties:
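To give a feel for what Pig abstracts away, here is a rough Python sketch of the LOAD -> GROUP -> COUNT dataflow that a short Pig Latin script would express. The records and field names are invented for illustration, and Pig would compile the real script into a sequence of Map-Reduce jobs over HDFS data rather than run it in memory:

```python
from itertools import groupby

# Hypothetical input table: (tier, url, status) records, as if LOADed
# from a log file.
records = [
    ("web", "/index", 200),
    ("web", "/about", 200),
    ("api", "/v1/users", 500),
]

# Equivalent of: grouped = GROUP records BY tier;
#                FOREACH grouped GENERATE group, COUNT(records);
records.sort(key=lambda r: r[0])  # groupby needs sorted input
hits_per_tier = {
    tier: sum(1 for _ in rows)
    for tier, rows in groupby(records, key=lambda r: r[0])
}
print(hits_per_tier)  # {'api': 1, 'web': 2}
```

The point of Pig Latin is that the two-line GROUP/FOREACH script replaces all of this plumbing, and the planner parallelizes it for you.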

Read more

Google has the most Internet Traffic and Data Centers and Servers

Arbor Networks reports on Google’s network traffic.

Google Sets New Internet Traffic Record

by Craig Labovitz

In their earnings call last week, Google announced a record 2010 third-quarter revenue of $7.29 billion (up 23% from last year). The market rejoiced and Google shares shot past $615 giving the company a market cap of more than $195 billion.

This month, Google broke an equally impressive Internet traffic record — gaining more than 1% of all Internet traffic share since January. If Google were an ISP, as of this month it would rank as the second largest carrier on the planet.

Only one global Tier 1 provider still carries more traffic than Google (and this ISP also provides a large portion of Google’s transit).

In the graph below, I show a weighted average percentage of Internet traffic contributed by the search / mobile OS / video / cloud giant. As in earlier posts, the Google data comes from 110+ ISPs around the world participating in ATLAS. The multiple shaded colors represent different Google ASN and reflect ongoing global traffic engineering strategies.

googletraffic

If you count caching, Google is even bigger.

Google now represents an average 6.4% of all Internet traffic around the world. This number grows even larger (to as much as 8-12%) if I include estimates of traffic offloaded by the increasingly common Google Global Cache (GGC) deployments and error in our data due to the extremely high degree of Google edge peering with consumer networks.
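Arbor's figure is a weighted average across the participating ISPs: each ISP reports what fraction of its traffic is Google, weighted by that ISP's total volume. A hedged Python sketch of that calculation, with invented traffic volumes and fractions (not ATLAS data):

```python
# Hypothetical per-ISP measurements: (total traffic in Gbps,
# fraction of that traffic attributable to Google's ASNs).
isps = [
    (800.0, 0.070),
    (500.0, 0.055),
    (200.0, 0.060),
]

# Weighted average: large carriers count for more than small ones.
total_traffic = sum(volume for volume, _ in isps)
google_share = sum(volume * frac for volume, frac in isps) / total_traffic

print(f"{google_share:.1%}")  # prints 6.4% with these invented numbers
```

Cache deployments like GGC serve traffic that never crosses the measured ISP boundaries, which is why including them pushes the estimate up toward 8-12%.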

Google has more traffic, more data centers, and more servers than anyone else.

How high can Google go?

Read more

MacRumors speculates on Apple’s Data Center

MacRumors speculates on what Apple’s future data center plans are.


Apple's NC Data Center Plot Larger Than Originally Thought

Wednesday October 27, 2010 10:19 AM EST
Written by Eric Slivka

Ongoing investigations over at All Things Digital have revealed that Apple's new data center that is set to open "any day now" in Maiden, North Carolina may be the site of even grander plans than the potential doubling in size discovered late last week. According to that earlier research, Apple's initial proposal to representatives of Catawba County where the project is located included a schematic showing two adjacent data centers that would appear to total on the order of one million square feet, with only one of those buildings having been constructed so far.


Apple's 70-acre parcel across Startown Road from existing data center

New research from All Things Digital indicates, however, that Apple's plans may even extend beyond that planned one million square-foot facility on 183 acres, as the company also owns 70 acres across the street from that site.

The scuttlebutt around Maiden is that the company intends to use it for office space. But that seems unlikely.
A more plausible explanation is that this parcel, too, will be used for data center space.

Read more