Facebook's low power storage data center

Facebook has shared more details with Wired.com on its 3rd data center in Prineville.

The plan is to use the building to house a brand-new type of low-power, deep-storage device that Facebook engineers will cook up over the next six to nine months. They’re designing a hard-disk storage server that powers off when it’s not in use, says Tom Furlong, vice president of site operations at Facebook. “It’s going to sit in a dedicated building that is optimized to support this device that we don’t need to access very often.”

What will this building be like? Boxy and quiet, with rows of low-powered machines clicking on and off, says Furlong.
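The power case for spinning drives down when idle is easy to sketch in back-of-the-envelope terms. All the numbers below are illustrative assumptions for a deep-storage facility, not Facebook's figures:

```python
# Illustrative estimate of savings from powering off idle archive disks.
# Every figure here is an assumption for the sketch, not Facebook data.
DRIVES = 10_000            # drives in the facility
ACTIVE_W = 8.0             # watts per spinning 3.5" HDD (typical)
STANDBY_W = 0.8            # watts per spun-down drive
IDLE_FRACTION = 0.95       # share of time a deep-storage drive sits idle

always_on_kw = DRIVES * ACTIVE_W / 1000
spin_down_kw = DRIVES * (ACTIVE_W * (1 - IDLE_FRACTION)
                         + STANDBY_W * IDLE_FRACTION) / 1000

print(f"always-on:  {always_on_kw:.1f} kW")
print(f"spin-down:  {spin_down_kw:.1f} kW")
print(f"savings:    {1 - spin_down_kw / always_on_kw:.0%}")
```

Even with generous assumptions, the savings come almost entirely from the idle fraction, which is exactly why a dedicated building for rarely accessed data makes the approach pay off.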

Olympics show a future with lots of mobile viewing

GigaOm has a post on the viewership of Xfinity customers.

By the numbers: How the Olympics helped to take multi-screen mainstream

On how many different devices did you watch the Olympic games? If you’re anything like the average Xfinity customer, the answer is between two and three – and that’s not even counting your living room TV. Welcome to the first multi-screen Olympics.

Olympics

Comcast released an astonishing piece of data this week: The average Xfinity customer who viewed live streams of the games online authenticated 2.4 devices. It’s worth noting that this is in addition to millions of TV screens used to watch the London games; those 2.4 devices are just mobile phones, tablets and PCs. In other words: Millions of people used not one or two, but three to four screens to watch the Olympics!

The data that got my attention, though, is the mobile use split between iOS and Android.


Facebook's 3rd data center in Prineville is different than the rest, a backup DC

A standard rule for many is to have offsite backup.  But when you have as much data as Facebook, that would mean shipping such a huge quantity of tapes or hard drives that it would be a logistics nightmare.  And no WAN connection could be big enough for the flow of data into Facebook.

GigaOm's Katie Fehrenbacher reports that the 3rd new data center in Prineville is actually a deep storage facility.

The building, which will potentially be 84,000 square feet, will be filled with disc or flash storage and will act as the “backup to the backup to the backup,” storage for the facility’s data, explained Facebook’s Ken Patchett.

This method makes sense, and I actually use it at home/office as well.  Whenever I touch my Parallels environment on my Mac, the whole VM needs to be backed up, which can be 20 - 30 GB.  This change gets streamed to a Drobo FS from my home to my office, which is a separate building connected by gigabit Ethernet.  Backing up this much data regularly to the cloud over my 5 megabit uplink would be painful and take all day or more, vs. an hour or two depending on how well the wireless connection works.
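The arithmetic behind "all day vs. an hour or two" is worth spelling out. A quick sketch, taking a 25 GB backup and assuming round-number link speeds (real throughput is lower once protocol overhead and contention are counted):

```python
# Rough transfer-time math for a 25 GB VM backup over different links.
# Link speeds are illustrative assumptions; real-world throughput varies.
SIZE_GB = 25
size_megabits = SIZE_GB * 8 * 1000  # 1 GB ~ 8,000 megabits

links_mbps = {
    "5 Mbit/s cloud uplink": 5,
    "50 Mbit/s wireless hop": 50,
    "1 Gbit/s Ethernet": 1000,
}

hours = {name: size_megabits / mbps / 3600 for name, mbps in links_mbps.items()}
for name, h in hours.items():
    print(f"{name}: {h:.1f} h")
```

The 5 Mbit uplink works out to roughly 11 hours for a single backup, while a 50 Mbit wireless hop gets it done in just over an hour, which is why the on-site gigabit path wins so decisively.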

Will on-site backup become more of a standard?  Google, Facebook, Amazon, Apple, and Microsoft most likely do this.  It makes a lot of sense for hospitals, given the size of imaging data.  Financials need to back up offsite for regulatory reasons.

Google's Data Center Infrastructure lives by the Gospel of Speed

It is interesting when some people say they have the best data center.  Some are trying to build the cheapest data center.  But best and cheap don't necessarily drive the right behaviors.  What should you focus on?  What do the businesses need?  Do they care if the data center is the best or cheapest around?  What they do see, other than outages, is how fast things work every second of every day.

Google's Urs Hoelzle has a post on Think With Google on The Google Gospel of Speed.

The Google Gospel of Speed

‘Fast is better than slow’ is a cornerstone of Google’s philosophy. Here, search guru and SVP of Infrastructure Urs Hoelzle explains why.


Pick a query, any query. ‘Weather, New York City.’ ‘Nineteenth-century Russian literature.’ ‘When is the 2012 Super Bowl?’ Now type it into a Google search box. As you type, we predict the rest of your query, comb through billions of web pages, rank the sites, images, videos, and products we find, and present you with the very best results. The entire process takes, in many cases, less than a tenth of a second – it’s practically instant.

If it isn’t, we’ll suffer. Our research shows that if search results are slowed by even a fraction of a second, people search less (seriously: A 400ms delay leads to a 0.44 percent drop in search volume, data fans). And this impatience isn’t just limited to search: Four out of five internet users will click away if a video stalls while loading. But even though the human attention span has become remarkably fickle, much of the web remains slow. The average web page takes 4.9 seconds to load – in a world where fractions of a second count, that’s an eternity.
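That 0.44 percent figure sounds tiny until you scale it. A quick sketch of what it means in absolute terms, using an assumed round number for daily query volume (not a Google figure):

```python
# The study quoted above: a 400 ms delay -> 0.44% drop in search volume.
# The daily query count is an assumed round number for illustration.
daily_queries = 1_000_000_000
drop_fraction = 0.0044

lost_per_day = daily_queries * drop_fraction
print(f"queries lost per day: {lost_per_day:,.0f}")
```

At a billion queries a day, a 400 ms slowdown would quietly erase millions of searches, which is why fractions of a second get executive attention.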

Who wants the best or cheapest if it is slow?

‘Fast is better than slow’ has been a Google mantra since our early days, and it’s more important now than ever. The internet is the engine of growth and innovation, so we’re doing everything we can to make sure that it’s more Formula 1 than Soap Box Derby. Speed isn’t just a feature, it’s the feature.

One of the reasons I like the post is that it inspires the team.

“At Google, we don’t plan on stopping until the web is instant, so that when you click on a link the site loads immediately, or a video starts without delay. What amazing things could happen then?”

Are you inspired when executives say we want cheaper data centers?  Or we want the best?

It is easy for Google to compare their speed vs. Facebook, Amazon, Microsoft, Apple, Tencent, Baidu, Weibo, and others.  Don't you think Google has their competitors up on the dashboards as well?

It’s why we have live performance dashboards on big screens in many of our engineering offices, so that teams can see latency levels across our services. It’s why, a few years ago, when we failed to live up to our principles and things started to slow down, we called ‘Code Yellow!’ and directed engineers and product managers on major product teams to drop what they were doing and work on making stuff faster. Speed is simply part of our engineering culture.
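The core of such a dashboard is just percentile math over a stream of latency measurements. A minimal sketch, with made-up sample data standing in for live measurements:

```python
# Minimal sketch of the latency summary a dashboard might display.
# The sample measurements below are made up for illustration.
import math
import statistics

latencies_ms = [92, 105, 88, 130, 97, 410, 101, 95, 99, 120]

p50 = statistics.median(latencies_ms)
# nearest-rank 95th percentile over the sorted samples
p95 = sorted(latencies_ms)[math.ceil(0.95 * len(latencies_ms)) - 1]
print(f"p50={p50} ms  p95={p95} ms  max={max(latencies_ms)} ms")
```

Note how one slow outlier leaves the median untouched but dominates the tail percentile, which is exactly why teams watch p95/p99 rather than averages.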

3D printing a building, why not a data center?

TED has a video on a 3D building method.

The professor presenting this is an industrial engineer, which could be the reason I have talked about the concept of using robotics to build data centers as well.  Have you?

Behrokh Khoshnevis is a professor of Industrial & Systems Engineering and is the Director of the Manufacturing Engineering Graduate Program at the University of Southern California (USC). He is active in CAD/CAM, robotics and mechatronics related research projects that include the development of novel Solid Free Form, or Rapid Prototyping, processes (Contour Crafting and SIS), automated construction of civil structures, development of CAD/CAM systems for biomedical applications (e.g., restorative dentistry, rehabilitation engineering, haptics devices for medical applications), autonomous mobile and modular robots for assembly applications in space, and invention of technologies in the field of oil and gas. His research in simulation has aimed at creating intelligent simulation tools that can automatically perform many simulation functions that are conventionally performed by human analysts. His textbook, "Discrete Systems Simulation", and his simulation software EZSIM benefit from some aspects of his research in simulation. He routinely conducts lectures and seminars on invention and technology development.

He is a Fellow member of the Society for Computer Simulation and a Fellow member of the Institute of Industrial Engineering. He is a senior member of the Society of Manufacturing Engineers. His website: http://www-rcf.usc.edu/~khoshnev/