Dell Modular Data Center Tour running Microsoft Bing Maps

Barton George has a post featuring Dell’s Ty Schmitt and Mark Bailey giving a tour of Dell’s modular data center in Longmont, CO that runs Bing Maps.

A Walk-through of Dell’s Modular Data Center

In my last entry I featured a video with the Bing Maps imagery team. In it they talked about why they went with Dell’s Modular Data Center (MDC) to help power and process all the image data they crunch. For a deeper dive and a look at one of these babies from the inside, join Ty Schmitt and Mark Bailey in the following video as they walk you through the MDC and how it works.

Here is the YouTube video of the tour.

Illogic or Logic of building your own servers

Facebook is creating more visibility for build-your-own servers.

The logic of the approach is in this Bloomberg article.

Dell Loses Orders as Facebook Do-It-Yourself Servers Gain: Tech

By Ian King and Dina Bass - Sep 11, 2011 9:01 PM PT

When Facebook Inc. set out to build two new data centers, engineers couldn’t find the server computers they wanted from Dell Inc. (DELL) or Hewlett-Packard Co. (HPQ). They decided to build their own.

“We weren’t able to get exactly what we wanted,” Frank Frankovsky, Facebook’s director of hardware design, said at a conference on data-center technology last month.

Whether this logic will be successful is countered by ZDNet’s “Between the Lines”:

Facebook DIY servers really poaching from Dell, HP, IBM? It's too early to tell

By Larry Dignan | September 12, 2011, 3:50am PDT

Summary: Do-it-yourself servers designed by Facebook are allegedly poaching server sales from HP, IBM and Dell, but the data is inconclusive at best.

And Larry makes an excellent point that I totally agree with.

Bloomberg reports that do-it-yourself servers used by the likes of Facebook, Google and Microsoft in data centers threaten Dell, HP and IBM. When I saw the headline, I got excited. Why? I thought there would be some quantification in it. Aside from the fact that 20 percent of the server market is customized, it’s unclear how many orders Dell, HP and IBM were really losing. There aren’t any concrete examples or figures to back up the premise.

You can argue the Logic of building your own servers or the Illogic of building your own servers.  But from what I am hearing and seeing, and from other indicators of what is going on, the momentum to build your own servers is growing.  Companies are putting the infrastructure in place to get it their way.

Hold the ‘Pickles’

“People want to be able to build it their way,” Frankovsky said at the Dell-Samsung Chief Information Officer Forum in Half Moon Bay, California. “They kind of want a Burger King: ‘I don’t like pickles -- why do I have to have pickles?’”

Building your own servers is a niche that addresses a problem in the server industry: a lack of R&D, which results in commoditized servers and a lack of innovation.

Do you see Google going back to buying servers from OEMs?

For the same reason Apple has changed smartphones, tablets, and computers by integrating SW and HW, isn’t it logical to integrate SW and HW design in the data center?

Think Carnegie and Vertical Integration.

One of the earliest, largest and most famous examples of vertical integration was the Carnegie Steel company. The company controlled not only the mills where the steel was made, but also the mines where the iron ore was extracted, the coal mines that supplied the coal, the ships that transported the iron ore and the railroads that transported the coal to the factory, the coke ovens where the coal was cooked, etc. The company also focused heavily on developing talent internally from the bottom up, rather than importing it from other companies.[1] Later on, Carnegie even established an institute of higher learning to teach the steel processes to the next generation.

Many of the companies looking to build their own servers use open source, so they already control the software supply chain.  Now they want control of the hardware.

Missouri passes Bill creating Tax Exemptions for Data Centers

KY3 reports on Missouri passing a job creation bill that benefits data centers.

The bill also creates state and local sales and use tax exemptions for new and expanding data centers and permits donation lease agreements between municipalities and data center projects.

The government document is here.

DATA CENTERS

The act authorizes state and local sales and use tax exemptions for new and expanding data centers and permits donation lease agreements between municipalities and data center projects.

This is good news for the Grass Fed Data Center folks in Missouri who are championing a biomass Green Data Center effort.

Blogging vs. Editorial Process, speed & quantity vs. quality

I have spent a bunch of time in the Publishing Industry, and remember in 1986 using Aldus PageMaker 1.0 on a Mac.  Companies that were in my regular discussions were Altsys, Macromedia, Quark, Adobe, and a bunch of other high-end publishing & printing technologies.  In 1994 I was a renegade and developed Verdana as the first TrueType font where screen readability was the priority, not print.  Leaving behind print changes what you can do.  Being a blogger is different from print, as you focus on speed & quantity vs. quality.

What happens when you leave behind the Print Editorial Process?  Businessweek has a good post on Blogs vs. Magazine processes.

We're proud here of the work we do as a team to lift the level of each story. But what a slog. It's unthinkable for the blog world. Consider the path of a story as it winds its way through our system.

I looked at a draft of the story over the weekend, suggested changes, and spent nine hours editing it yesterday. (Usually two people share this job, but this week we're short-handed.) Then I sent it to the copy desk. There, people who are new to the story read it to see if it makes sense, if the thinking is logical, the context clear, the grammar and spelling ok, the names and titles correct. Meantime, some facts, such as names and Web addresses, are checked by a researcher. The copy desk sends the story, with questions, back to the writer and me. At the same time, the top editors of the magazine have a chance to read the story and suggest changes of their own. Potentially contentious or delicate stories are often sent upstairs to a McGraw-Hill lawyer, who might suggest further adjustments.

Today we work answering the questions, clearing up doubts, filling in holes, and cutting the story to fit on the page.

Then, wouldn't you know, the story goes back to the desk. They edit again--mostly proofreading, making sure questions have been answered, and writing display language this time around--and put it on a literal sheet of paper. Then that paper is circulated back to us. We read it and make fixes, and then carry it to the close desk, where editors make the final changes and push the button to send it to the printing press.

Much of this process is a good idea when you are thinking about sending content to a printer.  But what about a blog post?  Note this post was written in 2005, and describes a blog process.

The editorial process of blogging is far simpler. We write, we publish. This takes our journalism into a new sphere, but carries inherent risks. How do we handle them? First, we reduce risk by avoiding the sorts of stories that require heavy editing. We don't blog investigative pieces, for example, or heavy financial analysis. Second, we consult our gut. If it looks risky, we'll push it toward the more edited BW Online or the magazine. Finally, when we make mistakes--which we do--we aim to correct them quickly and ask for your understanding. We're into something new, and all of us, you and I, are only coming to understand it as we create it.

Much of the editorial process was constrained by the print process.  Blogging is constrained by the speed of the internet and data center software.  Quality is important, but do end users care about the intangible qualities?

Much of what gets the traffic is the fastest, most relevant content, and it is hard to beat that with higher-quality material that shows up days later.


Facebook partners with Open Data Center Alliance to contribute Server and Data Center Designs

Facebook’s Open Compute Project and the Open Data Center Alliance announced a partnership at the Intel Developer Forum.

INTEL DEVELOPER FORUM, San Francisco, Sept. 13, 2011 – The Open Data Center Alliance (ODCA) today announced a collaboration with the Open Compute Project (OCP) on system and data center specifications to drive adoption of efficient data center and infrastructure design; spur rapid hardware innovation; and encourage greater openness and industry collaboration.

When I received the press release I let the PR firm know I was at Intel Developer Forum (IDF) and could talk to the ODCA executive.  I received a call within 3 minutes saying they had a slot in 25 minutes.  I went upstairs, and there were ODCA Board of Directors Chairman Marvin Wheeler and Frank Frankovsky, a founding member of the Open Compute Project and director of technical operations at Facebook.


It feels like I have a recurring meeting to run into Frank – OCP Summit in Palo Alto, GigaOm Structure in SF, IDF in SF, and OCP in NYC.  So it was easy to quickly get down to what the partnership is delivering.

If you look at the ODCA models, you may notice there are no server or data center models, as ODCA is focused on the cloud.

We envision an IT industry that gives both users and suppliers a simpler, more secure, more efficient path to cloud computing.

With many people thinking of private clouds as well as public clouds, it would make sense for there to be a reference point to compare public clouds vs. private clouds.  Whose hardware would you use to build a private cloud that is willing to open source its designs?  This is where Facebook’s Open Compute Project fills a need, grounding the cloud to the reality of implementation on server hardware in a real data center.

Everyone has full access to these specifications. We want you to tell us where we didn’t get it right and suggest how we could improve. And opening the technology means the community will make advances that we wouldn’t have discovered if we had kept it secret.

Server Technology

Open Compute servers are designed to be efficient, inexpensive and easy to service. They’re also vanity free, with no extra plastic and significantly fewer parts than traditional servers.

Data Center Technology

Designed in tandem with our servers, the data center maximizes mechanical performance and thermal and electrical efficiency. It accepts 277 volts of AC, so more energy makes it from the grid to the data center to server components.
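One reason the 277 VAC figure matters (my own aside, not from the Open Compute text): 277 V is the line-to-neutral voltage of the 480 V three-phase distribution commonly used in US data centers, so servers that accept 277 VAC can be fed straight from distribution without an extra step-down transformer stage. A quick sketch of the arithmetic, assuming 480 V distribution:

```python
import math

# 480 V line-to-line is a common US data center distribution voltage
# (an assumption here; the OCP quote only states 277 VAC).
line_to_line = 480.0

# In a three-phase system, line-to-neutral voltage = V_LL / sqrt(3)
line_to_neutral = line_to_line / math.sqrt(3)

print(round(line_to_neutral))  # prints 277
```

Skipping a conversion stage means fewer transformer losses, which is what the quote means by more energy making it from the grid to the server components.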

Both OCP and ODCA have end-user-focused leadership and take an open source approach, which makes for a natural partnership between the organizations.  ODCA will present at the OCP event in NYC.

Members of both organizations are engaging in joint projects initially focused on rack-scale infrastructure; ultra-efficient server and storage designs; and scalable, open systems management. Additional details on these projects will be announced at the OCP Summit on October 27.