EMC's Innovation group builds thermal profile data center robot for $200

One of my readers, Vivek, sent a link to a cool robot used in the data center to collect thermal profile data.  You could install permanent thermal sensors in your data center, have someone wander around to collect readings, or send a robot around.  Having the robot go around 24x7x365 seems like a good choice, and given it is built on an iRobot you get a clean floor too. :-)


The idea is to build a low-cost platform to monitor environmental parameters in a data center. We initially planned to carry an Arduino with DS18B20 temp sensors around and build a temperature map of the data center. But with that method we also need to track the indoor location, which looked tedious and error prone. It makes a good thermal detector but not a good way to build a thermal map. So we brainstormed with our team, and someone joked about putting it on a Roomba and driving it around. The idea looked frugal: you can either put hundreds of sensors in your data center or take a few sensors and walk around. The two approaches differ technically, but the latter is very low cost and good enough for quick data center cooling fixes.
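The walk-around approach boils down to pairing each temperature reading with a position and binning the samples into a grid. A minimal sketch of that idea in Python (the function name, grid size, and sample data are my own illustration, not from the EMC project):

```python
from collections import defaultdict

def thermal_map(samples, cell_size=1.0):
    """Bin (x, y, temp) samples into grid cells and average each cell.

    samples: iterable of (x_meters, y_meters, temperature_celsius)
    Returns {(col, row): mean_temperature} -- a coarse thermal map.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, temp in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        sums[cell] += temp
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Example: two readings that fall in the same 1 m cell get averaged.
readings = [(0.2, 0.3, 22.0), (0.8, 0.1, 24.0), (1.5, 0.2, 30.0)]
grid = thermal_map(readings)
# grid[(0, 0)] == 23.0, grid[(1, 0)] == 30.0
```

The cell size trades spatial resolution against how long the robot must drive to cover every cell.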

I had a chance to have an e-mail discussion with Vivek, and one of the questions I had is how he knows where the robot is in the data center.  The answer: they know the start point, and they track the wheel movement, which traces out a path of where the robot has been.  But if the robot is kicked, then the location is unknown.  
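What Vivek describes is classic dead reckoning: integrate wheel travel from a known start point. A hedged sketch of differential-drive odometry (the function, variable names, and wheel-base value are my illustration, not EMC's code):

```python
import math

def dead_reckon(x, y, heading, left_m, right_m, wheel_base_m=0.25):
    """Update a differential-drive pose from per-wheel travel distances.

    left_m / right_m: distance each wheel rolled since the last update.
    Returns the new (x, y, heading).  As the post notes, if the robot is
    kicked or the wheels slip, this estimate silently diverges from reality.
    """
    distance = (left_m + right_m) / 2.0        # forward travel of the body
    turn = (right_m - left_m) / wheel_base_m   # change in heading, radians
    heading += turn
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# Drive straight 1 m from the origin while facing along +x:
pose = dead_reckon(0.0, 0.0, 0.0, 1.0, 1.0)
# pose == (1.0, 0.0, 0.0)
```

Equal wheel travel moves the robot straight ahead; unequal travel rotates it, which is exactly the information the wheel encoders provide.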
 
Seems like this is a good project for a summer intern to try at your data center.
 
Here is a video of the robot.

 

Rankings of Top Respected companies with big data center footprints

Harris Interactive has a poll of the most respected companies.  What I thought would be interesting is to see where the companies with big data center footprints rank in this list.


The following companies are some of the big players in data centers, and it is interesting to think about how data centers play a role in their business.

Amazon.com #1

Apple #2

Google #4

Microsoft #15

Dell #26

IBM #28

HP #34

Verizon #36

AT&T #39

Facebook #42

Are you ready for the Pacific NW Megaquake?

With all the news about Sandy in the Northeast, many learned whether their data centers could ride out a 100-year-flood natural disaster.

In the Pacific NW, the big risk is a megaquake.


Oregon Live estimates the financial impact to Oregon alone at $32 billion.

The next great Cascadia subduction-zone earthquake will kill thousands in Oregon and cause at least $32 billion in economic losses unless preparations are radically overhauled, a state panel says.

When, not if, the magnitude-9.0 quake strikes -- let alone an accompanying tsunami -- Oregon will face the greatest challenge in its history, the state earthquake commission said in a 290-page draft report released Monday to The Oregonian.

Now, you may think that Eastern Washington and Eastern Oregon are a safe distance from the threat of a tsunami.  But when a quake this big hits, it affects all parts of the infrastructure.  Pacific NW fiber cables could be broken, water lines break, electrical systems suffer cascading failures, and diesel fuel comes under federal management.

Transmission towers may topple into the river, blocking ships. Fires, landslides and explosions will proliferate. Hydrants and sprinkler systems won't work.

There will be no water or sewer service, no electricity and no ATMs, telephones, television, radio or Internet. Willamette River bridges will be impassable. Food will soon run out.

Responding to the disaster will be difficult, experts found, because of a sort of emergency gridlock. To restore phone service, crews will need restored electricity. To bring back power, workers will require repaired roads and bridges. To fix highways, crews will need restored fuel delivery and distribution.

One mitigating factor, though, is that Cascadia earthquakes are spaced hundreds of years apart.

Earthquake magnitude

The Cascadia subduction zone can produce very large earthquakes ("megathrust earthquakes"), magnitude 9.0 or greater, if rupture occurs over its whole area. When the "locked" zone stores up energy for an earthquake, the "transition" zone, although somewhat plastic, can rupture. Great Subduction Zone earthquakes are the largest earthquakes in the world, and can exceed magnitude 9.0. Earthquake size is proportional to fault area, and the Cascadia Subduction Zone is a very long sloping fault that stretches from mid-Vancouver Island to Northern California. It separates the Juan de Fuca and North American plates. Because of the very large fault area, the Cascadia Subduction Zone could produce a very large earthquake. Thermal and deformation studies indicate that the locked zone is fully locked for 60 kilometers (about 40 miles) downdip from the deformation front. Further downdip, there is a transition from fully locked to aseismic sliding.[6]
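The claim that earthquake size is proportional to fault area follows from the seismic moment. For reference, the standard relations (these are textbook seismology, not part of the quoted article) are:

```latex
% Seismic moment: rigidity (mu) times rupture area (A) times average slip (D)
M_0 = \mu A D
% Moment magnitude, with M_0 in newton-metres
M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right)
```

For a given rigidity and slip, a larger rupture area $A$ raises $M_0$ and hence $M_w$, which is why a fault running from mid-Vancouver Island to Northern California can produce a magnitude-9 event.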

In 1999, a group of Continuous Global Positioning System sites registered a brief reversal of motion of approximately 2 centimeters (0.8 inches) over a 50 kilometer by 300 kilometer (about 30 mile by 200 mile) area. The movement was the equivalent of a 6.7 magnitude earthquake.[7] The motion did not trigger an earthquake and was only detectable as silent, non-earthquake seismic signatures.[8]

Great Earthquakes
Estimated year (2005 source[9]) | Estimated year (2003 source[10]) | Interval (years)
about 9 pm, January 26, 1700 (NS) | (both sources) | 780
780-1190 CE | 880-960 CE | 210
690-730 CE | 550-750 CE | 330
350-420 CE | 250-320 CE | 910
660-440 BCE | 610-450 BCE | 400
980-890 BCE | 910-780 BCE | 250
1440-1340 BCE | 1150-1220 BCE | unknown

Earthquake timing

The last known great earthquake in the northwest was the 1700 Cascadia earthquake. Geological evidence indicates that great earthquakes may have occurred at least seven times in the last 3,500 years, suggesting a return time of 300 to 600 years. There is also evidence of an accompanying tsunami with every earthquake; indeed, one line of evidence for these quakes is tsunami damage, along with Japanese records of the tsunamis.[11]
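The 300-to-600-year return time is easy to sanity-check against the intervals in the table above (a quick calculation of mine, not from the sources):

```python
# Intervals in years between great Cascadia earthquakes, taken from the
# interval column of the table above; the oldest interval is unknown.
intervals = [780, 210, 330, 910, 400, 250]

mean_interval = sum(intervals) / len(intervals)
# mean_interval == 480.0, inside the 300-600 year return window cited above
```

The individual intervals range from 210 to 910 years, so "when, not if" still comes with very wide error bars.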

 

Big Data Changing Healthcare, AI is better and cheaper than your Dr.

GigaOm's Derrick Harris has a post on how researchers using AI in simulation were able to come up with cheaper and better treatment than your doctor.  This could increase data center use in healthcare.

 

Credit: Indiana University


Specifically, Bennett and Hauser found via a simulation of 500 random cases that their model decreased the cost per unit of outcome change to $189 from the $497 without it, an improvement of 58.5 percent. They found their original model improved patient outcomes by nearly 35 percent, but that tweaking a few parameters could bring that number to 41.9 percent.

I've been spending a fair amount of time at hospitals, not because I am sick, but because of potential projects to make them more efficient.

Even though there are some people who would see this as bad, there are also many who see the potential.

The idea behind the research, carried out by Casey Bennett and Kris Hauser, is simple and gets to the core of why so many people care so much about data in the first place: If doctors can consider what’s actually happening and likely to happen instead of relying on intuition, they should be able to make better decisions.

Making the Data Center Talk - Panel at DCD NYC Mar 12, 2013

Datacenter Dynamics NYC Mar 13, 2013 has a panel that you should stick around for.

PANEL: MAKING THE DATA CENTER TALK - DCIM EVOLUTION AND IMPLEMENTATION

 

 

At its core the purpose of Data Centre Infrastructure Management is to provide a more thorough view of the operations that drive DC costs and functionality. Hardly surprising then that the market has seen an explosion in DCIM solutions coming to market. However, has this resulted in a catch-all term that creates confusion rather than clarity? Has the end-user been left behind as vendors find ever more classes of solutions, metrics and monitoring? Our panel of seasoned experts will draw on their wealth of experience to clarify what DCIM really is and how you can gain a substantial ROI with the right strategy.
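One concrete example of the kind of metric DCIM tools roll facility sensor data into is PUE (Power Usage Effectiveness), total facility power divided by IT power. A minimal sketch of the calculation (my illustration, not tied to any vendor's product):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT gear); cooling and
    power-distribution overhead push real facilities above that.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1500 kW overall to run 1000 kW of IT load:
# pue(1500, 1000) == 1.5
```

Tracking a metric like this over time, rather than as a one-off audit, is the kind of ROI story the panel is likely to debate.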

 

Tamara Budec
Vice President Critical Systems and Engineering Americas, Goldman Sachs

Chris Crosby
CEO, Compass Datacenters

Nic Bustamante
Manager of Data Center Operations Engineering, Microsoft

Glen Neville
Director of Engineering, Deutsche Bank

Paul Fox
Executive Director of Enterprise Data Centers Operations & Engineering, Morgan Stanley