Remember when high-availability data centers, like military ones, had satellite dishes? With the growth of satellite connectivity, dishes may be coming back as part of data centers. The WSJ reports on Google buying a satellite-imaging startup.
The Skybox team will initially work with Google's Maps business. Google Maps uses images from roughly 1,000 sources currently. Most of these images of the Earth are updated every few months or years. If Skybox can help Google update this information daily, it could help people respond to incidents, such as disasters, more quickly and help direct responses.
However, longer term, Skybox's technology may also help with Google's goal of spreading Internet access more widely.
"Skybox's mission is about more than just imaging," said David Cowan, partner at Bessemer Venture Partners, which invested in Skybox. "Skybox is disrupting how satellites are deployed in space and that has implications for the types of global communication challenges that Google plans to address."
Mobile devices are the electronic devices people carry more than anything else, but when you have no cell coverage your device is only as good as the content you have downloaded that doesn't require a data connection.
Someone asked the question: what if you could put a cell network in a helicopter? You can't provide everyday cell coverage that way, but you can find people who are lost. Range Networks posts on this idea in Iceland.
That is what you get with coast guard helicopters flying about with an OpenBTS-based solution on board, scouring the Icelandic highlands for (extremely) lost souls during large-scale search & rescue missions.
Rögg of Reykjavik, led by technical director Baldvin Hansson, has created a complete system using OpenBTS and Range's SDR1 for a helicopter-mounted network which can pick up cell phone signals up to 35 km away, map them on iPad tablets, and lead the crew to swoop in and rescue someone while the up to 500-person search party is still pulling on its boots. They call it Norris for short, the Norris Positioning System officially. (But nothing to do with GPS - they use the timing advance value from the GSM connection to map the location.)
It's not just faster, it's better -- they used to fly around and...look! Any rain, snow or fog usually meant nothing to see, so they would ground the Super Puma helicopter and send everybody slogging. Now they have a tool which makes a fast rescue under even inclement conditions possible.
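The timing-advance trick deserves a quick illustration. In GSM, the base station tells each handset how much earlier to transmit so its burst lands in the right time slot; each timing advance (TA) step corresponds to one bit period (about 3.69 µs) of round-trip delay, or roughly 550 m of range. A minimal Python sketch of that arithmetic (my own illustration, not Rögg's code) shows why the system tops out around 35 km:

```python
# Rough distance estimate from a GSM timing advance (TA) value.
# Each TA step is one GSM bit period (~3.69 microseconds) of round-trip
# delay, which works out to roughly 550 m of one-way range.
# Illustrative sketch only; the function name is hypothetical.

SPEED_OF_LIGHT_M_S = 299_792_458
GSM_BIT_PERIOD_S = 48 / 13_000_000  # ~3.69 us

def ta_to_distance_m(ta: int) -> tuple[float, float]:
    """Return the (min, max) distance in metres implied by a TA value (0-63)."""
    if not 0 <= ta <= 63:
        raise ValueError("GSM timing advance is a 6-bit value (0-63)")
    step_m = SPEED_OF_LIGHT_M_S * GSM_BIT_PERIOD_S / 2  # one-way distance per TA step
    return ta * step_m, (ta + 1) * step_m

if __name__ == "__main__":
    lo, hi = ta_to_distance_m(63)
    print(f"TA=63 puts the phone roughly {lo/1000:.1f}-{hi/1000:.1f} km from the helicopter")
```

A single TA value only gives a range ring around the aircraft, so presumably readings taken from several positions as the helicopter moves are what let the crew narrow the location down on the map.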
Wired posts on how an OpenBTS cell network can cost a small fraction of a proprietary solution.
Range has already brought GSM service–the same type of network that carries voice calls and text messages elsewhere in the world–to Macquarie Island, a small island just outside the Antarctic Circle. This is preferable to walkie talkies or Wi-Fi because it provides wider coverage while using less energy. And although the network has a satellite uplink to connect it with the rest of the world, it doesn’t depend on satellites for local communications, which is essential to the safety of field researchers.
GSM networks like the one on the island usually cost about a million dollars to build, says Range Networks CEO Ed Kozel. But Range is able to bring the technology to Antarctica for just a few thousand dollars using an open source platform called OpenBTS, short for Open Base Transceiver Station. All you need to run a GSM network with OpenBTS is radio software and an off-the-shelf Linux server. “The legacy infrastructures are why most operators are so expensive to run, but we took a clean slate approach,” Kozel explains.
Gigaom Research has a webinar on June 25, 2014, on Making Sense of Sensor Data.
Maybe you’ve heard of the Internet of Things, and maybe you’re skeptical. But this isn’t just about thermostats and personal pedometers. It’s about fleet optimization, supply chain management, container shipping, manufacturing, sentiment analysis, and fraud prevention, too.
Analysis of streaming data focuses on determining not just the “what and why,” but also the “what’s next.” By combining sensor data with historical data, even deeper insights can be extracted, equipment breakdowns averted, money saved and efficiencies gained.
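To make the "what's next" part concrete, the basic pattern is to judge each incoming reading against a baseline built from historical data and flag trouble early. Here is a minimal Python sketch of that idea; the class, thresholds, and readings are my own illustration, not anything from the Gigaom webinar:

```python
# Minimal illustration of combining streaming sensor readings with
# historical data: flag readings that drift far from the historical
# baseline so a breakdown can be investigated before it happens.
# All names, thresholds, and values are hypothetical.

from collections import deque
from statistics import mean, stdev

class SensorBaseline:
    def __init__(self, history_size: int = 1000, z_threshold: float = 3.0):
        self.history = deque(maxlen=history_size)  # rolling window of historical readings
        self.z_threshold = z_threshold

    def observe(self, reading: float) -> bool:
        """Return True if this reading looks anomalous versus the history."""
        anomalous = False
        if len(self.history) >= 30:  # need enough history to judge
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

# Usage: feed a stream of, say, bearing-vibration readings from a fleet vehicle.
baseline = SensorBaseline()
for value in (0.92, 0.95, 0.9, 1.02, 0.97) * 10 + (2.4,):
    if baseline.observe(value):
        print(f"Reading {value} deviates sharply from history -- schedule an inspection")
```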
After spending the last few months intensely discussing a range of technologies in the data center industry, something was bothering me. I understood what each vendor's technology did, but as I kept asking about performance and other operating issues, I wasn't getting the answers I wanted. The simple thing I want to know is "how well does this technology work?" If someone uses it, what issues will they run into? By solving one set of problems, what new problems do they pick up?
Telling me what customers you have as references tells me you have done a good job selling your service, but that doesn't mean it works well. Sometimes the people who make the purchasing decisions are far removed from the operating issues. Having conversations with operations staff is one of the ways to get to the truth. Even if you have a nice-looking report, I'll still be suspicious.
Hearing from someone who uses a technology in operations is one of the most credible sources. Alternatively, push the vendor to answer, "How well does this work?" And when they tell you how it works, repeat: I know how it works; I want to know how well it works in operation.
It was predictable that, with Google sharing its use of machine learning in a mathematical model of a mechanical system, others would say they can do it too. DCK has a post on Romonet and Vigilent, two other companies that apply AI concepts in data centers.
Google made headlines when it revealed that it is using machine learning to optimize its data center performance. But the search giant isn’t the first company to harness artificial intelligence to fine-tune its server infrastructure. In fact, Google’s effort is only the latest in a series of initiatives to create an electronic “data center brain” that can analyze IT infrastructure.
...
One company that has welcomed the attention around Google’s announcement is Romonet, the UK-based maker of data center management tools.
...
Vigilent, which uses machine learning to provide real-time optimization of cooling within server rooms.
Google has been using machine learning for a long time and applies it to many other things, such as its Google Prediction API.
What is the Google Prediction API?
Google's cloud-based machine learning tools can help analyze your data to add the following features to your applications:
Customer sentiment analysis
Spam detection
Message routing decisions
Upsell opportunity analysis
Document and email classification
Diagnostics
Churn analysis
Suspicious activity identification
Recommendation systems
And much more...
Here is a YouTube video from 2011 in which Google tells developers how to use this API.
Learn how to recommend the unexpected, automate the repetitive, and distill the essential using machine learning. This session will show you how you can easily add smarts to your apps with the Prediction API, and how to create apps that rapidly adapt to new data.
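To give a flavor of what "adding smarts to your apps" looked like in code, here is a hedged Python sketch of training a classifier and requesting a prediction through google-api-python-client. The method and field names follow the Prediction API roughly as it was documented around v1.6; the project, bucket, model ID, and credentials file are placeholders, and the details may not match whatever version you find today:

```python
# Hedged sketch of using the Google Prediction API (circa v1.6) to train a
# text classifier from a labeled CSV in Cloud Storage, then label a new
# message (e.g. for message routing). Project, bucket, and model names are
# placeholders; exact field names may differ between API versions.
from google.oauth2 import service_account
from googleapiclient.discovery import build

PROJECT_ID = "my-project"            # placeholder
MODEL_ID = "support-ticket-router"   # placeholder

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",          # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/prediction"],
)
service = build("prediction", "v1.6", credentials=creds)

# Training data: rows of "label,text" uploaded to Cloud Storage beforehand.
service.trainedmodels().insert(
    project=PROJECT_ID,
    body={"id": MODEL_ID, "storageDataLocation": "my-bucket/tickets.csv"},
).execute()

# Once training completes, classify a new message.
result = service.trainedmodels().predict(
    project=PROJECT_ID,
    id=MODEL_ID,
    body={"input": {"csvInstance": ["my invoice was charged twice"]}},
).execute()
print(result.get("outputLabel"))     # e.g. "billing"
```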
So you are all pumped up to get AI into your data center. But here are two things you need to be aware of that can make your projects harder to execute.
First, the quality of your data. Everyone has heard "garbage in, garbage out," but when you create machine learning systems the accuracy of the data is critical. Google's Jim Gao, their data center "boy genius," discusses one example.
Catching Erroneous Meter Readings
In Q2 2011, Google announced that it would include natural gas as part of ongoing efforts to calculate PUE in a holistic and transparent manner [9]. This required installing automated natural gas meters at each of Google's DCs. However, local variations in the type of gas meter used caused confusion regarding erroneous measurement units. For example, some meters reported 1 pulse per 1000 scf of natural gas, whereas others reported a 1:1 or 1:100 ratio. The local DC operations teams detected the anomalies when the realtime, actual PUE values exceeded the predicted PUE values by 0.02 - 0.1 during periods of natural gas usage.
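The detection side of that story is simple to sketch; the hard part was knowing each meter's pulse-to-scf ratio in the first place. Once the model produces an expected PUE, intervals where the measured PUE runs well above it while gas is flowing point back at a suspect meter. A hypothetical Python illustration (the threshold, field names, and numbers are mine, not Google's):

```python
# Hypothetical illustration of catching a mis-configured gas meter:
# convert pulses to standard cubic feet using the ratio the meter is
# *supposed* to use, then flag intervals where the actual PUE runs well
# above the model's prediction while gas is in use.

PUE_DEVIATION_THRESHOLD = 0.02  # the paper cites deviations of 0.02 - 0.1

def pulses_to_scf(pulses: int, scf_per_pulse: float) -> float:
    """Convert raw meter pulses to standard cubic feet of natural gas."""
    return pulses * scf_per_pulse

def flag_suspect_intervals(intervals):
    """intervals: iterable of dicts with actual_pue, predicted_pue, gas_scf."""
    suspects = []
    for row in intervals:
        gas_in_use = row["gas_scf"] > 0
        deviation = row["actual_pue"] - row["predicted_pue"]
        if gas_in_use and deviation > PUE_DEVIATION_THRESHOLD:
            suspects.append(row)  # likely a wrong pulse ratio on the gas meter
    return suspects

readings = [
    {"gas_scf": pulses_to_scf(12, 1000.0), "actual_pue": 1.14, "predicted_pue": 1.10},
    {"gas_scf": 0.0, "actual_pue": 1.11, "predicted_pue": 1.10},
]
for row in flag_suspect_intervals(readings):
    print("Check the gas meter configuration for this interval:", row)
```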
Going through all your data inputs to make sure the data is clean is painful. Google used 70% of its data to train the model and 30% to validate the model. Are you that disciplined? Do you have a mechanical engineer on staff who can review the accuracy of your mathematical model?
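If you want the same discipline in your own shop, the split itself is the easy part. Here is a minimal scikit-learn sketch, assuming you already have a table of operational features alongside measured PUE; the synthetic data, feature count, and small network are illustrative stand-ins, not Google's actual model, which is a larger neural network described in Gao's paper:

```python
# Minimal sketch of the 70% train / 30% validate discipline with scikit-learn.
# The features and data are synthetic stand-ins for operational inputs such as
# IT load, wet-bulb temperature, and pump speeds.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((2000, 5))  # stand-in operational features
y = 1.1 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.005, 2000)  # stand-in PUE

# Hold out 30% of the history to validate the model, as Google did.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

err = mean_absolute_error(y_val, model.predict(X_val))
print(f"Mean absolute PUE error on held-out data: {err:.4f}")
```

The point of the held-out 30% is that it tells you whether the model generalizes to conditions it has not seen, rather than merely memorizing the history it was trained on.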
Second, the culture in your company is an intangible to many. But if you have been around enough data center operations staff, their habits and methods are not intangible; they are real, and they are what makes so many things happen. Going back to Google's Jim Gao: he had a wealth of subject matter expertise on machine learning and other AI methods at Google, he had help from Google staff in deploying the models, and he had the support of the VP of data centers and the local data center operations teams.
I would like to thank Tal Shaked for his insights on neural network design and implementation. Alejandro Lameda Lopez and Winnie Lam have been instrumental in model deployment on live Google data centers. Finally, this project would not have been possible without the advice and technical support from Joe Kava, as well as the local data center operations teams.
Think about these issues of data quality and culture in your data center before you attempt an AI project. When you dig into automation projects, they are rarely as easy as people thought they would be.