Can Carbon Relay deliver AI efficiency like Google’s data center group?

The media is covering Foxconn’s backing of Carbon Relay. Here is TechCrunch’s post.

Taiwanese technology giant Foxconn International is backing Carbon Relay, a Boston-based startup emerging from stealth today, that’s harnessing the algorithms used by companies like Facebook and Google for artificial intelligence to curb greenhouse gas emissions in the technology industry’s own backyard — the datacenter.

According to LinkedIn, founder Matt Provo has been with the company since August 2015.

Carbon Relay has a graph on its website showing how closely its model’s predictions match the actual PUE.

[Graph: Carbon Relay’s predicted vs. actual PUE]


It looks a lot like the predictive accuracy Google reported in this paper.

[Graph: Google’s predicted vs. actual PUE]

I don’t know Matt Provo or anyone at Carbon Relay. You can see the team on this page, which has their LinkedIn profiles. From a quick look, I don’t see any mechanical engineers or data center operations people.

Google’s AI/ML energy efficiency project was headed up by Jim Gao, whom I do know. Jim is a mechanical engineer from UC Berkeley. Go Bears! I also have my engineering degree from Cal, though from long before Jim went there. Jim spent years working in Google’s data center group before starting down the machine learning path, and he had one of the biggest sources of training data available: Google’s data centers. That may explain why Jim’s predictive models look more accurate than Carbon Relay’s.

Jim published his latest findings as part of his work at Alphabet’s DeepMind, where he is now a team lead.

[Graph: DeepMind PUE prediction results]

So can Carbon Relay’s 14 people deliver a solution as good as DeepMind’s Jim Gao? Jim has gone through the painstaking effort of getting clean, accurate data out of the systems. There are so many small details. I love the example where Jim ran the model to be maximally energy efficient, so it turned off all the systems, bringing energy use to zero. And Jim has overcome the resistance to change from a well-trained data center operations staff to get them to trust a computer model.
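Jim’s turn-everything-off story is a classic case of an optimizer exploiting a missing constraint. Here is a toy sketch (the function and numbers are made up for illustration, nothing from Google’s actual model) of how an unconstrained energy objective collapses to shutting the system down, and how a safety constraint fixes it:

```python
# Toy sketch: minimizing energy with no constraints drives the
# "optimal" action to shutting everything off.

def energy_use(fan_speed_pct: float) -> float:
    """Hypothetical cooling energy in kW as a function of fan speed."""
    return 0.5 * fan_speed_pct  # more fan speed, more energy

def naive_optimize() -> int:
    # Search fan speeds 0-100% for the lowest energy use.
    return min(range(0, 101), key=energy_use)

def constrained_optimize(min_safe_speed: int = 30) -> int:
    # Same search, but only over settings that keep servers cool.
    return min(range(min_safe_speed, 101), key=energy_use)

print(naive_optimize())        # 0 -- the model "turns everything off"
print(constrained_optimize())  # 30 -- lowest energy that is still safe
```

The real work is in encoding all the operational constraints the ops staff carry in their heads, which is exactly the detail work the toy version hides.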

Looking at the size of Carbon Relay’s technical team, I am reminded that the first models Jim ran could be performed on one PC. Time will tell if Carbon Relay can deliver on data center efficiency, but even if they have a technical solution, getting clean data out of all the BMS environments and getting a model actually used in operations is very hard.

The paper Jim published also lists Amanda Gasparik as an author. I got curious and looked her up on LinkedIn: she is a senior data center mechanical engineer who has been at Google for five years, spent eight years as a Nuclear Electronics Technician in the US Navy, and holds a master’s in systems engineering and a bachelor’s in mechanical engineering.

Add another DeepMind PhD research engineer and you have three people with a breadth of skills that impresses me much more than Carbon Relay’s team.

Women in Cloud event at Microsoft Campus building 33, Jan 26, 2019

There is a Women in Cloud event on Jan 26, 2019 at the Microsoft Campus in Redmond.

Tickets are $99-200 and you can find more information here.

It is great to see this event put together for the first time.

Seven years ago I moderated a panel discussion with some of the top people in capacity planning and asset management, which happened to be all women. It has been great to see more and more organizations support women in the data center industry.

I hope this event is the first of many more to come. Best of luck to the event organizers.

Ever wonder why no one explains what’s in the 4 TB a day of data from autonomous cars?

It is widely covered that autonomous cars generate terabytes of data a day. I use a terabyte of data a month, spread across videos, downloads, and web browsing, and I can look at statistics on where that data is used and where it goes.

Stacey on IoT writes about the mobile edge, covering the autonomous car use case and where the edge will be.

Moving 4 terabytes of data across 400 cars generates 1.6 petabytes of data a day, which is an obscene amount of data to transfer over mobile networks. Thus, edge processing is already taking place to produce insights required to drive the car, on the car itself.

In addition to much of that data being processed on the car, Vijay said Uber is also creating storage depots with fat connections that can handle the uploads of multiple cars. In areas where a cluster of self-driving cars may overwhelm the network, he suggested use of a mobile data center packed into what looked like a minivan to help process the data.

Part of me wonders if Uber and others talk up all the data from autonomous cars as a way to get others to waste their time. If they are generating 4 TB a day now, they aren’t uploading all those images over the network. It seems they are collecting lots of data and running these cars in an instrumented mode.
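A quick back-of-envelope check on those numbers shows why uploading over mobile networks is a non-starter (the 50 Mbit/s per-car uplink is my assumption, not a figure from the article):

```python
# Back-of-envelope check on the fleet data figures quoted above.
cars = 400
tb_per_car_per_day = 4

# 400 cars x 4 TB/day = 1.6 PB/day, matching the quote.
fleet_pb_per_day = cars * tb_per_car_per_day / 1000
print(fleet_pb_per_day)

# How long would one car take to upload a day's data over an
# assumed 50 Mbit/s mobile uplink?
uplink_mbps = 50
bits_per_day = tb_per_car_per_day * 1e12 * 8
upload_days = bits_per_day / (uplink_mbps * 1e6) / 86400
print(round(upload_days, 1))  # ~7.4 days to upload one day of data
```

At roughly a week of uplink time per day of driving, the storage depots with fat connections that Vijay describes are the only option that pencils out.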

Flight test instrumentation (FTI) is monitoring and recording equipment fitted to aircraft during flight test. It is mainly used on experimental aircraft, prototype aircraft and development aircraft - both military and civil, and can monitor various parameters from the temperatures of specific components to the speed of the engines. This may be displayed in the cockpit or cabin to allow the aircrew to monitor the aircraft in flight, and is usually recorded to allow the data to be analysed later.

[Image: a small modular data acquisition unit]

FTI typically monitors between 10 and 120000 parameters - for example temperatures inside the cargo and pressure distribution along the wing. FTI sources could be temperature sensors, pressure probes, voltages, current transducers, potentiometers, strain gauges, aircraft data buses or cameras.[1] These sensors may be digitalized and acquired by the data acquisition system. A cockpit audio recording may also be included. A telemetry transmitter may be added to the FTI to allow real-time monitoring of the tests from a ground-station.

-https://en.wikipedia.org/wiki/Flight_test_instrumentation

But once planes are approved for production, they don’t fly in this heavily instrumented mode. The instrumentation data was still needed, though, as part of certification to fly.

If we had details on what is in the 4 TB of data, we would know how much is needed to drive the car versus how much is needed to certify it.

5G’s lower latency: more of the improvement comes from the back end than from antenna transmission

Everyone says 5G will improve latency, but 4G and 5G signals are both electromagnetic waves that travel at the speed of light. So where does the latency improvement come from? Nokia has a paper on low latency that goes into lots of detail.

Below are a table and graphic that show where the time goes in a latency calculation.

Transmission and frame alignment account for about 3 ms of time savings. The UE (user equipment) and BTS (base station) processing times account for about 5 ms of savings.

So more of the latency savings actually come from the back-end equipment than from transmission.

[Table and graphic: Nokia latency breakdown]

OK, since I brought up transmission latency, I might as well keep going. If you want to know where the transmission latency savings come from: the current standard requires a connection to be established before data can be sent, switching between idle and connected modes. In 5G a connection is established once and can then be suspended and resumed, which is much quicker.

This applies on the same spectrum, changing only the 4G versus 5G infrastructure, so the new spectrum is not the issue. It is the transmission, connection handling, and back end that improve the speeds.

[Screenshot: connection setup comparison]

5G is the first network designed for data first; voice is a much lower priority

Ars Technica has a good article on why buying a 5G smartphone may not be a good move for a while. One of the main ideas to get in your head is that 5G uses multiple spectrum bands.

Below is a graphic from the Ars Technica article showing the millimeter-wave spectrum that 5G adds on top of 4G.

[Graphic from Ars Technica: mmWave spectrum added on top of 4G]

Lots more spectrum sounds good, but if you thought you had problems with 4G LTE coverage, mmWave 5G is going to be worse inside a building or a car. To get 5G coverage you will need an antenna outside or an internal antenna system connected to the network.

Still a bit confused?

How about this as a way to understand 5G: the 4G LTE spectrum will reach you from 10 miles away, while mmWave can require being within 1,000 feet of an antenna. Handing off a voice call between that many antennas would be hard.
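A rough area calculation shows the scale of that handoff problem (using the 10-mile and 1,000-foot figures above and idealizing cells as circles of equal radius):

```python
# Rough cell-count comparison implied by the coverage ranges.
lte_range_ft = 10 * 5280   # ~10-mile LTE reach, in feet
mmwave_range_ft = 1000     # ~1,000-foot mmWave reach

# Covering the same area with smaller circles needs (R/r)^2 of them.
cells_needed = (lte_range_ft / mmwave_range_ft) ** 2
print(round(cells_needed))  # ~2788 mmWave cells per LTE cell
```

Thousands of small cells where one tower used to suffice is why a moving voice call is such an awkward fit for mmWave.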

One study I haven’t seen yet: even though 5G latency is dramatically better than 4G’s, what is the latency and throughput impact when you jump from one 5G antenna to another?

5G will span LTE, 802.11, and mmWave technologies. Do you think all of this will be addressed in 2019? I don’t think so.