5G requires more power for higher speeds, and that means more heat for 5G systems

Ars Technica has a good article on 5G speeds and a barrier that shows up in the summer. Why summer? It is hotter in many parts of the world this summer, and 5G hardware runs hotter than 4G LTE.

Bottom line:

"This persistent overheating behavior just makes me more confident in recommending that consumers wait to buy a 5G phone."

The devices are bigger, hotter, more expensive, and have less battery life than their 4G counterparts.

Any data center person knows that higher-performing systems need more cooling. I have not seen a new cooling system in a 5G phone yet, which means the phone itself is the heat sink: it just gets hotter, and when it gets too hot it has to throttle back or shut off.
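On iOS there is already a hook for exactly this kind of throttle-back behavior. Here is a minimal sketch, assuming an app that watches ProcessInfo's thermal state and sheds work as the device heats up; the print statements stand in for whatever work a real app would shed.

```swift
import Foundation

// Minimal sketch: observe the device's thermal state and back off as it climbs.
// The print statements stand in for whatever work the app would actually shed.
final class ThermalWatcher {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.respond(to: ProcessInfo.processInfo.thermalState)
        }
        respond(to: ProcessInfo.processInfo.thermalState)
    }

    private func respond(to state: ProcessInfo.ThermalState) {
        switch state {
        case .nominal, .fair:
            print("Thermals OK: run at full rate")
        case .serious:
            print("Getting hot: throttle back background work")
        case .critical:
            print("Critical: shed everything non-essential")
        @unknown default:
            print("Unknown thermal state: be conservative")
        }
    }
}
```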

And at the 5G cell tower, you can expect power consumption and heat to increase as well. 5G infrastructure could easily be an order of magnitude higher in power consumption if adding power and cooling were easy, but it is not.

The overall power consumption of 5G could be significant enough to limit its growth and adoption.

Ever wonder why no one explains what is in the 4 TB a day of data for autonomous cars

It is widely covered that autonomous cars generate terabytes of data a day. I use about a TB of data a month, that data is spread across videos, downloads, and web browsing, and I can look at statistics on where that data is used and where it goes.

Stacey on IoT writes about the mobile edge and goes over the autonomous car use case and where the edge will be.

Moving 4 terabytes of data across 400 cars generates 1.6 petabytes of data a day, which is an obscene amount of data to transfer over mobile networks. Thus, edge processing is already taking place to produce insights required to drive the car, on the car itself.

In addition to much of that data being processed on the car, Vijay said Uber is also creating storage depots with fat connections that can handle the uploads of multiple cars. In areas where a cluster of self-driving cars may overwhelm the network, he suggested use of a mobile data center packed into what looked like a minivan to help process the data.
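To put those numbers in perspective, here is a quick back-of-the-envelope sketch. It assumes the article's figures of 4 TB per car per day and a 400-car fleet, decimal units (1 TB = 10^12 bytes), and a perfectly even upload spread over 24 hours.

```swift
// Back-of-the-envelope: what would uploading 4 TB per car per day require?
// Assumes decimal units (1 TB = 10^12 bytes) and the article's 400-car fleet.
let bytesPerCarPerDay = 4.0e12
let cars = 400.0
let secondsPerDay = 86_400.0

let fleetBytesPerDay = bytesPerCarPerDay * cars                    // 1.6e15 bytes = 1.6 PB
let perCarMbps = bytesPerCarPerDay * 8 / secondsPerDay / 1.0e6     // sustained uplink per car
let fleetGbps = fleetBytesPerDay * 8 / secondsPerDay / 1.0e9       // sustained uplink for the fleet

print("Fleet data per day: \(fleetBytesPerDay / 1.0e15) PB")       // 1.6 PB
print("Sustained uplink per car: \(perCarMbps) Mbit/s")            // ~370 Mbit/s
print("Sustained uplink for the fleet: \(fleetGbps) Gbit/s")       // ~148 Gbit/s
```

Roughly 370 Mbit/s of sustained uplink per car, around the clock, is not something you plan to push over a mobile network, which is why the processing happens on the car.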

Part of me wonders if Uber and others talk up all the data for autonomous cars as a way to get others to waste their time. If they are generating 4 TB a day now, they certainly aren't uploading all of it over the network. It seems more likely that they are collecting lots of data because these cars are running in an instrumented mode.

Flight test instrumentation (FTI) is monitoring and recording equipment fitted to aircraft during flight test. It is mainly used on experimental aircraft, prototype aircraft and development aircraft - both military and civil, and can monitor various parameters from the temperatures of specific components to the speed of the engines. This may be displayed in the cockpit or cabin to allow the aircrew to monitor the aircraft in flight, and is usually recorded to allow the data to be analysed later.


FTI typically monitors between 10 and 120000 parameters - for example temperatures inside the cargo and pressure distribution along the wing. FTI sources could be temperature sensors, pressure probes, voltages, current transducers, potentiometers, strain gauges, aircraft data buses or cameras. These sensors may be digitalized and acquired by the data acquisition system. A cockpit audio recording may also be included. A telemetry transmitter may be added to the FTI to allow real-time monitoring of the tests from a ground-station.

— https://en.wikipedia.org/wiki/Flight_test_instrumentation

But once planes are approved for production, they don't fly in this heavily instrumented mode. They did need the instrumentation data, though, as part of certification to fly.

If we had details on what is in the 4 TB of data, we would know how much is needed to drive the car versus how much is needed to certify the car.

Google shares its observations on Best Practices for AR

AR is a hot topic, and Google has a post sharing its observations on best practices.

“From our own explorations, we’ve learned a few things about design patterns that may be useful for creators as they consider mobile AR platforms. For this post, we revisited our learnings from designing for head-mounted displays, mobile virtual reality experiences, and depth-sensing augmented reality applications. First-party apps such as Google Earth VR and Tilt Brush allow users to explore and create with two positionally-tracked controllers. Daydream helped us understand the opportunities and constraints for designing immersive experiences for mobile. Mobile AR introduces a new set of interaction challenges. Our explorations show how we’ve attempted to adapt emerging patterns to address different physical environments and the need to hold the phone throughout an entire application session.”

It’s a good summary of issues that are kind of obvious when you start down the path of building solutions.

1 week with iPhone X, studying Face Tracking with ARKit

It’s been a week with my iPhone X, and I am drilling into the technical details of what iOS 11 does with it. One of the big features is facial recognition. Curious, I watched the Apple Developer video on “Face Tracking with ARKit”: https://developer.apple.com/fall17/601

 


Everyone focuses on the first use case: facial recognition and doing things like Animoji.

With expression tracking, you could use Face Tracking as UI input.

This is where ARFaceAnchor can be used to get the position and orientation of the user's face, its 3D topology, and facial expressions. Everyone is a bit different in how they react, but people are generally consistent in their facial expressions.
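For the curious, here is a minimal sketch of what that looks like in code: a face-tracking session whose delegate reads the ARFaceAnchor transform and a couple of blend shape coefficients that could drive UI. The gesture thresholds are arbitrary values for illustration.

```swift
import ARKit

// Minimal sketch: run a face-tracking session and read ARFaceAnchor data.
// The 0.6 / 0.5 gesture thresholds are arbitrary values for illustration.
final class FaceInputController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return } // needs the TrueDepth camera
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Position and orientation of the face in world space.
            let transform = faceAnchor.transform

            // Expression coefficients (0.0 to 1.0) that could be used as UI input.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let browUp = faceAnchor.blendShapes[.browInnerUp]?.floatValue ?? 0

            if jawOpen > 0.6 { print("Mouth-open gesture at \(transform.columns.3)") }
            if browUp > 0.5 { print("Eyebrows-raised gesture") }
        }
    }
}
```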

 


If you don’t think this is possible, check out the list of facial tracking blend shapes ARKit exposes: more than 50 coefficients, covering everything from jaw movement to eyebrow position.

 


In operations, lighting has a direct impact on the quality, accuracy, and speed of work. And you could use Face Tracking to get a reading of the lighting in the environment.
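ARKit already hands back a light estimate with every frame of a face-tracking session, so a reading of the environment's lighting is nearly free. A small sketch is below; the 500-lumen "too dark" threshold is an arbitrary illustration, with roughly 1000 lumens being ARKit's neutral value.

```swift
import ARKit

// Sketch: read ARKit's per-frame light estimate during a face-tracking session.
// The 500-lumen "too dark" threshold is an arbitrary value for illustration.
final class LightingMonitor: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let config = ARFaceTrackingConfiguration()
        config.isLightEstimationEnabled = true
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let estimate = frame.lightEstimate else { return }
        print("Ambient intensity: \(estimate.ambientIntensity) lumens, color temperature: \(estimate.ambientColorTemperature) K")
        if estimate.ambientIntensity < 500 {
            print("Environment looks too dark for accurate work")
        }
    }
}
```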

 


And last, with the microphone support used for Animoji, you can use the same session to capture audio as well.
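A sketch of that is below, assuming the same kind of face-tracking session: setting providesAudioData on the configuration asks ARKit to vend microphone sample buffers to the session delegate, and what the app does with the buffers (record, stream, analyze) is up to it.

```swift
import ARKit
import CoreMedia

// Sketch: capture microphone audio alongside face tracking in one ARKit session.
// Requires microphone permission; the print is a stand-in for real audio handling.
final class FaceAndAudioCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let config = ARFaceTrackingConfiguration()
        config.providesAudioData = true
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        print("Received audio buffer with \(CMSampleBufferGetNumSamples(audioSampleBuffer)) samples")
    }
}
```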

 


Microsoft focuses on 3 areas for Mobile, not including the youth market

Microsoft announced a change in its mobile focus, narrowing to three market segments.

We plan to narrow our focus to three customer segments where we can make unique contributions and where we can differentiate through the combination of our hardware and software. We’ll bring business customers the best management, security and productivity experiences they need; value phone buyers the communications services they want; and Windows fans the flagship devices they’ll love.

In the above there is no mention of the youth market. Teenagers and college students are some of the most intense mobile users, and Microsoft isn't targeting them. What is missing are the apps the youth market uses. Microsoft's strategy is enterprise, where many customers will want Office. Given that the Office apps are already on iOS and Android, the value Microsoft is providing is in management, security, and productivity. Management and security sound like the rallying cry of BlackBerry.

The kids of Microsoft employees have increasingly been convincing their parents that they want an iPhone, not a Windows phone. Why? Because of the apps. Microsoft won the battles of DOS and Windows against its rivals with the availability of apps. The losers were OSs like CP/M that couldn't compete because they lacked apps.

Microsoft making layoffs in the summer is turning into an annual event. Microsoft wrote off $7.6 billion and the stock didn't budge.

Microsoft Corp. plans to cut as many as 7,800 jobs and write down about $7.6 billion on its Nokia phone-handset unit, wiping out nearly all of the value of a business it acquired just 14 months ago.
— http://www.bloomberg.com/news/articles/2015-07-08/microsoft-to-cut-7-800-jobs-as-it-restructures-phone-business