Take a test to measure your statistical and risk literacy

Running data centers and IT so often comes down to making decisions about risk. The BBC has an article on risk and uncertainty.

Whether you are a doctor trying to decide whether to trial a new treatment, a CEO trying to forecast business post-Brexit, or you simply want to know how to interpret the weather forecast, the capacity to weigh up different potential outcomes is essential for good decision-making.

Unfortunately, many people are surprisingly bad at this. Luckily, a very short test – called the Berlin Numeracy Test – now allows you to assess your ability to cope with risk and uncertainty.

Before you read on, you might want to try the test yourself. It takes just five minutes to complete and at the end you will discover how your own “risk literacy” compares to the average person.
Click here to measure your risk literacy
— http://www.bbc.com/future/story/20180814-how-we-should-think-about-uncertainty

Here is a test the BBC article references. Dr. Edward Cokely is the sponsor of the research.

What is the purpose of this research? The purpose of this research is to advance the science for informed decision making, which aims to make information about the risks and consequences of decisions more intuitive and understandable for diverse decision makers.
— https://ousurvey.qualtrics.com/jfe/form/SV_1XMoRYdGvZ7GdlH

I took the test at 3 AM when I couldn't go back to sleep. Pleasantly, I got a perfect score. The study still cautions me that I should take care, and that I may want to double-check my calculations or seek additional advice when it comes to important decisions involving risks and statistics.

These are good words of advice for data center projects. It would be interesting if everyone took the test to give them feedback on their statistical and risk literacy.
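The kind of statistical reasoning such tests measure often comes down to base rates and conditional probability. Here is a hypothetical example (not a question from the actual Berlin Numeracy Test) showing why intuition often fails: a rare condition plus an imperfect test means most positive results are false alarms.

```python
# Hypothetical risk-literacy example: a condition with 1% prevalence,
# a test with 90% sensitivity and a 9% false-positive rate.
# What fraction of positive test results are true positives?

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule expressed as natural frequencies."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.01, 0.90, 0.09)
print(f"{ppv:.0%} of positive results are true positives")  # about 9%
```

Despite the test being "90% accurate," only about one in eleven positive results is real, because the condition is rare to begin with.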

Google's data center AI puts safety first just like airplanes fly-by-wire while saving 30% of cooling energy

Google has a post on its latest application of AI from the DeepMind group to its data center cooling systems. The tech media covered the post widely. Here is a full coverage link.



Google chose to emphasize that the system was designed for safety, as indicated in the title of its post.

Safety-first AI for autonomous data center cooling and industrial control
— https://www.blog.google/inside-google/infrastructure/safety-first-ai-autonomous-data-center-cooling-and-industrial-control/

The following graphic illustrates how the safety principles used resemble a fly-by-wire system.

While traditional mechanical or hydraulic control systems usually fail gradually, the loss of all flight control computers immediately renders the aircraft uncontrollable. For this reason, most fly-by-wire systems incorporate either redundant computers (triplex, quadruplex etc.), some kind of mechanical or hydraulic backup or a combination of both. A “mixed” control system with mechanical backup feedbacks any rudder elevation directly to the pilot and therefore makes closed loop (feedback) systems senseless.
— https://en.wikipedia.org/wiki/Fly-by-wire#Efficiency_of_flight
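The fly-by-wire analogy boils down to never letting the AI act outside a verified envelope. Below is a minimal sketch of that idea (not Google's implementation; the bounds and fallback value are assumptions for illustration): an AI controller proposes a cooling setpoint, and a simple verifier clamps it to operator-defined safety limits, falling back to a conservative default if the proposal is invalid.

```python
# Assumed safety envelope for a supply-air temperature setpoint (deg C).
SAFE_MIN_C, SAFE_MAX_C = 18.0, 27.0
FALLBACK_SETPOINT_C = 22.0  # assumed conservative default

def verified_setpoint(proposal_c):
    """Return a setpoint guaranteed to lie inside the safety envelope.

    Invalid or non-numeric AI proposals fall back to a safe default,
    mirroring how fly-by-wire systems keep a backup control path.
    """
    try:
        value = float(proposal_c)
    except (TypeError, ValueError):
        return FALLBACK_SETPOINT_C
    if value != value:  # reject NaN proposals
        return FALLBACK_SETPOINT_C
    return min(max(value, SAFE_MIN_C), SAFE_MAX_C)

print(verified_setpoint(30.0))   # clamped to the safe maximum, 27.0
print(verified_setpoint(None))   # invalid input falls back to 22.0
```

The key design choice is that the verifier is simple enough to trust completely, so the complex AI can be wrong without the facility ever leaving safe operating conditions.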

It has been a pleasure watching Jim Gao make progress with AI in data center cooling, and you can bet there will be much more coming.

Perspective on the Edge Data Center from Dell, Part 3

It can be hard to get a perspective on how companies are developing their edge data centers. Reading through the websites and listening to sales pitches about the current offering explains the present, but rarely provides a perspective.

One way to get a perspective is to look at what has been presented in the past. Thanks to YouTube it can be easy to find the history.

Here is one example from Dell's data center group.

Here is a video from 2017 with Ty Schmitt and Mark Bailey discussing their edge solution.

Now shift to 2018 and here is Mark with another video.

If you are looking at an edge solution, check out the YouTube videos from the edge data center team and watch what they have been saying over the years. These videos can provide a perspective that the website and salespeople neglect to explain.

Networking the Wireless and Edge/Micro Data Centers, Part 2

Almost everyone in the data center industry is looking to move to the edge. What is the role of the edge data center in the overall system? The answer comes when you ask what the network will look like in the future.

This video is from Cisco for its Intent Based Networking initiative.

Much of what Cisco markets is what the telcos want for the future of their cellular networks, especially as they get ready for 5G, which is a market for the edge data center.

When you have a network strategy to transform your operations, it becomes clearer how edge data centers fit.

Starting a Series of Blog Posts on Wireless and Edge/Micro Data Centers, Part 1

In 2010 I wrote about containers being put at Cell Tower sites. Over the past couple of years there has been lots of excitement about edge/micro data centers.

One interesting pain point explaining why cell site IT infrastructure needs to be improved is that the sites have a PUE of 2.0. https://www.zdnet.com/article/what-is-5g-everything-you-need-to-know/

Cooling and the costs associated with facilitating and managing cooling equipment, according to studies from analysts and telcos worldwide, account for more than half of telcos' total expenses for operating their wireless networks. Global warming (which, from the perspective of meteorological instrumentation, is indisputable) is a direct contributor to compound annual increases in wireless network costs. Ironically, as this 2017 study by China's National Science Foundation asserts, the act of cooling 4G LTE equipment alone may contribute as much as 2 percent to the entire global warming problem.




China Mobile's breakdown of its annual capital and operational expenditures for maintaining one 3G base station. 

(Image: China Mobile)
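To see why a PUE of 2.0 is such a pain point, it helps to put numbers on it. PUE is total facility energy divided by IT energy, so at 2.0 every kilowatt-hour of useful IT work costs a second kilowatt-hour of overhead, mostly cooling. The figures below are illustrative, not from the article.

```python
# PUE = total facility energy / IT energy, so overhead = IT * (PUE - 1).

def overhead_kwh(it_kwh, pue):
    """Energy spent on cooling and other non-IT overhead."""
    return it_kwh * (pue - 1)

# Hypothetical site drawing 10,000 kWh of IT load in a period:
print(overhead_kwh(10_000, 2.0))  # cell site at PUE 2.0: as much overhead as IT load
print(overhead_kwh(10_000, 1.2))  # modern large data center at PUE 1.2: far less
```

Cutting PUE from 2.0 toward the levels of large cloud facilities would by itself remove most of the cooling cost the telcos complain about.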

A strategy to fund 5G deployments is to dramatically reduce the cost of cell site infrastructure.

Moving BBU processing to the cloud eliminates an entire base transmission system (BTS) equipment room from the base station (BS). It also completely abolishes the principal source of heat generation inside the BS, making it feasible for much, if not all, of the remaining equipment to be cooled passively — literally, by exposure to the open air. The configuration of that equipment could then be optimized, like the 5G trial transmitter shown above, constructed by Ericsson for Japan’s NTT DOCOMO. The goal for this optimization is to reduce a single site’s power consumption by over 75 percent.

What’s more, it takes less money to rent the site for a smaller base station than for a large one. Granted, China may have a unique concept of the real estate market compared to other countries. Nevertheless, China Mobile’s figures show that rental fees with C-RAN were reduced by over 71 percent, contributing to a total operational expenditure (OpEx) reduction for the entire base station site of 53 percent.
— https://www.zdnet.com/article/what-is-5g-everything-you-need-to-know/

With the power consumption problem at cell sites and the drive to change cell site hardware infrastructure to be cloud based, supporting a range of 40 km, how many edge data centers are needed for a given area?
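A rough geometric estimate gives a feel for the numbers. Assuming, as a simplification, that each cloud cell site (C-RAN hub) covers a circle of the stated 40 km radius, and ignoring terrain, capacity, and overlap, the count for a metro-sized area is small.

```python
import math

def hubs_needed(area_km2, range_km):
    """Minimum hubs to cover an area, treating each hub's
    coverage as an ideal circle of the given radius."""
    coverage_per_hub = math.pi * range_km ** 2  # ~5,027 km^2 at 40 km
    return math.ceil(area_km2 / coverage_per_hub)

# Hypothetical 10,000 km^2 metro region with a 40 km range:
print(hubs_needed(10_000, 40))  # -> 2
```

Real deployments would need more hubs for capacity and redundancy, but the geometry shows why a handful of edge data centers can serve many towers.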

Having fewer cloud cell sites supporting multiple towers looks like the direction. When I wrote about containers at cell sites in 2010, I also imagined a container supporting multiple cell towers.

Some people get excited about low latency being on the edge. Urs Hoelzle, at one of the last Structure events, made the observation that people are overestimating the business value of latency. Will users pay for sub-5 ms latency, or is 10 ms fine? Light travels 300 kilometers (186 miles) in 1 millisecond.
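The latency argument is simple propagation arithmetic. In fiber, light travels at roughly two-thirds of its vacuum speed, so each millisecond of one-way budget buys about 200 km of distance. The figures below use that approximation.

```python
# Propagation delay from distance: light in vacuum covers ~300 km/ms,
# light in fiber roughly 200 km/ms (about 2/3 c, varies by fiber).

C_FIBER_KM_PER_MS = 200.0  # approximate

def one_way_delay_ms(distance_km, speed_km_per_ms=C_FIBER_KM_PER_MS):
    """Propagation-only delay; ignores routing, queuing, and processing."""
    return distance_km / speed_km_per_ms

print(one_way_delay_ms(1_000))  # 1,000 km of fiber: 5.0 ms one way
print(one_way_delay_ms(100))    # 100 km: 0.5 ms one way
```

At these speeds, a data center a few hundred kilometers away already sits within a 5 ms budget, which is why the pure-latency case for hyper-local edge sites is weaker than it first appears.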