An example of applying abstraction to IT asset management to make it more powerful

Years and years ago I went to IAITAM with friends from three of the big five data center companies (GAFAM) who all work on asset management. IAITAM is the International Association of IT Asset Managers. At the conference I realized that most of the presentations and attendees were focused on how to count assets, record them, and report on a regular schedule to align with financial systems. This isn't quite what I think of as asset management, but it is a critical part.

A couple of weeks ago I watched two great presentations: Cheng Lou on "The Spectrum of Abstraction" and Barbara Liskov on "The Power of Abstraction".

JavaScript and the React community have evolved over the years through all the ups and downs. This talk goes over the tools we've come to recognize, from Angular, Ember and Grunt, all the way to Gulp, Webpack, React and beyond, and captures all these in a unifying mental framework for reasoning in terms of abstraction levels, in an attempt to make sense of what is and might be happening.
Barbara Liskov, Electrical Engineering and Computer Science, MIT, MA This lecture has been videocast from the Computer Science Department at Duke. The abstract of this lecture and a brief speaker biography are available at: http://research.csc.ncsu.edu/colloquia/seminar-post.php?id=308

So what happens if you apply the ideas that Cheng and Barbara shared to asset management? How do you apply abstraction to asset management? Start with abstraction in software engineering. https://en.wikipedia.org/wiki/Abstraction_(software_engineering)

Break the asset management problem into parts. One part is counting things. You could count the number of HP DL380s in an area, but since you are counting DL380s for a depreciation schedule, you need to identify individual DL380s, not just the total number, so you need a unique ID system. Most would apply an asset tag, maybe a bar code, or think they are being more advanced by using RFID tags. The flaw with this method is that if you apply the asset tags incorrectly, they fall off, or a data entry error is made in the original record creation, it can be extremely hard to catch the error, and you are counting incorrectly for the life of the asset. So let's abstract the asset identification problem to a virtual asset ID that can be automated and is near perfect in its identification method. Oh, and make it so there is a REST API to identify an asset programmatically.
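A minimal sketch of what such a virtual asset ID could look like, assuming the server can report its own serial number and NIC MAC addresses. All the attribute names, values, and the endpoint path below are hypothetical, not any vendor's actual API:

```python
import hashlib
import json

def virtual_asset_id(attributes: dict) -> str:
    """Derive a stable virtual asset ID from machine-reported attributes.

    Because the ID is computed from identifiers the hardware reports
    about itself (serial number, NIC MACs), it cannot fall off or be
    mistyped the way a physical asset tag can.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# A server reports its own identity; no human data entry involved.
server = {
    "model": "HP DL380",
    "serial": "CZJ1234ABC",
    "macs": ["3c:a8:2a:00:11:22", "3c:a8:2a:00:11:23"],
}
asset_id = virtual_asset_id(server)
# A REST endpoint such as GET /assets/{asset_id} could then return
# the asset record without anyone walking an aisle with a scanner.
```

Hashing the sorted attributes means the same hardware always produces the same ID, which is the "near perfect" property the physical tag lacks.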

If you can do all of the above, then change the way assets are counted to be run by a microservice. Counting each DL380's unique ID on a network means it is in a given space. If you know the network, then you know the location. If you had a perfect accounting of network cables, then you could determine location based on cables and network ports.
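Given a recorded cable plant, location becomes a lookup from where an asset appears on the network. The switch names and rack labels below are invented for illustration:

```python
# Hypothetical cable-plant records: (switch, port) -> physical location.
PORT_LOCATION = {
    ("tor-sw-07", "eth1/12"): "DC1 / Room A / Rack 42 / U17",
    ("tor-sw-07", "eth1/13"): "DC1 / Room A / Rack 42 / U18",
}

def locate_asset(switch: str, port: str) -> str:
    """Resolve a server's physical location from where it shows up on the network."""
    return PORT_LOCATION.get((switch, port), "unknown")

# A counting microservice would see each DL380 on a switch port and
# resolve its location without anyone reading a bar code.
```

The accuracy of this approach is only as good as the cable records, which is exactly why the abstraction depends on a perfect accounting of the network.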

Now you may say this is way too hard for your legacy environment, which is why people get stuck with asset tags and having people walk down the aisles reading bar codes. The Art of Abstraction can be applied to parts of the problem. If you started on Jan 1, 2017 with all new assets, then at least those can be counted with an abstraction approach. Wouldn't you like to know there are areas of the data center where you have automated asset management? I would.

This is my plan to change asset management with abstraction. I left out some details because this post would get way too long, but you get the idea.

Studying Logistics to Gain Insight - WSJ Logistics Report

From my early days studying Industrial Engineering and Operations Research at UC Berkeley, I have enjoyed logistics and optimization. Why? I guess it was applied mathematics for real-world problems. My dad was an engineer, and I liked problem solving.

Logistics touches so many things in life. Things moving through space and time.

My daily read is WSJ's Logistics Report. http://www.wsj.com/news/logistics-report

I didn't know that Google has a patent for self-driving delivery trucks.

Google filed a patent for self-driving delivery trucks. (Quartz) http://qz.com/613277/google-wants-to-deliver-packages-from-self-driving-trucks/

A new patent awarded to Google today suggests that the search giant is looking into developing self-driving delivery trucks, just as Amazon readies its autonomous delivery drone fleet.
Google’s patent outlines what it calls an “autonomous delivery platform” for delivery trucks. The trucks would be fitted with a series of lockers that could potentially be unlocked with a PIN code sent to the person waiting for the delivery before the truck arrives at their location. The patent also suggests the locker could be unlocked by a customer’s credit card, or an NFC reader. After the package is dropped off, the truck will continue on to its next delivery point, or return to the depot to pick up more packages.

Asset Management is broken almost everywhere

A couple of years ago I went to an IT Asset Management conference, thinking I would learn how asset management worked in the industry.  What I quickly realized is that something was amiss.  Almost all of these people were not even close to being technical.  Talking to some of the people and watching the presentations, I eventually discovered that the vast majority of the approach was targeted at the person who was eventually given the job of asset management.  And this person was given no tools and no budget, with limited staff.  They had Excel-spreadsheet-style databases, and they didn't really understand the assets themselves and how they interact.  Very few system architects understand how the hardware and software work together; the task for a bookkeeper type of person to understand an asset is pretty much impossible.

I was reading Chris Crosby's post on gov't in action.

Good news. Three years into their five-year data center consolidation project the federal government just found out they had 3,000 more data centers than they originally thought. Boy, you just can’t slide anything past these guys. As a side note, this should make all of you worried about the NSA tracking your phone calls feel better, since if it takes 3 years to find 3,000 buildings, odds are they aren’t going to find out about your weekly communiqués to Aunt Marge–or even “Uncle” Bookie—any time soon. Obviously, this new discovery is going to have some impact on the project, but I think the first question we have to ask is just where have these data centers been hiding all this time?

If you want more details on the Federal Gov't's discovery.

The one group that is pointed out as on top of the inventory is DHS.

The Department of Homeland Security is one of the few agencies that seems to have a handle on its data center inventory. Powner and Coburn praised DHS as the gold standard for data center consolidation because the agency has successfully tracked its data center inventory, how many have been consolidated and how much has been saved by consolidating facilities.

The DHS is one of the newest and best-funded gov't agencies.

In fiscal year 2011, DHS was allocated a budget of $98.8 billion and spent, net, $66.4 billion.

So there are groups who do have a handle on their data center inventory, but many have the benefit of big budgets and newer organizations than the rest.

There is something fundamentally wrong with the way asset management is done.  And once you see why it is broken, then you can start to figure out how to fix it.

An example of the wrong approach: the person in charge of asset management has a long list of items to track for an asset, creating a long form that almost no one understands, including themselves. Many times that long form is then used in data entry and reports, and affects the performance of the database.  What is missing is an overall way to reconcile the errors entered into the system.  What kind of errors? A simple thing like the location of the asset. Someone enters the location of an asset into the long form.  Data accepted, move on to the next boring data entry.  When errors are made, they accumulate, as there is a lack of regular reconciliation methods to catch them.  Annual audits are performed, but the audits themselves have errors.
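A regular reconciliation pass could be as simple as comparing the recorded location of each asset against an independently observed one (say, derived from the network) and flagging every disagreement. The asset IDs and rack names here are made up:

```python
def reconcile(recorded: dict, observed: dict) -> list:
    """Return assets whose recorded location disagrees with the observed one."""
    discrepancies = []
    for asset_id, recorded_loc in sorted(recorded.items()):
        observed_loc = observed.get(asset_id)  # None if never observed
        if observed_loc != recorded_loc:
            discrepancies.append((asset_id, recorded_loc, observed_loc))
    return discrepancies

recorded = {"a1": "Rack 42", "a2": "Rack 07", "a3": "Rack 13"}
observed = {"a1": "Rack 42", "a2": "Rack 09"}  # a3 was not seen at all
errors = reconcile(recorded, observed)
# errors -> [("a2", "Rack 07", "Rack 09"), ("a3", "Rack 13", None)]
```

Run continuously instead of once a year, a check like this catches a data entry error within days instead of letting it accumulate until the annual audit.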

Most people care about only a very limited set of information about an asset, if at all.  Yet filling out an asset management form can be one of the most tedious, mind-numbing tasks in IT.  Who wants to do that?  Oh, there is someone in finance, purchasing, or operations who has that responsibility.  Tell them they have the asset management job.

Being asset manager is a thankless job at most companies.  Yet how much could be done if you really knew with 100% accuracy when assets were installed, brought online, and repaired, and how their physical presence affected the overall performance of the system?  Asset management is really important to a few companies.  Who?  I would say that Google understands it best, as they are one of the youngest and have the most servers.  Also, since they build their own servers, they need to understand every asset.  They asset manage the components, something almost no one does.  Well, Facebook manages the server components as well, because they have Frank Frankovsky, who is ex-Dell and understands logistics.

What both Google and Facebook understand is that IT Asset Management is part of the overall logistics of delivering IT services.  How can you not know with near 100% accuracy what you have and how it performs?

How many Data Center Experts are confident, but wrong?

I missed Daniel Kahneman speaking in Seattle by a day, so I am going back and looking at videos and articles.  The Seattle Times has an article on his talk.

Exploring how we truly think

I had a feeling Daniel Kahneman was going to be interesting. My gut was right, but it isn't always, and that was the point of his talk. A lot of our thinking is messed up, but we don't know it unless we slow down and examine what our brains are doing. That's not easy to do. Kahneman is a Princeton psychologist (emeritus), who won the 2002 Nobel Prize in economics for pioneering work showing that people don't always make rational financial decisions.

Seattle Times staff columnist


Economists thought we did, that we weighed the facts and acted in our own best interests, but people are more complicated than that.

Does this sound familiar? A potential problem that gets swept under the rug.

When he was a young psychologist, Kahneman was put in charge of evaluating officer candidates in the Israeli Defense Force. He and his team put candidates through an exercise and saw immediately who was a leader, who was lazy, who was a team player and so on.

Much later, they got data from the soldiers' actual performance, and it turned out his team's predictions were all wrong. The experts were absolutely confident, but wrong.

Even experts take a bit of information and believe it can predict more about a person than is possible.

System 1, he said, "is a machine for jumping to conclusions." System 2 is supposed to monitor System 1 and make corrections, but System 2 is lazy.

We think we are actively evaluating then acting, but most of the time we act on unexamined input from System 1.

What is an example? Think about the assumptions made on hardware purchases: how often do people go back and evaluate the true performance of the hardware after deployment?