Recently, I’ve been thinking about why we haven’t come all that far in machine learning and artificial intelligence over the past decade. Today, Keen Browne, Bonsai’s Co-founder and Head of Product, summed it up for me concisely: “The tools really suck and are meant for scientists and mathematicians, not for people who work for the line of business.”

The reason I needed this wake-up call is simply that I’ve been thinking about the problem the wrong way. It baffled me that there has been so little progress in mainstreaming machine learning (ML) and artificial intelligence (AI), and I believed this was because AI and ML just aren’t advanced enough to be applied to many real-life problems in a meaningful way. To me, this has always been a technology challenge, more than anything else.

However, there could be much more truth to Keen’s statement “…because the tools really suck…” than to the belief that “AI/ML technology just isn’t that far advanced yet.” When you look at Amazon’s excellent Echo/Alexa or IBM’s Jeopardy-playing Watson and compare these two to what we see in terms of intelligence in IT operations management (or in general enterprise software), we notice a dramatic gap.

In other words, why are these super-smart platforms not able to help plan and manage IT operations, or help my sales guy decide which prospect should be bothered with what type of proposal and when? Of course all of this is possible, so why isn’t it being done?

The reason is that today, in order to leverage ML and AI, you have to be a programmer (there are millions) who is also a data scientist (there aren’t many). While IBM has started making AI/ML development tools more accessible through the Bluemix/Watson SDKs, available for all major development languages, my own experience from last weekend showed me why an SDK with a few code samples is simply not enough. As I started training Watson, I noticed three things:

1)    ML/AI is a black box: I started using the Python library for Bluemix/Watson, but as someone without extensive implementation experience in this arena, I didn’t know exactly what was going on when Watson dissected my training documents. This weekend, I frequently caught myself thinking “oh, fascinating results, but why was this text passage picked as a highly ranked result, while another one wasn’t?”

2)    ML/AI works at the macro level: I typically train the ML/AI engine by selecting a large number of questions and then providing answers to these questions to help the sophisticated algorithms identify the underlying patterns specific to my domain (basically, the machine learns “my way of looking at things”). What I cannot do is break down the problem into segments and then see and fine-tune how each segment influences the decision flow.

3)    I can’t predict the outcome: Often, when I talk to developers about the incredible potential of AI/ML, they ask me the same questions: “Are you sure this applies to our problem? What if we spend a ton of time on this and it then turns out the results aren’t meaningful, or are the same as, or worse than, a traditional keyword-based search?” Of course, many projects that could benefit from AI/ML are discarded because of this uncertainty.
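To make that “black box” feeling concrete, here is a toy passage ranker in plain Python. This has nothing to do with Watson’s actual pipeline; it just reproduces the experience: a confident-looking ranking comes out, but each passage carries only a single opaque score, with nothing explaining why one result beat another.

```python
import math
from collections import Counter

def rank_passages(query, passages):
    """Rank passages against a query with a bag-of-words cosine score.
    Each passage gets one opaque number: nothing in the output explains
    *why* one passage outranks another."""
    docs = [p.lower().split() for p in passages]
    q = query.lower().split()
    n = len(docs)
    # idf: rarer words weigh more (one of many hidden knobs)
    vocab = set(q) | {w for d in docs for w in d}
    idf = {w: math.log(n / (1 + sum(w in d for d in docs))) + 1 for w in vocab}

    def vec(tokens):
        tf = Counter(tokens)
        return {w: tf[w] * idf[w] for w in tf}

    def cos(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(q)
    return sorted(((cos(qv, vec(d)), p) for d, p in zip(docs, passages)),
                  reverse=True)

passages = [
    "restart the database service after a failover",
    "the sales pipeline report is generated nightly",
    "failover procedures for the primary database cluster",
]
for score, passage in rank_passages("database failover", passages):
    print(f"{score:.3f}  {passage}")
```

Even in this thirty-line example, the score already depends on tokenization, term frequency, and inverse document frequency interacting; in a real ML/AI engine the equivalent internals are vastly larger and entirely hidden, which is exactly why I couldn’t answer my own “why was this passage picked?” question.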

So why am I fascinated by Bonsai? Because Bonsai aims to be for ML/AI what SQL Server is for databases: a simplification layer that makes ML/AI transparent to developers and, ultimately, to the end user. Today, when you run an ML/AI POC, you often see faces of wonder and disbelief or, in the best case, of fascination. The reason is that the human brain cannot do massively parallel in-memory processing of millions of abstract data points, correlating incredible amounts of data by throwing even more incredible amounts of hardware at the problem. That’s why our good old traditional brains cannot easily evaluate whether or not AI/ML results make sense.

Bonsai’s approach could be described as “Microservices for AI/ML,” where developers deconstruct the problem into its individual components and then provide the Bonsai engine with data (text and visual) that enables the runtime to simulate what actions need to be taken to achieve the desired outcome. Of course, like all other ML/AI approaches, this follows the “trial and error” principle; however, there are three core differences that address the above problems:

1)    Bonsai lets the developer break down the reasoning process into individual components and sub-components.

2)    Bonsai lets the developer see intermediary results each step of the way.

3)    Bonsai benefits from a “mental model” of which factors should influence the outcome. The human provides this model, and Bonsai simply ignores it if it turns out to be wrong.

An added benefit of this transparent step-by-step approach is that the individual components of the designed decision process can be shared, even in a trained state, and then reused in different projects. This is the “microservices” aspect mentioned above. You could also call it “ML/AI for the masses,” since this new approach is much easier to learn than the traditional monolithic black-box concept.

Here’s an example:
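As a toy illustration, here is what a decomposed decision process might look like, applied to routing an IT ticket. This is plain Python, not Bonsai’s actual API; the component names and the mental-model keywords are my own invention. The point is the shape: each component can be run and inspected on its own, and the human’s mental model is an explicit input rather than something buried in a monolithic model.

```python
def parse_ticket(raw):
    """Component 1: turn a raw IT ticket string into structured fields."""
    sev, _, text = raw.partition(":")
    return {"severity": sev.strip().lower(), "text": text.strip()}

def classify(ticket, mental_model):
    """Component 2: score each category. The human-supplied mental model
    seeds the factors; training could later reweight or ignore it."""
    return {cat: sum(kw in ticket["text"] for kw in kws)
            for cat, kws in mental_model.items()}

def decide(ticket, scores):
    """Component 3: pick an action from the intermediate scores."""
    best = max(scores, key=scores.get)
    urgent = ticket["severity"] == "critical"
    return ("page-oncall" if urgent else "queue", best)

MENTAL_MODEL = {  # the human's prior: which factors should matter
    "network": ["latency", "packet", "dns"],
    "storage": ["disk", "volume", "quota"],
}

raw = "critical: disk volume at 98% on db-07"
ticket = parse_ticket(raw)               # inspect step 1
scores = classify(ticket, MENTAL_MODEL)  # inspect step 2
action = decide(ticket, scores)          # inspect step 3
print(ticket, scores, action, sep="\n")
```

Because each step returns an inspectable intermediate result, you can see exactly where a bad final decision came from, and a trained component such as `classify` could be shared and reused in a different project; contrast that with the single opaque score of a monolithic black box.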

Next, I’ll look at how this can be applied to IT operations management in hybrid cloud environments.