

Answering the Question Why: Explainable AI

Jans Aasman, AiTHORITY

Investments in AI may well hinge upon such visual methods for demonstrating causation between events analyzed by Machine Learning.

The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI?

Although explaining the results of Machine Learning models—and producing consistent results from them—has never been easy, a number of techniques have recently emerged to open the proverbial ‘black box’ that renders these models so difficult to explain.

One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether they’re related and how frequently they take place together.

When the knowledge graph environment becomes endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.

Correlation Isn't Causation

As Judea Pearl’s renowned The Book of Why affirms, one of the cardinal statistical concepts upon which Machine Learning is based is that correlation isn’t tantamount to causation. Part of the pressing need for Explainable AI today is that in the zeal to operationalize these technologies, many users are mistaking correlation for causation—which is perhaps understandable because aspects of correlation can prove useful for determining causation. In ascending order of importance, an abridged hierarchy of statistical concepts contributing to Explainable AI involves:

  • Co-occurrence: This basic Machine Learning precept indicates how often certain events occur together. For example, Machine Learning results might show that peanut-allergy symptoms have a high co-occurrence with asthma or other health conditions.
  • Correlation: Partially influenced by co-occurrence, correlation predominantly means there is a relationship between events. Significantly, it doesn’t denote what that relationship is.
  • Causation: This concept is essential to Explainable AI in that it illustrates why events occurred, or what caused them. For instance, findings might show that web page color, rather than product placement, is causative for upselling e-commerce customers.

Causation is the foundation of Explainable AI. It enables organizations to understand that when given X, they can predict the likelihood of Y. In aircraft repairs, for example, causation between events might empower organizations to know that when a specific part in an engine fails, there’s a greater probability for having to replace cooling system infrastructure.
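The distinction between the first two rungs of the hierarchy above can be made concrete with a small calculation. The following sketch (all data and names hypothetical) computes co-occurrence as a joint frequency and correlation as the phi coefficient for two binary events; note that neither number says anything about which event, if either, causes the other.

```python
import math

# Hypothetical patient records: binary indicators for two conditions.
records = [
    {"peanut_allergy": 1, "asthma": 1},
    {"peanut_allergy": 1, "asthma": 1},
    {"peanut_allergy": 1, "asthma": 0},
    {"peanut_allergy": 0, "asthma": 1},
    {"peanut_allergy": 0, "asthma": 0},
    {"peanut_allergy": 0, "asthma": 0},
]

def co_occurrence(rows, a, b):
    """Fraction of rows in which events a and b occur together."""
    return sum(r[a] and r[b] for r in rows) / len(rows)

def phi_coefficient(rows, a, b):
    """Pearson correlation for two binary variables (the phi coefficient)."""
    n = len(rows)
    n11 = sum(r[a] and r[b] for r in rows)
    n10 = sum(r[a] and not r[b] for r in rows)
    n01 = sum(not r[a] and r[b] for r in rows)
    n00 = n - n11 - n10 - n01
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

print(co_occurrence(records, "peanut_allergy", "asthma"))   # 2/6 ≈ 0.33
print(phi_coefficient(records, "peanut_allergy", "asthma"))  # 1/3 ≈ 0.33
```

Both measures are symmetric in their arguments, which is exactly why they cannot, on their own, answer the question *why*.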

Causation In Time

There’s an undeniable temporal element to causation that is readily illustrated in knowledge graphs: when depicting real-world events, organizations can ascertain which took place first and how it might have affected the others. This added temporal dimension is critical in establishing causation between events, such as patients having both HIV and bipolar disorder. In this domain, deep neural networks and other black-box Machine Learning approaches can pinpoint any number of interesting patterns, such as the fact that there’s a high co-occurrence of these conditions in patients.

When modeling these events in graph settings alongside other relevant events—like risky decisions individual bipolar patients made regarding sexual activity or substance abuse—organizations might differentiate various aspects of correlation. However, the ability to dynamically visualize the sequence of those events, to see which took place first and how that contributed to other events, is indispensable to finding causation.
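The temporal traversal described above can be sketched in a few lines. This is a minimal illustration (all event data, names, and timestamps hypothetical) of modeling time-stamped graph edges and checking temporal precedence—a necessary, though not sufficient, condition for causation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    subject: str
    predicate: str
    obj: str
    time: int  # day offset; a real knowledge graph would use full timestamps

# Hypothetical patient timeline modeled as time-stamped graph edges.
events = [
    Event("patient1", "diagnosed_with", "bipolar_disorder", time=0),
    Event("patient1", "engaged_in", "substance_abuse", time=30),
    Event("patient1", "diagnosed_with", "HIV", time=120),
]

def timeline(graph, subject):
    """Traverse a subject's events forwards in time."""
    return sorted((e for e in graph if e.subject == subject),
                  key=lambda e: e.time)

def preceded(graph, subject, earlier_obj, later_obj):
    """True if the `earlier_obj` event occurred before the `later_obj` event.

    Temporal precedence is necessary, but not sufficient, for causation.
    """
    times = {e.obj: e.time for e in graph if e.subject == subject}
    return (earlier_obj in times and later_obj in times
            and times[earlier_obj] < times[later_obj])

print([e.obj for e in timeline(events, "patient1")])
print(preceded(events, "patient1", "bipolar_disorder", "HIV"))  # True
```

Running the traversal backwards (reversing the sort) supports the kind of forwards-and-backwards dynamic visualization the article describes.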
