Explainable AI: What it is, why it’s important and the impact it has on businesses and society.

By: Eloy Gonzales and Josh Pranlal
Published on September 2, 2020

Artificial Intelligence (AI) systems are playing an increasing role in our daily lives, making it more important than ever that we can trust the decisions and outcomes these systems produce.

The applications are varied and widespread, from self-driving cars to chess-playing computers and chatbots. But how confident can we be in a machine deciding whom to hire? Can we rely on a medical diagnosis made by a computer? And how do we ensure facial recognition systems are bias-free?

The emerging field of Explainable AI (XAI) aims to address overall transparency in the machine decision-making process, ensuring systems are well understood by human stakeholders (referred to as ‘interpretability’). After all, and at least for now, real people hold the checkbooks and budgets, and are the ones approving the deployment of AI systems to solve their real-world challenges.

For several years, researchers and scientists have been grappling with the trade-off between accurate AI systems and the ability to understand those systems. For now, we can’t have our cake and eat it too.

Some of the simpler Machine Learning (ML) models, as outlined in this Forbes article, are reasonably explainable and offer a degree of transparency in how they reach their conclusions. However, the more powerful AI systems or programs, which stand to provide more benefit to humanity, tend to hide their internal processes (these are known as black-box models), making it very difficult to unravel why something didn’t go as planned.

Explainable AI – some quick definitions

AI or ML ‘interpretability’ is discussed using a few related terms, such as “white-box models”, “fairness” and “explainable ML”.

Interpretability generally means our ability to explain results in a way that makes sense to humans. However, practically speaking, it refers to how self-explanatory and/or transparent the system is, as outlined in the report by P. Hall and N. Gill.

‘White-box models’ is a term used to describe AI systems that, unlike their black-box counterparts, can be directly explained or interpreted by humans. There are also advanced tools that can interpret black-box models for us, which we’ll cover later in this series.
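To make the distinction concrete, here is a minimal sketch, in Python with scikit-learn and entirely made-up hiring data and feature names, of what ‘white box’ means in practice: the model’s learned parameters can be read directly as the explanation for its decisions.

```python
# A minimal white-box example: a logistic regression whose coefficients
# can be read directly as the model's reasoning. The data and feature
# names below are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy applicant data: [years_of_experience, interview_score]
X = np.array([[1, 4], [3, 6], [5, 8], [7, 5], [2, 9], [8, 7]])
y = np.array([0, 0, 1, 1, 1, 1])  # 1 = hired, 0 = not hired

model = LogisticRegression().fit(X, y)

# Each coefficient says how strongly a feature pushes the decision
# toward "hire" (positive) or away from it (negative).
for name, coef in zip(["years_of_experience", "interview_score"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A black-box model, such as a deep neural network, has no equivalently readable set of parameters, which is why the interpretation tools mentioned above are needed to explain its behavior after the fact.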

Fairness of AI systems is, as the term implies, the requirement that a system’s decisions or predictions are not unreasonably or unduly biased toward or against specific groups or demographics.
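As a rough illustration of what a basic fairness check can look like, the sketch below (Python with pandas, on invented decisions and group labels) compares how often two demographic groups receive a favorable outcome; real fairness audits use several metrics beyond this single comparison.

```python
# A simple demographic-parity style check: compare positive-outcome rates
# across groups. Group labels and predictions are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0, 0],  # 1 = favorable decision
})

rates = results.groupby("group")["prediction"].mean()
print(rates)

# A large gap between groups is one warning sign that the system
# may be treating demographics differently.
print("disparity:", rates.max() - rates.min())
```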

Why it’s important that AI systems are explainable: social motivations

AI has broad reach and the potential to impact our lives as more and more businesses adopt these technologies. However, as with any technology, we must be able to trust its usage as well as be aware of potential risks.

The aforementioned Forbes article discusses the importance of XAI and the notion that for humans to trust the decisions being made (i.e. the accuracy of these systems), we need to be able to understand how AI decisions are made and have the ability to stop, control and monitor these systems.

For humans to trust an AI or ML system, the resulting decisions should be accurate and make common sense. As outlined in ‘Interpretable Machine Learning' by Christoph Molnar, the more a machine’s decision affects a person’s life, the more important it is for the machine to explain its behavior.

If an AI system decision appears to have gone wrong, we must be able to understand the inner workings behind it so we can learn, satisfy human curiosity and avoid misleading data-driven insights.

Furthermore, the data AI systems use to learn how to perform (for example, photos of cats to help a system distinguish a cat from a dog) are often susceptible to cyber-attacks and hacking, adding another consideration for establishing and maintaining trust in AI systems.

Without the techniques mentioned above (interpretable models, explanation and fairness techniques), it can be quite challenging to detect whether the training data or input data has been compromised, or whether the system outputs or results have been hacked or intentionally modified.

For users of computer-based AI decision systems, explanations behind the decisions made are the most requested feature (likely most often by the perfectionists among us). That’s not to say, however, that all real-life applications need to be explainable and interpretable. In instances where there’s no significant impact on people’s lives, such as movie or holiday recommendations, a thorough understanding of the inner workings is not necessary.

Similarly, where a problem is well studied and widely accepted in the public domain, such as image-to-text systems backed by many years of industry experience, the systems clearly work and need little explanation.

On the flip side, these AI systems are expected to make objective, data-driven decisions, often in critical situations such as medical diagnosis and employment. In these potentially life-changing scenarios, both interpretability and accuracy are required for objective, unbiased decision-making.

Commercial considerations

Finally, many companies have been, are, and will be using AI systems for revenue-generating applications, and with that come multiple commercial motivations for ensuring interpretability and trust.

The general principles of applied ML are common across industries, but some industries apply it differently from what is commonly portrayed in the media, usually because competitive advantages or intellectual property rights are at play.

However, it is generally agreed that the more transparent a company is with its predictive models, the better its public image. That trust also makes it easier to develop newer, more robust and potentially more accurate systems over time, improving confidence in both the company’s image and its capability.

For more traditional industries such as banking, insurance and healthcare, some countries have legal policies and regulatory frameworks governing the creation, use and application of models, requiring them to be interpretable, fair and transparent so they can be analyzed by government regulators.

Furthermore, the costs associated with AI systems extend beyond financial expenses to potential reputational impacts related to the accuracy and fairness of a system’s predictions. For example, where a system appears to act in a discriminatory way or in violation of customer privacy, both reputation and financial performance are at risk, particularly if the incident is reported in the media.

Interpretable models, explanation and fairness tools can all help mitigate risks in real-world applications.

Designing ML applications in this way can deliver significant impact for society, while maintaining trust and improving public sentiment toward future applications, all of which we’ll cover in this blog series.
