Tech/Engineering, Innovation Series

An in-depth analysis of Explainable AI

Ritesh Singh, Software Engineer

Explainable AI Explained


Artificial intelligence powered by machine learning and deep learning has advanced at an unprecedented pace over the last decade. We have seen widespread industry adoption across various domains, including retail, technology, healthcare, and science. At the same time, the complexity of these systems has grown to the point where even AI experts struggle to follow how they reach their conclusions, effectively turning them into black boxes. Users need to trust these systems, understand the errors they make, and see the reasoning behind their predictions. That’s precisely where explainable AI comes into play. In this blog, we will explore what Explainable AI (XAI) is, why it is essential, and its guiding principles.

So, what is Explainable AI?

A typical machine learning workflow, from training to deployment in a production environment, looks something like this:

[Figure: A typical machine learning workflow]

We use data to train a model through a specific learning process. This learning process produces a prediction function that can be fed new inputs. The function outputs a prediction, which is presented through an interface that the end user sees and interacts with.

As an example, take a classification problem for a loan approval system. We could train our prediction function with an artificial neural network, a decision tree, an SVM, or a boosting model.

After the function has learned, we can feed it new inputs, which in our case are the loan criteria for a specific applicant, and it will return a prediction about whether the loan should be approved for that applicant. Finally, a user, in our case the bank, sees the output, makes decisions, and initiates specific actions. From the bank’s perspective, the problem here is that the prediction comes with no explanation.
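
To make the workflow concrete, here is a minimal sketch in Python using scikit-learn, with an entirely synthetic loan dataset and made-up feature names (annual income, credit score, loan amount). It is illustrative only, not the system described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic applicants: [annual_income, credit_score, loan_amount]
X = rng.uniform([20_000, 300, 1_000], [200_000, 850, 50_000], size=(1_000, 3))

# Toy labelling rule (purely for illustration): approve when income is high
# relative to the requested amount and the credit score is decent.
y = ((X[:, 0] / X[:, 2] > 4) & (X[:, 1] > 600)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "learning process": fit a prediction function on the training data.
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# Feed the learned function a new loan application.
new_applicant = np.array([[55_000, 710, 12_000]])
print("Approve loan?", bool(model.predict(new_applicant)[0]))
```

The prediction comes back as a bare yes/no; nothing in this output tells the bank why the application was approved or denied.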

Before deploying an ML model, we are aware of its possible errors (we try to reduce error during the learning process), but this only gives us a single number: a loss value that quantifies the distance between the true labels in the data and the predicted labels. Can we trust this error figure above everything else?

Our ML model can have a low error and still carry a particular bias, or make surprising predictions from time to time. It is worth understanding these odd predictions from a business perspective, as they might provide valuable insights, or they might mean something is wrong with our ML model.
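
Continuing the illustrative sketch above, one simple check is to compare the single aggregate accuracy number against accuracy on a slice of the data; the “low income” cut below is an arbitrary example of the kind of slice that can surface hidden skew.

```python
from sklearn.metrics import accuracy_score

# The aggregate error is one reassuring number, but slicing the test set
# can reveal behaviour that number hides.
y_pred = model.predict(X_test)
print("Overall accuracy:", accuracy_score(y_test, y_pred))

low_income = X_test[:, 0] < 50_000
print("Accuracy on low-income applicants:",
      accuracy_score(y_test[low_income], y_pred[low_income]))
```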

These concerns are what make explainable AI so desirable today. Let’s explore how it fits within our previous example.

[Figure: An explainable ML model workflow]


This time, we use a new and different learning process to learn a prediction function paired with an explainable model. The function can give us a prediction and explain the reasoning behind it. A new explanation interface displays this additional reasoning, giving users insight into why the model made a particular prediction.
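
As a rough illustration of such an explanation interface, an inherently interpretable model like the decision tree from the earlier sketch can print its learned rules and the path a specific applicant took through them (again assuming the synthetic data and hypothetical feature names from before).

```python
from sklearn.tree import export_text

# Print the learned rules of the decision tree from the earlier sketch;
# a tree is inherently interpretable, so its rules serve as a crude
# "explanation interface".
feature_names = ["annual_income", "credit_score", "loan_amount"]
print(export_text(model, feature_names=feature_names))

# decision_path shows which rules fired for one specific applicant,
# i.e. the reasons behind that single prediction.
path = model.decision_path(new_applicant)
print("Decision nodes visited for this applicant:", path.indices.tolist())
```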

Importance of XAI

Since the mainstream adoption of AI, it has been considered reasonable to train ML models as black boxes as long as they provide accurate predictions. Now, however, the focus is shifting towards the transparency and explainability of the ML models being used. Users want to know why the models come up with the predictions they produce. This shift is a result of the following:

  • Understanding what happens when ML models make predictions can help speed up the widespread adoption of these systems. It makes users increasingly comfortable with the technology and removes the magical veil which seems to surround AI. 
  • For some sectors, like insurance or banking, company policies or even legislation require that the ML models these companies use be explainable.
  • In some critical areas such as medicine, where AI can have a significant impact on improving quality of life, it is fundamental that users trust the models without a hint of doubt.
  • Explainable models help users make better use of the outputs, giving them greater impact on their research, business, or decision making.

There are more reasons, and the topic of explainable AI is broad and varied. Previously, the typical mindset was to prioritize accurate results, so ML models were selected for performance. However, ML models differ both in how well they perform and in how interpretable they are. For example, a random forest usually outperforms a single decision tree, but the former is much harder to explain than the latter. The following figure compares several well-known algorithms; as performance increases, explainability tends to decrease.

[Figure: A comparison of performance vs. explainability for well-known algorithms]
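
As a quick illustration of that trade-off on the same synthetic loan data from the earlier sketches, a random forest can be compared against a single decision tree; on a toy dataset the accuracy gap may be small or even reversed, but the forest’s many trees are far harder to inspect than one tree’s printable rules.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Fit both models on the same synthetic training split used earlier.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("Decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("Random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```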

Principles of XAI

The four principles of explainable AI are:

Explanation

An AI system should be capable of explaining its output, with evidence to support the explanation. The type of explanation a model provides depends on the end users of the system; examples include:

  • Explanations that are designed to benefit end-users by informing them about outputs. An example is a model that processes loan applications and provides reasons why the model approved or denied a loan.
  • Explanations designed to gain trust and acceptance in society. The loan application example may fall into this category if the explanations are detailed enough to help users understand why certain decisions were made, even when the outcome is not what the user wants (e.g. denial of a loan application).

Meaningful

An AI system’s explanation is meaningful if the user can understand it. If the recipients can grasp the reasons the model provides, the model satisfies this principle.

This principle does not imply that there is a one-size-fits-all answer. For a given model, different groups of users may require different explanations. The meaningful principle encourages explanations to be tailored to each user group.

Explanation accuracy

An AI system should be able to describe how it arrived at its decision output. This principle focuses on explanation accuracy, which is not the same as decision accuracy — one doesn’t necessarily imply the other.

Knowledge limits

An AI system should operate only within the limits of its knowledge. This prevents the inaccurate outcomes that can arise when an ML model is pushed outside of its boundaries.

To satisfy this principle, a system should identify, and declare, its knowledge limits. This helps maintain trust in the system’s outputs and reduces the risk of misleading or incorrect decisions. Consider a system built to classify fish species as part of an aquaculture pipeline. If the model is shown an image that contains no fish, say debris or a dog, it should indicate that it did not identify any fish rather than produce a misleading identification.
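
A simplified sketch of this behavior is a classifier that abstains when its confidence falls below a threshold; the fish classifier, the threshold value, and the helper function below are all hypothetical, not part of any specific system.

```python
import numpy as np

def classify_with_abstention(model, image_features, class_names, threshold=0.8):
    """Return a species name, or abstain when confidence is too low.

    `model` is any classifier exposing predict_proba (the fish classifier
    here is hypothetical); `threshold` is an assumed cut-off, not a value
    taken from the article.
    """
    probs = model.predict_proba(image_features)[0]
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "No fish identified with sufficient confidence"
    return class_names[best]
```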

To realize these principles, a suite of ML techniques has emerged that produces models offering an acceptable trade-off between explainability and predictive utility, enabling humans to understand, trust, and manage the emerging generations of AI models. Among these techniques, two frameworks are widely recognized as the state of the art in XAI:

  • LIME (Local Interpretable Model-agnostic Explanations), introduced by Ribeiro et al. (2016)
  • SHAP (SHapley Additive exPlanations) values, introduced by Lundberg and Lee (2017)

LIME and SHAP are surrogate-model techniques for opening up black-box machine learning models. Instead of trying to understand how the model scores every data point globally, a surrogate model estimates how much each variable contributed to the score of one particular data point; in other words, it interprets the model locally, around a single prediction, rather than as a whole. It does this by slightly tweaking the input and observing how the prediction changes, keeping the changes small enough that the perturbed samples still closely match the original data point.
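
As a minimal sketch of how LIME is typically used, the snippet below explains a single prediction of the hypothetical loan model from the earlier examples; it assumes the `lime` package is installed and reuses the synthetic data, feature names, and class names introduced there.

```python
from lime.lime_tabular import LimeTabularExplainer

# Build a tabular explainer around the training data from the earlier
# loan sketch (feature and class names are the made-up ones used there).
explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=["annual_income", "credit_score", "loan_amount"],
    class_names=["denied", "approved"],
    mode="classification",
)

# Explain a single applicant: LIME perturbs this row, queries the model,
# and fits a simple local surrogate to estimate each feature's contribution.
explanation = explainer.explain_instance(
    new_applicant[0], model.predict_proba, num_features=3
)
print(explanation.as_list())
```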

The advantage of LIME and SHAP is that they can simplify complex models such as random forests by scoring an individual data point and breaking down the contribution of each variable to that score. Both approaches are based on approximation: the surrogate model approximates the black-box model’s behavior around the prediction being explained.
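
Here is a similarly minimal sketch with SHAP, assuming the `shap` package is installed and reusing the random forest and applicant from the earlier trade-off example.

```python
import shap

# TreeExplainer computes Shapley values for tree-based models; here it is
# applied to the random forest from the earlier sketch.
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(new_applicant)

# Each value is one feature's contribution to the score for this applicant;
# the exact shape of the output (list per class vs. single array) varies
# between SHAP versions.
print(shap_values)
```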

Conclusion

Explainable AI is a new and growing approach that helps users and businesses understand the decisions and consequences their AI technology suggests. With the constant advancement and application of new technologies, adapting to and understanding these changes is essential. Many sectors will require explainable AI to make sense of the insights, forecasts, and predictions of their AI and ML systems. Now is an ideal time to embrace and seize the opportunities of explainable AI.

Looking to learn more about the intricacies of AI and ML, and similar cutting-edge applications for cloud backup and data protection? Visit the Innovation Series section of Druva’s blog archive.