The Fault in AI Predictions: Why Explainability Trumps Predictions

Article

By Vinay Kumar

January 23, 2023

The last few years have seen tectonic shifts in the fields of artificial intelligence and machine learning. There have also been plenty of examples where models failed and model predictions created troubling outcomes, creating stumbling blocks to adopting AI/ML, especially for mission-critical functions and in highly regulated industries. For example, research shows that even though algorithms predict the future more accurately than human forecasters, people still choose the human forecaster over the statistical algorithm. This phenomenon, known as algorithm aversion, is costly, and understanding its causes matters. This lack of trust gave rise to Explainable AI (XAI).

What is XAI?

In machine learning, Explainability (XAI) refers to understanding and comprehending a model’s behaviour from input to output. It resolves the ‘black box’ issue by making models transparent. Explainability covers a broad scope: explaining the technical aspects of a model, demonstrating the impact of changes in variables, showing how much weight each input is given, and more. In addition, it provides the much-needed evidence backing an ML model’s predictions, making them trustworthy, responsible and auditable.

The main goal of Explainability is to understand the model, and it lays out how and why a model has given a prediction. There are two types of Explainability (a short sketch follows the list below):

  • Global Explainability, which focuses on the overall model behaviour, providing an overview of how various data points affect the prediction.
  • Local Explainability, which focuses on individual model predictions and how the model functioned for that prediction.
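To make the distinction concrete, here is a minimal sketch in Python. It uses scikit-learn’s permutation importance for the global view and a naive one-feature-at-a-time perturbation for the local view; the dataset, model and feature names (income, loan_amount, and so on) are hypothetical stand-ins, not anything prescribed by this article.

    # Minimal sketch contrasting global and local explainability.
    # All data, model choices and feature names are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    feature_names = ["income", "loan_amount", "credit_history", "occupation_code", "age"]
    X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Global explainability: how much each feature matters across the whole test set.
    global_imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, global_imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"[global] {name}: {score:.3f}")

    # Local explainability: why the model scored this one case the way it did.
    # Here, a naive one-feature-at-a-time perturbation around a single row.
    row = X_test[0:1]
    base = model.predict_proba(row)[0, 1]
    print(f"\n[local] predicted probability (class 1) for this case: {base:.3f}")
    for i, name in enumerate(feature_names):
        perturbed = row.copy()
        perturbed[0, i] = X_train[:, i].mean()   # replace with an 'average' value
        delta = base - model.predict_proba(perturbed)[0, 1]
        print(f"[local] contribution of {name}: {delta:+.3f}")

The global numbers answer “what does the model rely on overall?”, while the local deltas answer “why did the model score this particular case the way it did?”.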

How is XAI relevant to different stakeholders?

Fundamentally, any user of these models needs additional explanations to understand how the model arrived at a prediction. The depth of such explanations varies with the criticality of the prediction and with the background and influence of the user. For example, in loan underwriting use cases, users are typically Underwriters, Customers, Auditors, Regulators, Product Managers and Business Owners. Each of them needs a different explanation of how the model worked, and the depth of that explanation varies from an underwriter to a regulator.

Most commonly used XAI techniques are understandable only to an AI expert. Just as the rapid adoption of AI has driven a growing need for simpler tools and frameworks, there is an equally pressing need for a more straightforward framework for Explainability.

Builders: DS/ML teams

ML engineers and data scientists are the builders of automated predictive systems. They work with volumes of data to optimise the model’s decision-making. Hence, they need to monitor the model and understand the system’s behaviour to improve it, ensure consistency in model performance, flag performance outliers to uncover retraining opportunities, and ensure that there is no underlying bias within the data. Explainable AI helps them answer the most crucial questions, like the ones below (a simple bias check is sketched after the list):

  • Is there bias in the data?
  • What has worked in the model and what hasn’t?
  • How can one improve the model performance?
  • How should one modify the model?
  • How can one be informed about model deviation in production?
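As one concrete illustration of the first question, here is a minimal sketch of a disparate-impact style check on training data, comparing positive-outcome rates across groups of a sensitive attribute. The column names (“approved”, “gender”) and the 80% threshold (the common “four-fifths rule” heuristic) are illustrative assumptions, not a prescription from this article.

    # Minimal sketch: checking a dataset for group-level bias before training.
    # Column names ("approved", "gender") and the 0.8 threshold are illustrative.
    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
        """Ratio of the lowest to the highest positive-outcome rate across groups."""
        rates = df.groupby(group)[outcome].mean()
        return rates.min() / rates.max()

    df = pd.DataFrame({
        "gender":   ["M", "M", "F", "F", "M", "F", "M", "F"],
        "approved": [1,    1,   0,   1,   1,   0,   0,   1],
    })

    ratio = disparate_impact_ratio(df, outcome="approved", group="gender")
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:   # the 'four-fifths rule' heuristic
        print("warning: possible underlying bias in the data")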

Maintenance: Production/Software Engineering teams

IT/Engineering teams need to ensure that the AI system runs effectively, gain deep insights into its everyday operations, and troubleshoot any issues that arise. Using Explainable AI equips them to stay on top of crucial questions like:

  • Why has this issue occurred? What can be done to fix it?
  • How can one enhance operational efficiency?

Users: Experts/Decision makers/Customers

Users are the end consumers of the model predictions. Explainable AI helps them uncover whether their goals are being met, how the model uses their data, and why the model made a particular prediction, all in a simple, interpretable format. For example, in underwriting, if a new case is classified as ‘high risk’, the underwriter has to understand how and why the model arrived at the decision, whether the decision is fair, and whether it complies with regulatory guidelines. Explainable AI helps such end users get insights on questions like these (a small what-if sketch follows the list):

  • How did the model arrive at this decision?
  • How is the input data being used for decision-making?
  • Why does the case fall in this category? What can be done to change it?
  • Has the model acted fairly and ethically?
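The last two questions are essentially what-if questions. Below is a minimal sketch of how a decision-support tool might probe them by nudging one feature at a time and re-scoring the case; the model, threshold and feature values are hypothetical, and dedicated counterfactual methods are far more careful about plausibility and actionability than this brute-force loop.

    # Minimal what-if sketch: which single-feature change would move a 'high risk'
    # case past the decision threshold? All names and values are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "loan_amount", "credit_history_score"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    case = np.array([[-0.5, 0.8, 0.0]])   # a hypothetical 'high risk' applicant
    threshold = 0.5
    print("current approval probability:", round(model.predict_proba(case)[0, 1], 3))

    for i, name in enumerate(feature_names):
        for step in (-1.0, 1.0, -2.0, 2.0):   # try nudging the feature both ways
            candidate = case.copy()
            candidate[0, i] += step
            p = model.predict_proba(candidate)[0, 1]
            if p >= threshold:
                print(f"changing {name} by {step:+.1f} would flip the decision (p={p:.3f})")
                break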

Owners: Business/Process/Operations owners

Business or Process owners need to understand the model’s behaviour and analyse its impact on the overall business. They must look at multiple aspects, such as refining strategy, enhancing customer experiences, and ensuring compliance. Explainable AI equips them with comprehensive model visibility to track bias, increase customer satisfaction, and visualise the business impact of predictions, along with answers to the following:

  • How is the system arriving at this decision?
  • Are the desired goals being met?
  • What variables are considered and how?
  • What are the acceptable and unacceptable boundary limits on this transaction?
  • How can this AI decision be defended to a regulator or customer?

Risk managers: Audit/Regulators/Compliance

Regulators and auditors need trust and confidence that risks are under control. Explainable AI provides them with information on the model’s functioning, fairness and possible biases, and a clear view of failure scenarios, while ensuring that the organisation is practising responsible and safe AI and meeting regulatory and compliance requirements. It helps them address questions such as:

  • Is there an underlying bias in the model?
  • Is this prediction fair?
  • How can one trust the model outcome?
  • How can we ensure consistency in the model in production?
  • What are the influencing factors in decisions and learning?
  • How can the risks of using AI be managed?

While Explainability has become a prerequisite, justifying a prediction is just as important as getting it right. A prediction can be accurate, but is it also correct? Hence, accuracy is not enough; evidence is required.

Why is Explainability challenging to attain?

AI systems are inherently complex. Developing, studying, and testing systems for production is complex, and maintaining them in production is significantly more challenging. Explaining them accurately, in a way that is understandable and acceptable to all stakeholders, poses a different challenge altogether!

AI explainability challenges

Explanations: Highly contextual, usually ‘lost in translation’

The explanations need to be understood not only by AI experts but by all stakeholders. Yet, perhaps unsurprisingly, the complexity of these systems means the explanations are usually accessible only to AI experts. Data science and ML teams can typically understand them, but translating them into business terms is where the meaning often gets lost.

Let’s take the current explainability approaches: almost all of them use feature importance as the explanation. But how does an end user, an underwriter, a doctor, or a risk manager make sense of feature importance? How does it align with business expertise? For example, for a given prediction, an underwriter might consider ‘Occupation’ the most important factor in deciding whether to approve or reject a loan. But the XAI method used by the data science team might not list ‘Occupation’ among the top ten features. This mismatch erodes confidence in the model.
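One lightweight guardrail against such mismatches is to compare the domain expert’s expected drivers with the method’s top-ranked features for each case. The sketch below assumes the explanation has already been reduced to a feature-to-score mapping, whichever XAI method produced it; the feature names and scores are illustrative.

    # Minimal sketch: flag cases where an expert-expected driver is missing from
    # the explanation's top-k features. Names and scores are illustrative.
    from typing import Dict, List

    def missing_expected_features(attributions: Dict[str, float],
                                  expected: List[str],
                                  top_k: int = 10) -> List[str]:
        """Return expert-expected features absent from the top-k attribution ranking."""
        ranked = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)
        top = set(ranked[:top_k])
        return [f for f in expected if f not in top]

    # A hypothetical explanation for one loan application.
    attributions = {"income": 0.31, "loan_amount": -0.27, "credit_history": 0.22,
                    "age": 0.05, "occupation": 0.01, "postcode": -0.12}
    underwriter_expects = ["occupation", "income"]

    gaps = missing_expected_features(attributions, underwriter_expects, top_k=3)
    print("expected drivers missing from the top-3:", gaps)   # -> ['occupation']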

Accuracy of explanations

Is any XAI method enough to make an AI solution acceptable? The answer depends on the sensitivity of the use case and the user. While minimal XAI may suffice for less sensitive use cases, as cases become sensitive and high-risk, one cannot simply use any ‘XAI’ method.

For sensitive use cases, wrong explanations can create more harm than no explanation!

Going back to the loan underwriting example: let’s say you used a traditional XAI method like LIME to understand how your model worked and took its feature importances as the output. Unfortunately, LIME can produce different outputs for different perturbation samples. So, when a case is audited, internally or by the regulator, the explanations may not align or stay consistent, creating trust challenges in the system and the overall business.
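The instability is easy to observe: explain the same row twice with different random seeds (and a modest number of perturbation samples) and compare the rankings. The sketch below uses the lime package on a hypothetical tabular model, assuming lime and scikit-learn are installed; the exact rankings will vary from run to run, which is precisely the point.

    # Minimal sketch: the same instance explained twice by LIME with different
    # random seeds can yield noticeably different feature rankings.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    feature_names = [f"f{i}" for i in range(6)]
    X, y = make_classification(n_samples=800, n_features=6, n_informative=3, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    row = X[0]
    for seed in (1, 2):
        explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                         mode="classification", random_state=seed)
        exp = explainer.explain_instance(row, model.predict_proba,
                                         num_features=6, num_samples=500)
        ranking = [name for name, _ in exp.as_list()]
        print(f"seed {seed}: {ranking}")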

Humans are biased to trust the path of the ‘Nexus trail of evidence’

When interacting with AI models, all stakeholders turn to the ‘Builders’ (Data Science/ML teams) to investigate the source or origin of an explanation. The stakeholders rely on whatever information the builders share, with little to no access to the AI model itself. If an explanation or its evidence needs further analysis, to find the root cause of what the model learned and to validate the decision, building such a dynamic trail of evidence is very complex. Humans also carry intrinsic biases about learning methods: they tend to trust a decision tree whose branches align with their expectations, even when what the model actually learned is chaotic in hindsight and merely happens to concur with that global intuition.

Diversity of metrics

While there are various tools to explain or interpret AI models, each focuses on only a fraction of what defines an accurate, sufficient explanation, without capturing the other dimensions. An effective, in-depth explanation requires combining various metrics: reviewing different types of opacity, analysing several XAI approaches (since different approaches can generate different explanations), running consistent user studies (which can be skewed by UI phrasing, visualisations, specific contexts, needs, and more), and ultimately developing standard metrics.
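At least one of those dimensions, agreement between methods, can be quantified cheaply. The sketch below rank-correlates two attribution vectors for the same prediction using Spearman’s rho; the two “methods” are just hypothetical score vectors standing in for, say, LIME and SHAP outputs.

    # Minimal sketch: quantify how much two explanation methods agree on one
    # prediction by rank-correlating their feature attributions (hypothetical values).
    from scipy.stats import spearmanr

    features = ["income", "loan_amount", "credit_history", "occupation", "age"]
    method_a = [0.40, -0.30, 0.20, 0.05, 0.02]   # e.g. LIME-style weights, per feature
    method_b = [0.35, -0.10, 0.30, 0.15, 0.01]   # e.g. SHAP-style values, per feature

    for name, a, b in zip(features, method_a, method_b):
        print(f"{name:>15}: method A {a:+.2f} | method B {b:+.2f}")

    rho, _ = spearmanr([abs(v) for v in method_a], [abs(v) for v in method_b])
    print(f"rank agreement between the two explanations: rho = {rho:.2f}")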

Explainability risks

AI explainability comes with its own risks. As mentioned earlier, poor or incorrect explanations can hurt the organisation badly. Malicious actors or competitors can exploit them, raising privacy risks, especially around mission-critical decisions. Organisations need practical measures in place to mitigate these risks.

While everyone focuses on model manufacturing, the right product teams have started emphasising the fundamentals of good AI solutions, and XAI is the foundational feature for achieving them. The vision of trustworthy AI is incomplete without Explainability. Yet, as it stands, Explainability mostly serves the needs of AI experts. To achieve truly explainable, scalable and trustworthy AI, Explainability should be incorporated in a way that works across different domains, objectives and stakeholders.

Increased clarity on regulations has also made regulated industries look at XAI more seriously and re-evaluate currently deployed models, along with the risk of using them in production. As more users experiment with and validate XAI templates, we could soon see good templates for each use case. In such a scenario, AutoML + AutoXAI can scale adoption exponentially while still achieving responsible and trustworthy AI.

This article was originally published in Analytics India Magazine: https://analyticsindiamag.com/the-fault-in-ai-predictions-why-explainability-trumps-predictions/
