Explainability (XAI) Techniques for Deep Learning and Their Limitations
November 26, 2024
Explainable AI (XAI) is essential for bridging the gap between complex, black-box models and human understanding, building trust and enabling effective AI deployment. This blog explores key explainability techniques for deep learning models, the limitations of current methods, and the data-specific challenges that must be addressed to build more reliable and interpretable AI systems.
1. Visualization-Based Methods
These methods aim to visually illustrate how model inputs influence outputs, helping users grasp the internal decision-making process of complex models.
A. Backpropagation-Based Methods
Backpropagation-based methods trace input influences back through the network layers to highlight data components most relevant to the output. They can leverage the network’s structure and specific layers to generate explanations. Common techniques include:
- Activation Maximization: Generates images that maximize neuron activations, revealing patterns in the data that trigger certain model responses.
- Class Activation Maps (CAM) and Grad-CAM: Visualize the regions of input data (often images) that contribute most strongly to the output. Grad-CAM uses the gradients flowing into the final convolutional layers to build heatmaps highlighting the image areas most important for a specific prediction.
- Layer-Wise Relevance Propagation (LRP): Propagates the model's output backwards through the network, distributing relevance scores to the input features and pinpointing those most critical to the prediction. The technique is applied to deep architectures such as CNNs and RNNs.
- DeepLIFT and Integrated Gradients: Compare the sample to a reference (baseline) input and attribute the change in output to changes in the input. DeepLIFT computes differences in activation between the sample and the reference at each unit, while Integrated Gradients accumulates gradients along a straight-line path from the reference to the sample.
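To make this family of methods concrete, below is a minimal PyTorch sketch of Integrated Gradients. The names `model`, `x`, `baseline`, and `target_class` are assumed placeholders (e.g., an image classifier, an input image, and an all-zero baseline of the same shape); this is an illustrative sketch rather than a reference implementation, and libraries such as Captum provide production-ready versions.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Minimal Integrated Gradients sketch for a single input x of shape (C, H, W)."""
    model.eval()
    # Build a straight-line path of interpolated inputs between the baseline and the sample.
    alphas = torch.linspace(0.0, 1.0, steps + 1).view(-1, 1, 1, 1)
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)

    # Score the target class at every point along the path and backpropagate.
    scores = model(path)[:, target_class]
    grads = torch.autograd.grad(scores.sum(), path)[0]

    # Average the gradients along the path (trapezoidal rule) and scale by the input change.
    avg_grads = 0.5 * (grads[:-1] + grads[1:]).mean(dim=0)
    return (x - baseline) * avg_grads
```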
B. Perturbation-Based Methods
These methods approach the model as a “black box,” altering or removing parts of the input to observe how the output changes. Perturbation-based techniques include:
- Occlusion Sensitivity: Perturbs (or removes) specific parts of the input to test the model's sensitivity to those changes; a minimal sketch appears after this list.
- Representation Erasure: Applies data-agnostic modifications, removing features like words (in text data) or specific image pixels to evaluate impact.
- SHAP (SHapley Additive exPlanations): A game-theoretic approach that calculates the marginal contribution of each feature to the output by permuting feature values across samples.
- RISE (Randomized Input Sampling for Explanations): Uses random masks to generate saliency maps that indicate feature importance.
- Heatmaps and Saliency Maps (e.g., SmoothGrad, Extremal Perturbations): Combine visual and numerical interpretations to highlight decision-influencing regions.
- Temporal Masks for Video: Masks applied to sequences to study saliency in temporal models.
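To illustrate the black-box perturbation approach, here is a rough occlusion-sensitivity sketch in PyTorch. The classifier is queried only through its forward pass; `model`, `image`, and `target_class` are assumed placeholders, and the patch size, stride, and fill value are arbitrary choices.

```python
import torch

def occlusion_sensitivity(model, image, target_class, patch=16, stride=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the class score drops."""
    model.eval()
    _, height, width = image.shape
    with torch.no_grad():
        base_score = model(image.unsqueeze(0))[0, target_class].item()

    rows = (height - patch) // stride + 1
    cols = (width - patch) // stride + 1
    heatmap = torch.zeros(rows, cols)

    for i in range(rows):
        for j in range(cols):
            occluded = image.clone()
            y, x = i * stride, j * stride
            occluded[:, y:y + patch, x:x + patch] = fill   # blank out one region
            with torch.no_grad():
                score = model(occluded.unsqueeze(0))[0, target_class].item()
            heatmap[i, j] = base_score - score             # large drop => important region
    return heatmap
```

Each cell of the returned heatmap measures how much the prediction degrades when that region is hidden, which is also why such methods become expensive for high-resolution or temporal inputs.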
2. Distillation-Based Methods
Distillation techniques create simpler, interpretable models that approximate the behavior of complex neural networks, using either local or global approaches.
A. Local Approximation
Local approximation methods concentrate on specific instances or a small, representative subset of data and approximate the model's behavior locally around those points:
- LIME (Local Interpretable Model-Agnostic Explanations): Builds a surrogate model (often linear) around a specific prediction to explain influential features within that neighbourhood.
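The lime package implements this idea for text, images, and tabular data; the sketch below only illustrates the underlying mechanism for a tabular input, assuming a hypothetical black-box `predict_fn` that returns a score for each perturbed sample.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate around a single instance x (1-D array)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise (a simplified tabular-style neighbourhood).
    samples = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    preds = predict_fn(samples)                      # black-box scores, shape (num_samples,)

    # Weight each perturbed sample by its closeness to the original instance.
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # The surrogate's coefficients serve as local feature-influence estimates.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_
```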
B. Model Translation
Model translation methods aim to approximate the entire model with a simpler model, mapping input features to predictions:
- Tree-Based Models, Graphs, and Rule-Based Systems: Translate complex neural networks into interpretable formats such as decision trees or rule-based representations, giving a more transparent view of the decision process (a minimal distillation sketch follows this list). However, this often leads to oversimplification, as continuous model behavior is approximated by discrete rules that may not capture every nuance.
- Methods like SARFA (Specific and Relevant Feature Attribution) provide saliency-based explanations tailored to reinforcement learning agents.
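As a simple example of model translation, one can fit a shallow decision tree to the predictions of the black-box model itself. The sketch below assumes a hypothetical `black_box_predict` function and a pool of inputs `X`; the fidelity score indicates how faithfully the interpretable surrogate mimics the original model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def distill_to_tree(black_box_predict, X, max_depth=4):
    """Fit a shallow, human-readable decision tree that mimics a black-box classifier."""
    y_teacher = black_box_predict(X)            # labels produced by the complex model
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, y_teacher)
    fidelity = float(np.mean(surrogate.predict(X) == y_teacher))
    return surrogate, fidelity

# Usage sketch:
# tree, fidelity = distill_to_tree(model_predict, X_pool)
# print(export_text(tree))   # prints the learned rules as readable text
```

The depth limit is exactly the accuracy-versus-interpretability trade-off discussed later: a deeper tree mimics the network more closely but becomes harder to read.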
Current Methods of Explainable AI (XAI) Fail to Capture the Complexities of Deep Learning Models
Several methods have been developed to make complex AI systems more interpretable. Despite the advancements in methods, XAI faces critical challenges in achieving full transparency and reliability:
1. Black-Box Nature of DNNs
Deep neural networks (DNNs) often function as opaque systems, making it difficult to understand their decision-making processes. This lack of interpretability can let problems such as biased or adversarially manipulated outputs in image classification go undetected.
2. Scalability and Computational Complexity
- Perturbation-Based Methods: Computationally expensive, particularly for high-dimensional inputs like video or sequence data.
- Global Methods (e.g., SHAP): Struggle to scale to large, high-dimensional models due to intensive resource requirements. SHAP calculates the contribution of each feature to a prediction, and the cost of exact computation grows exponentially with the number of features (see the Shapley value formula after this list). Similarly, LIME generates local approximations by perturbing input data, but its reliance on random feature perturbations can result in inconsistent explanations.
- Occlusion and RISE: Require a separate forward pass for every occluded patch or random mask, which is time-intensive for large inputs.
- Temporal Masks: Increase computational burden with longer sequence lengths.
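The exponential cost noted above follows directly from the definition of the Shapley value: each feature's attribution averages the model's marginal gain over every possible subset of the remaining features, so exact computation touches $2^{|F|}$ coalitions, which is why practical tools fall back on sampling or kernel-based approximations.

$$\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!}\,\bigl[f_x(S \cup \{i\}) - f_x(S)\bigr]$$

Here $F$ is the set of input features and $f_x(S)$ denotes the model's prediction when only the features in $S$ are present.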
3. Architecture Dependency
Some methods, such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Layer-Wise Relevance Propagation (LRP), are limited to specific architectures like CNNs, restricting their applicability to diverse models.
4. Input Modality Challenges
- Text and Discrete Data: Perturbation-based methods often struggle with handling discrete data like text.
- Video and Temporal Dependencies: Methods like Temporal Masks face difficulties in interpreting long video sequences and require task-specific adaptations.
5. Evaluation Challenges
- Subjective Metrics: Reliance on human annotation (e.g., pointing games) introduces inconsistencies.
- Lack of Standards: Heatmaps and saliency maps lack universally accepted evaluation criteria, making interpretation subjective and inconsistent.
6. Generalizability
- Limited Scope: Many techniques are designed for specific tasks (e.g., RL-focused methods) or data types (e.g., images), reducing their broader applicability.
- Sub-optimal Granularity: High-granularity methods may accurately highlight regions but lose semantic meaning, while patch-based methods smooth over fine details.
7. Vulnerability to Adversarial Noise
Perturbation granularity can expose methods to adversarial attacks, undermining their robustness and reliability.
8. Interpretation Complexity
- Post-Hoc Methods (e.g., SHAP, Grad-CAM): May fail to provide intuitive explanations, particularly for non-technical users.
- LIME: Highly dependent on sampling strategies, which can lead to oversimplification and misrepresentation of global behaviors.
9. Accuracy vs. Interpretability Trade-offs
- Model Translation Techniques: Translating black-box models into rule-based or tree-based systems often sacrifices accuracy for interpretability.
- Distillation Methods: May simplify explanations but fail to capture complex global behaviors.
10. Data Dependence
Most global interpretability methods require extensive data and computational resources, limiting their practicality in data-scarce scenarios.
Data-Specific Challenges in Explainability Methods
Apart from the challenges mentioned above, explainability methods also encounter unique challenges across various data types.
In large language models (LLMs) and text data, attention weights, commonly used for interpretations, often fail to represent true reasoning paths, making explanations unreliable. Handling long-range dependencies further complicates attributions, while perturbation-based methods risk disrupting semantic or syntactic meaning, leading to misleading outcomes.
In computer vision, techniques like Grad-CAM and saliency maps often lack precision, highlighting broad regions rather than pinpointing causal features. Tasks such as medical imaging and object detection demand high-resolution granularity, which current methods fail to provide. Moreover, adversarial sensitivity, where small input changes drastically affect predictions, compromises the reliability of explanations.

For sequential and temporal data, long-range dependencies across time steps make it difficult to attribute predictions to specific moments, while the computational demands of models processing video or time-series data pose additional challenges.
Finally, in multi-modal data, existing methods struggle to integrate interactions between different modalities, such as text and images, leading to incomplete explanations. Modality-specific biases can dominate predictions, further obscuring the contributions of other data types.
Conclusion
Though several techniques, including visualization-based, perturbation-based, and distillation-based methods, have advanced our understanding of complex models, enterprises still face significant adoption challenges. The opaque nature of deep learning systems, the computational overhead of current explanation methods, and their limited ability to handle varied data modalities such as text, images, and multi-modal inputs all stand in the way of deploying reliable AI systems.
To address these gaps, future research must prioritize methods that balance interpretability and accuracy, scale efficiently for large models, and provide robust explanations across data types.
This is Part I of our series on AI explainability. In the next blog, we will explore the importance of explainability and the opportunities it can unlock.