Global Explanations
Global feature importance aggregates per-feature attributions across all the baseline data, providing a comprehensive view of how each feature contributes to the model's predictions on a broader scale.
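As a minimal illustration of this aggregation (not AryaXAI's internal implementation), the sketch below takes hypothetical per-sample attributions, such as SHAP values, and reduces them to a single global importance score per feature by averaging absolute attributions over the baseline data. The sample values and feature names are made up for the example.

```python
import numpy as np

# Hypothetical per-sample attributions (rows: baseline samples,
# columns: features), e.g. SHAP values produced for each prediction.
attributions = np.array([
    [ 0.40, -0.10, 0.05],
    [-0.30,  0.20, 0.10],
    [ 0.50, -0.15, 0.00],
])
feature_names = ["age", "income", "tenure"]

# Global importance: mean absolute attribution per feature
# across all baseline samples.
global_importance = np.abs(attributions).mean(axis=0)

# Rank features from most to least important.
for name, score in sorted(zip(feature_names, global_importance),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Averaging absolute values (rather than raw values) prevents positive and negative attributions from cancelling out, which is the usual convention when rolling local explanations up into a global ranking.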
To get the global feature importance of the current active model:
AryaXAI supports multiple explainability methods to enhance model interpretability:
- Default Method:
- SHAP (SHapley Additive exPlanations): SHAP is enabled by default, providing consistent and reliable feature attributions.
- Additional Supported Methods:
- LIME (Local Interpretable Model-Agnostic Explanations): LIME is available as an alternative for generating local explanations.
- Upcoming Methods:
- CEM (Contrastive Explanation Method): Soon to be integrated, CEM will allow generating contrastive explanations to highlight critical feature differences.
- DL-Backtrace: Our proprietary explainability method, designed for deep learning models, will offer highly accurate and stable explanations tailored to complex architectures.
You can switch between supported methods based on your project requirements.
SHAP exposes 'Data Sampling size' as a customizable parameter, which can be set while training a new model in AryaXAI AutoML.
Using Multiple Explainability Methods for Model Training and Upload
You can train or upload models with support for multiple explainability methods simultaneously. This feature allows you to leverage different approaches for better insights into your model's behavior.
To enable multiple explainability methods, use the explainability_method parameter as shown below:
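As a sketch only, a call using this parameter might look like the fragment below. The parameter name `explainability_method` comes from this documentation, but the surrounding method name and its other arguments are assumptions; consult the AryaXAI SDK reference for the exact training call in your version.

```python
# Illustrative fragment — `train_model`, `project`, and `model_type` are
# hypothetical names; only `explainability_method` is documented here.
project.train_model(
    model_type="Xgboost",                    # hypothetical model choice
    explainability_method=["shap", "lime"],  # enable multiple methods at once
)
```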
The function below visualizes the prediction path taken by the model. If the graph does not appear, retrain on a larger compute instance; generation can sometimes fail when there is not enough compute available.