Case explainability

The command below displays a list of cases. You can filter the list by tag and search for a particular case by its unique identifier.


project.cases(tag='Training')  # list cases; filter by tag, search by unique identifier

Activating Inference for an Inactive Model:

When the default XGBoost model is active, you can still access cases associated with an inactive model by first activating inference for that model. Use the following function to activate inference:


project.update_inference_model_status(model_name="XGBoost_v2", activate=True)

Retrieving Case Details for a Specific Model:

Once inference for the specified model is activated, retrieve the case details with the case_info function:


case_info = project.case_info(unique_identifier="129550520", tag="training", model_name="XGBoost_v2")

Feature Importance

Case Feature Importance - Retrieve and analyze feature importance for a given case using SHAP or LIME.


case_info.explainability_feature_importance()

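SHAP and LIME attribution scores can be negative (a feature pushing the prediction down), so the most influential features are the ones with the largest absolute attribution. The sketch below shows how you might rank them; the dictionary shape is a hypothetical stand-in, not the actual structure returned by explainability_feature_importance().

```python
# Hypothetical attribution values for one case; the real return shape of
# explainability_feature_importance() may differ, so adapt as needed.
importance = {
    "age": 0.42,
    "income": -0.31,
    "tenure": 0.07,
}

# Attributions can be negative, so rank by absolute magnitude to find
# the features that influenced the prediction most.
ranked = sorted(importance.items(), key=lambda kv: abs(kv[1]), reverse=True)
print([name for name, _ in ranked])  # most influential first
```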

Raw Data and Engineered Data

AryaXAI provides a convenient way to segregate raw data from engineered data, making it easy to switch between viewing the original raw data and the processed engineered data, since your model is trained on the engineered data.

To fetch the raw data of all features for a particular case via the SDK, use the following function:


# raw data
case_info.explainability_raw_data() 

Uploading feature mapping:

To upload a feature mapping file in JSON format, use the upload_feature_mapping function. This function allows you to map features for your model directly from a JSON file.


project.upload_feature_mapping("/content/feature.json")
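The exact schema expected by upload_feature_mapping is not shown here, so the file below is an illustrative placeholder: a hypothetical mapping from raw feature names to the engineered feature names the model was trained on, written out with Python's standard json module.

```python
import json

# Hypothetical feature-mapping content; the field names and structure are
# assumptions for illustration, not the SDK's documented schema.
feature_mapping = {
    "annual_income": ["income_log", "income_bucket"],
    "signup_date": ["account_age_days"],
}

# Write the mapping to a JSON file for upload.
with open("/tmp/feature.json", "w") as f:
    json.dump(feature_mapping, f, indent=2)

# Then upload it:
# project.upload_feature_mapping("/tmp/feature.json")
```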

Observations

Observations executed for a case:


case_info.explainability_observations()

Policies executed for a case:


case_info.explainability_policies()

Running a new case


case_info = project.case_info(unique_identifier='', tag='')

Retrieve logs

You can set up ML Explainability and rerun inferencing from time to time. All inferencing logs are available here.

Get all cases that have already been viewed:


project.case_logs(page=1)

# You can retrieve the inferencing output for any previously viewed case. No credits are used when retrieving logs.
case = project.get_viewed_case(case_id="")

To fetch explainability for a case, pass the unique identifier and the tag. This uses the currently active model.

Caution: if you receive a 'Failed to generate explainability' notification, rerun the model training and view the case again.

case_info = project.case_info('unique_identifier', 'tag')

# Case Decision
case_info.explainability_decision() 
NOTE: This may take some time, as all predictions are made in real time.

Help on the case_info method:


help(project.case_info) 
NOTE: If you change the active model, the prediction and explainability will change as well.

To fetch the Case Prediction Path:


case_info.explainability_prediction_path()

Similar cases as explanations

'Similar cases,' also known as reference explanations, parallels the concept of citing references for a prediction. This method extracts the most similar cases from the training data compared to the 'prediction case.' The similarity algorithm employed depends on the plan. In the AryaXAI Developer version, the 'prediction probability' similarity method is used, while the AryaXAI Enterprise version offers additional methods such as 'Feature Importance Similarity' and 'Data Similarity.'
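The 'prediction probability' similarity method used in the Developer plan can be pictured as ranking training cases by how close their predicted probability is to that of the prediction case. The sketch below is an illustration of that idea under assumed probability values, not the SDK's actual implementation.

```python
# Assumed predicted probability for the prediction case.
case_probability = 0.81

# Assumed predicted probabilities for some training cases.
training_probabilities = {
    "case_101": 0.79,
    "case_102": 0.35,
    "case_103": 0.88,
}

# Rank training cases by how close their probability is to the
# prediction case's probability: smallest gap = most similar.
similar = sorted(
    training_probabilities.items(),
    key=lambda kv: abs(kv[1] - case_probability),
)
print([case_id for case_id, _ in similar])  # closest first
```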

To list all similar cases with respect to a particular case, use the following function:


# List of similar cases with respect to a case
case_info.similar_cases()

Get data of the similar cases via the SDK:


# Data of Similar Cases
case_info.explainability_similar_cases()
