Explainable AI workshop


January 25, 2022

From the lab to production: explainable, reliable, trustworthy AI

About the workshop:

AI in production requires explainability and accountability, and explainable AI (XAI) is attracting significant attention as a result. Methods such as LIME, SHAP, LOCO, and Integrated Gradients (IG) are widely adopted, yet some of them face criticism for producing vague or approximate explanations, for being compute-intensive, or for being complex to apply.
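For context, here is what one of these widely adopted post-hoc methods looks like in practice: SHAP attributions for a toy scikit-learn classifier. The model and dataset are illustrative choices for this sketch, not part of the workshop material.

```python
# A minimal sketch of a post-hoc XAI method (SHAP) on a toy classifier.
# The model and dataset are illustrative, not from the workshop itself.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
# Per-feature attributions for the first 10 samples (one set per class).
shap_values = explainer.shap_values(X[:10])
```

Each attribution estimates how much a feature pushed a single prediction away from the baseline; methods differ in how faithfully such scores reflect the underlying model.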

Arya.ai built ‘AryaXAI’, a new framework that makes responsible AI part of the design process. It introduces a new patent-pending approach called ‘Back-trace’ for explaining deep learning systems, which generates true-to-model explanations, both local and global, by assessing the model directly.

Our workshop on ‘Explainable AI’ covers best practices in XAI, common challenges with current XAI approaches, how the AryaXAI framework works, a hands-on session implementing the AryaXAI API on an image classification use case, and how to validate the explanations AryaXAI produces.

Topics discussed:

  • About Arya.ai
  • Introducing Explainable AI
  • XAI: current methods for deep learning, with brief comparisons
  • Back-trace: Arya.ai’s patent-pending framework that addresses XAI in a simple, interpretable, and true-to-model manner, with details of the algorithm and comparisons
  • Implementation of the AryaXAI API on image classification (a rough integration sketch follows this list)
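
To give a feel for the hands-on portion, the sketch below posts an image to an explanation service and reads back a prediction with an attribution map. The endpoint URL, request fields, and response schema are all hypothetical placeholders for illustration; they are not AryaXAI’s actual API, which the workshop covers.

```python
# Hypothetical sketch of calling an explanation API for image classification.
# The endpoint, request fields, and response schema are illustrative
# assumptions, NOT the actual AryaXAI API.
import requests

API_URL = "https://api.example.com/v1/explain"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                        # placeholder credential

with open("sample.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        data={"model_id": "image-classifier-v1"},  # hypothetical field
        timeout=30,
    )
response.raise_for_status()
result = response.json()

# Assumed response shape: a predicted label plus a per-pixel attribution map.
print(result["prediction"])
print(result["attribution_map"])
```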

