Synthetic & Generative AI

Hallucination

Model output that is not grounded in the input data but is instead generated imaginatively or erroneously

A 'hallucination' is model output that is not grounded in the input data but is instead generated imaginatively or erroneously. The phenomenon occurs when the model perceives patterns or objects that are not present or implied in the input data, leading to nonsensical or inaccurate outputs - the model 'hallucinates' the response.

These hallucinations can occur due to various factors, such as inaccurate or biased training data, overfitting, highly complex tasks, ambiguous or unclear input, or high model complexity. To prevent model hallucinations, users can combine diverse and representative training data, careful model architecture design, effective regularization techniques, and ongoing evaluation and fine-tuning.
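As a minimal illustration of ongoing evaluation, the Python sketch below defines a hypothetical helper, `flag_potential_hallucination`, that flags generated sentences whose content words are poorly supported by a given source text. It is only a crude lexical-overlap proxy for groundedness, not a production hallucination detector or any specific vendor's method; real systems typically rely on retrieval grounding, entailment models, or human review.

```python
import re


def flag_potential_hallucination(source_text: str, generated_text: str,
                                 min_overlap: float = 0.6) -> dict:
    """Flag generated sentences whose content words rarely appear in the
    source text. A rough groundedness check, useful only as a first filter."""
    source_words = set(re.findall(r"[a-z0-9]+", source_text.lower()))
    report = []
    # Split the generated answer into sentences and score each one.
    for sentence in re.split(r"(?<=[.!?])\s+", generated_text.strip()):
        content = [w for w in re.findall(r"[a-z0-9]+", sentence.lower()) if len(w) > 3]
        if not content:
            continue
        overlap = sum(w in source_words for w in content) / len(content)
        report.append({
            "sentence": sentence,
            "support": round(overlap, 2),
            "suspect": overlap < min_overlap,  # low overlap -> possibly ungrounded
        })
    return {"sentences": report,
            "any_suspect": any(s["suspect"] for s in report)}


if __name__ == "__main__":
    source = "The model was trained on 10,000 labelled examples collected in 2021."
    answer = ("The model was trained on 10,000 examples. "
              "It achieves 99% accuracy on all benchmarks.")
    print(flag_potential_hallucination(source, answer))
```

In this example the second sentence introduces a claim ("99% accuracy on all benchmarks") that has no support in the source, so it is marked as suspect; a check like this can be run over evaluation sets to surface outputs that need closer inspection.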

