Hallucination
Model output that is not grounded in the input data but is instead generated imaginatively or erroneously
A 'hallucination' is a model output that is not grounded in the input data but is instead generated imaginatively or erroneously. The phenomenon involves the model perceiving patterns or objects that are not present or suggested in the input data, leading to nonsensical or inaccurate outputs - the model 'hallucinates' the response.
Hallucinations can occur due to various factors, such as inaccurate or biased training data, overfitting, highly complex tasks, ambiguous or unclear input, or high model complexity. To reduce hallucinations, users can combine diverse, representative training data, careful model architecture design, effective regularization techniques, and ongoing evaluation and fine-tuning.
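As a minimal sketch of what "ongoing evaluation" can look like in practice, the example below flags content words in a model's output that never appear in the source text. The helper name ungrounded_terms and the lexical-overlap approach are illustrative assumptions, not a standard API; real hallucination detectors typically rely on entailment models or retrieval-based fact checking rather than word overlap.

```python
import re

def ungrounded_terms(source: str, output: str) -> set[str]:
    """Return content words in `output` that never appear in `source`.

    A naive lexical-overlap check: terms missing from the source are a
    cheap first signal that the output may not be grounded in the input.
    """
    def tokenize(text: str) -> set[str]:
        return set(re.findall(r"[a-z]+", text.lower()))

    stopwords = {"the", "a", "an", "is", "are", "was", "were", "of",
                 "to", "in", "and", "or", "that", "it", "on", "for"}
    source_vocab = tokenize(source) - stopwords
    return (tokenize(output) - stopwords) - source_vocab


if __name__ == "__main__":
    source = "The report covers Q3 revenue of 2 million dollars."
    output = "The report says Q3 revenue was 5 billion euros."
    # Prints the terms with no support in the source, e.g. 'billion', 'euros'
    print(ungrounded_terms(source, output))
```

A check like this only catches surface-level mismatches; paraphrases that are still unsupported by the input would pass, which is why evaluation pipelines usually layer stronger semantic checks on top.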