AI hallucination occurs when an AI tool confidently presents factually incorrect information as if it were true. To mitigate hallucination, users should critically evaluate AI outputs with human judgement, double-check the accuracy of AI-generated content, use retrieval-based tools, and write clear, specific prompts. Setting a low temperature (a parameter that controls the randomness of responses) also produces more focused, deterministic outputs, as the sketch below illustrates.
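
As a minimal sketch of the temperature point, the snippet below sends a request through the OpenAI Python SDK with a low `temperature` value; the model name and prompt are placeholder assumptions, and most LLM APIs expose an equivalent parameter.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A low temperature (close to 0) makes sampling nearly deterministic,
# which tends to reduce off-topic or invented detail in the response.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any available chat model
    messages=[
        # A clear, specific prompt also helps curb hallucination.
        {"role": "user",
         "content": "List the planets of the solar system, nothing else."}
    ],
    temperature=0.2,  # range 0.0-2.0; lower values give more focused output
)
print(response.choices[0].message.content)
```

Note that a low temperature narrows the model's word choices but does not verify facts, which is why it belongs alongside retrieval-based tools and human review rather than replacing them.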