AI models may hallucinate less than humans: Anthropic CEO

Anthropic CEO Dario Amodei has claimed that AI models may now hallucinate, or generate incorrect information, less frequently than humans on factual tasks, according to reports. "If you define hallucination as confidently saying something that's wrong, humans do that a lot," Amodei said. He added that Anthropic's Claude models provide more accurate answers than humans on verifiable question formats.