Are bad incentives to blame for AI hallucinations?
Update: 2025-09-08
Description
A new research paper from OpenAI asks why large language models like GPT-5, and chatbots like ChatGPT, still hallucinate, and whether anything can be done to reduce those hallucinations. In a blog post summarizing the paper, OpenAI defines hallucinations as plausible but false statements generated by language models, and it acknowledges that despite improvements, hallucinations remain a fundamental challenge for all large language models, one that will never be completely eliminated. As for the "bad incentives" of the episode's title, the paper argues that common evaluation methods are partly to blame: benchmarks that score only accuracy reward models for guessing confidently rather than admitting uncertainty.