TechCrunch Industry News: Are bad incentives to blame for AI hallucinations?

Update: 2025-09-08
Description

A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate, and whether anything can be done to reduce those hallucinations. In a blog post summarizing the paper, OpenAI defines hallucinations as "plausible but false statements generated by language models." It acknowledges that, despite improvements, hallucinations remain a fundamental challenge for all large language models, and one that will never be completely eliminated.
