When The Machine Gets It Wrong: Hallucinations

Updated: 2025-09-17

Description

Welcome to Hippo Education's Practicing with AI, conversations about medicine, AI, and the people navigating both. This month, Rob and Vicky tackle a common pitfall of AI: hallucinations. What are hallucinations (and is that even the right term)? Why do these errors happen? And what can individuals do to reduce the hallucination rate? Plus, Rob and Vicky dive into OpenAI's most recent model release, GPT-5, and analyze its performance against older GPT models.

For those who want to dive deeper into OpenAI's HealthBench benchmark:

  1. OpenAI's white paper on HealthBench outlines the benchmark's components and reports performance data for older AI models. https://openai.com/index/healthbench/
  2. Drs. Liu and Liu performed a systematic analysis of HealthBench, outlining its strengths and limitations, in a paper published in the Journal of Medical Systems in July 2025.

Visit speakpipe.com/hippoed to leave a voice message about anything related to AI and medicine: your excitement, your concerns, your own experiences with AI… anything. Your voice might even make it onto a future episode.
