“AI Craziness: Additional Suicide Lawsuits and The Fate of GPT-4o” by Zvi
Description
GPT-4o has been a unique problem for a while, and has been at the center of the bulk of mental health incidents involving LLMs that didn’t involve character chatbots. I’ve previously covered related issues in AI Craziness Mitigation Efforts, AI Craziness Notes, GPT-4o Responds to Negative Feedback, GPT-4o Sycophancy Post Mortem and GPT-4o Is An Absurd Sycophant. Discussions of suicides linked to AI previously appeared in AI #87, AI #134, AI #131 Part 1 and AI #122.
The Latest Cases Look Quite Bad For OpenAI
I’ve consistently said that I don’t think it’s necessary or even clearly good for LLMs to always adhere to standard ‘best practices’ defensive behaviors, especially reporting on the user, when dealing with depression, self-harm, and suicidality. Nor do I think we should hold them to the standard of ‘do all of the maximally useful things.’
Near: While the LLM response is indeed really bad/reckless, it’s worth keeping in mind that the baseline suicide rate just in the US is ~50,000 people a year; if anything I am surprised there aren’t many more cases of this publicly by now.
I do think it’s fair to insist they never actively encourage suicidal behaviors.
[...]
---
Outline:
(00:47 ) The Latest Cases Look Quite Bad For OpenAI
(02:34 ) Routing Sensitive Messages Is A Dominated Defensive Strategy
(03:28 ) Some 4o Users Get Rather Attached To The Model
(05:04 ) A Theory Of How All This Works
(07:38 ) Maybe This Is Net Good In Spite Of Everything?
(11:10 ) Could One Make A 'Good 4o'?
---
First published:
November 14th, 2025
---
Narrated by TYPE III AUDIO.
---