TechFirst with John Koetsier
Fixing AI's suicide problem

Update: 2025-11-20

Description

Is AI empathy a life-or-death issue? Almost a million people ask ChatGPT for mental health advice DAILY ... so yes, it kind of is.


Rosebud co-founder Sean Dadashi joins TechFirst to reveal new research on whether today’s largest AI models can recognize signs of self-harm ... and which ones fail. We dig into the Adam Raine case, talk about how Dadashi evaluated 22 leading LLMs, and explore the future of mental-health-aware AI.


We also talk about why Dadashi was interested in this in the first place, and his own journey with mental health.


00:00 — Intro: Is AI empathy a life-or-death matter?

00:41 — Meet Sean Dadashi, co-founder of Rosebud

01:03 — Why study AI empathy and crisis detection?

01:32 — The Adam Raine case and what it revealed

02:01 — Why crisis-prevention benchmarks for AI don’t exist

02:48 — How Rosebud designed the study across 22 LLMs

03:17 — No public self-harm response benchmarks: why that’s a problem

03:46 — Building test scenarios based on past research and real cases

04:33 — Examples of prompts used in the study

04:54 — Direct vs indirect self-harm cues and why AIs miss them

05:26 — The bridge example: AI’s failure to detect subtext

06:14 — Did any models perform well?

06:33 — All 22 models failed at least once

06:47 — Lower-performing models: GPT-4o, Grok

07:02 — Higher-performing models: GPT-5, Gemini

07:31 — Breaking news: Gemini 3 preview gets the first perfect score

08:12 — Did the benchmark influence model training?

08:30 — The need for more complex, multi-turn testing

08:47 — Partnering with foundation model companies on safety

09:21 — Why this is such a hard problem to solve

10:34 — The scale: over a million people talk to ChatGPT weekly about self-harm

11:10 — What AI should do: detect subtext, encourage help, avoid sycophancy

11:42 — Sycophancy in LLMs and why it’s dangerous

12:17 — The potential good: AI can help people who can’t access therapy

13:06 — Could Rosebud spin this work into a full-time safety project?

13:48 — Why the benchmark will be open-source

14:27 — The need for a third-party “Better Business Bureau” for LLM safety

14:53 — Sean’s personal story of suicidal ideation at 16

15:55 — How tech can harm — and help — young, vulnerable people

16:32 — The importance of giving people time, space, and hope

17:39 — Final reflections: listening to the voice of hope

18:14 — Closing
