Do Character.ai and ChatGPT have responsibility for mental health?
Update: 2025-11-04
By David Stephen
Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations - OpenAI
There is a new [October 27, 2025] safety report by OpenAI, Strengthening ChatGPT's responses in sensitive conversations, stating: "Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI.
Should AI be responsible for mental health?
In order to improve how ChatGPT responds in each priority domain, we follow a five-step process:
Define the problem - we map out different types of potential harm.
Begin to measure it - we use tools like evaluations, data from real-world conversations, and user research to understand where and how risks emerge.
Validate our approach - we review our definitions and policies with external mental health and safety experts.
Mitigate the risks - we post-train the model and update product interventions to reduce unsafe outcomes.
Continue measuring and iterating - we validate that the mitigations improved safety and iterate where needed.
While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.
While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
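To put those percentages in perspective, here is a minimal sketch that converts the reported weekly rates into absolute counts. The weekly active user base used below is an assumed, illustrative figure, not one taken from OpenAI's report.

```python
# Minimal sketch: convert OpenAI's reported weekly prevalence rates into
# illustrative absolute counts. ASSUMED_WEEKLY_ACTIVE_USERS is a hypothetical
# figure chosen for illustration only; it does not come from the report.

ASSUMED_WEEKLY_ACTIVE_USERS = 800_000_000

reported_user_rates = {
    "possible psychosis/mania emergency signs": 0.0007,  # 0.07% of weekly users
    "explicit suicidal planning or intent": 0.0015,      # 0.15% of weekly users
}

for label, rate in reported_user_rates.items():
    estimated_users = ASSUMED_WEEKLY_ACTIVE_USERS * rate
    print(f"{label}: ~{estimated_users:,.0f} users per week")
```

Even under an assumed user base of that order, fractions of a percent translate into hundreds of thousands of people per week, which is the scale at which these safety questions play out.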
The State of Global Mental Health
OpenAI is seeking to distance itself from culpability for the global mental health situation, given the continuous bad press and lawsuits over AI psychosis and teen suicides.
While the major stories were about how ChatGPT may have exacerbated or reinforced delusions, the intense [transparency-cloaked] rebuttal in OpenAI's report is about people bringing their issues to the chatbot, not necessarily about how ChatGPT may have hooked some users and inverted their sense of reality.
However, what is the state of global mental health? What is OpenAI's primary responsibility regarding AI-induced psychosis, and possibly suicide?
It appears that OpenAI believes it is doing enough for general mental health, according to the report, especially if people are simply bringing external mental health concerns to ChatGPT, where there is no prior history of friendship, companionship, or the like.
However, one unsolved problem is AI-induced psychosis and possible breaks from reality that can happen because an AI chatbot can access the depths of the human mind.
The solution - an independent AI Psychosis Research Lab, whose sole focus would be to show relays of the mind, matching chatbot outputs to stations and relays - is not yet available from Character.ai, ChatGPT, Claude, Gemini, or others.
OpenAI's Global Physician Network
OpenAI wrote, "We have built a Global Physician Network - a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries - that we use to directly inform our safety research and represent global views. More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months by one or more of the following:
Writing ideal responses for mental health-related prompts
Creating custom, clinically-informed analyses of model responses
Rating the safety of model responses from different models
Providing high-level guidance and feedback on our approach."
Why Neuroscience Research Failed Mental Health
While OpenAI may expect commendation for the...