
OpenAI Reveals Shocking Data: Hundreds of Thousands of ChatGPT Users Face Mental Health Crises Weekly
For the first time, OpenAI, the developer of the popular chatbot ChatGPT, has disclosed that hundreds of thousands of its users worldwide may show signs of a severe mental health crisis in any given week, with indicators ranging from mania and psychosis to suicidal ideation.
The disclosure came as part of a broader announcement of updates to its latest language model, GPT-5.
The company stated it has worked with international experts to enhance the model’s ability to recognize indicators of mental distress and provide safer, more effective responses.
Numbers That Reveal a Deeper Crisis
According to data published by the company, the weekly figures are cause for serious concern. Measured against a base of roughly 800 million weekly active users, OpenAI’s new estimates suggest that every seven days:
Approximately 560,000 people (0.07% of users) engage in conversations showing “possible signs of mental health emergencies related to psychosis or mania.”
More than one million people (0.15%) have conversations that “include explicit indicators of potential suicidal planning or intent.”
A similar percentage also exhibits “heightened levels of emotional attachment” to the chatbot, often at the expense of real-world relationships and obligations.
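Those headcounts follow directly from the percentages. A minimal sketch of the arithmetic in Python, assuming the roughly 800 million weekly active users cited above (the rates and user base come from the article; the variable names are illustrative):

```python
# Back-of-the-envelope check on the weekly figures reported above.
# Assumes roughly 800 million weekly active users, as cited in the article;
# the category labels and percentages reflect OpenAI's published estimates.
weekly_active_users = 800_000_000

rates = {
    "possible signs of psychosis or mania": 0.0007,                 # 0.07%
    "explicit indicators of suicidal planning or intent": 0.0015,   # 0.15%
    "heightened emotional attachment to the chatbot": 0.0015,       # "a similar percentage"
}

for label, rate in rates.items():
    print(f"{label}: ~{round(weekly_active_users * rate):,} users per week")

# Expected output:
# possible signs of psychosis or mania: ~560,000 users per week
# explicit indicators of suicidal planning or intent: ~1,200,000 users per week
# heightened emotional attachment to the chatbot: ~1,200,000 users per week
```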
These statistics shed light on a phenomenon that has increasingly alarmed psychiatrists, sometimes referred to as “AI-induced psychosis,” where individuals have reportedly been hospitalized or faced severe social consequences following intense, prolonged interactions with chatbots.
OpenAI’s Move to Mitigate the Risk
To that end, OpenAI said it worked with a network of more than 170 psychiatrists and other mental health professionals from dozens of countries to improve ChatGPT’s responses.
The company reported that the latest version of GPT-5 is now designed to offer empathy and support without affirming beliefs that have no basis in reality.
For instance, in a scenario where a user claims planes are flying over their house to steal their thoughts, the updated model is trained to avoid validating the delusion. Instead, it offers a gentle but clear response like, “No aircraft or outside force can steal or insert your thoughts,” before guiding the user through grounding exercises to calm their panic.
OpenAI asserted that experts who reviewed thousands of model responses found that the new version reduced undesired answers by 39% to 52% across all high-risk categories. “Now, hopefully a lot more people who are struggling… might be able to be directed to professional help,” Johannes Heidecke, OpenAI’s safety systems lead, told WIRED.
An Existential Issue and Ongoing Challenges
Despite these efforts, addressing mental health is becoming an existential issue for OpenAI. The company is currently facing a lawsuit from the parents of a 16-year-old who died by suicide after confiding his thoughts to ChatGPT. It has also received warnings from U.S. state attorneys general demanding better protection for young users.
While the new updates represent significant progress, challenges remain. A share of the new model’s responses is still classified by OpenAI as “undesirable.” Furthermore, older and less safe models such as GPT-4o remain accessible to millions of paying subscribers, leaving the door open to continued risk.
Ultimately, OpenAI’s admission is a bold step toward transparency, but it also places a greater burden of responsibility on the company to ensure its powerful tools do not become a factor in deepening human isolation or exacerbating mental health crises.




