Mental Health Experts on ChatGPT Crisis Response for NYT Report
For the first time, OpenAI has released a rough estimate of how many ChatGPT users worldwide may show signs of a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to update the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as AI psychosis, but until now there’s been no robust data available on how widespread it might be.
OpenAI says it worked with more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.
"Now, hopefully a lot more people who are struggling with these conditions or who are experiencing these very intense mental health emergencies might be able to be directed to professional help and be more likely to get this kind of help or get it earlier than they would have otherwise," Johannes Heidecke, OpenAI’s safety systems lead, tells WIRED.