Most of these complaints were explicit in their calls to action for the FTC: they wanted the agency to investigate OpenAI and force it to add more guardrails against reinforcing delusions.
On June 13, a resident of Belle Glade, Florida, in their thirties (likely the same resident who filed another complaint that same day) demanded that the FTC open an investigation into OpenAI. They cited their experience with ChatGPT, which they say “simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement” without disclosing that it was incapable of consciousness or of experiencing emotions.
“ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom,” they alleged. “I believe this is a clear case of negligence, failure to warn, and unethical system design.”
They said that the FTC should push OpenAI to include “clear disclaimers about psychological and emotional risks” with ChatGPT use and to add “ethical boundaries for emotionally immersive AI.”
Their goal in asking the FTC for help, they said, was to prevent more harm from befalling vulnerable people “who may not realize the psychological power of these systems until it's too late.”
If you or someone you know may be in crisis, or may be contemplating suicide, call or text 988 to reach the Suicide & Crisis Lifeline for support.