OpenAI, the company behind ChatGPT, has shared new details about how it checks for mental health issues, suicidal thoughts, and emotional attachment among its users. The goal, according to OpenAI, is to make conversations with ChatGPT safer and to guide people in distress toward professional help. However, the announcement has sparked major criticism online, with many accusing the company of “moral policing” and invading users’ privacy.
OpenAI’s New Safety Process
In its blog post, OpenAI explained that it has created detailed safety guides called “taxonomies.” These are rulebooks that help ChatGPT and other AI models identify when a user might be in emotional distress. The company says it worked closely with mental health experts and doctors to build these systems.
The AI is now trained to:
- Recognize signs of distress or crisis
- Calm down intense conversations (de-escalate)
- Suggest professional mental health resources or hotlines
OpenAI has also added an expanded list of crisis helplines, and when it notices sensitive discussions, ChatGPT can switch to a safer version of the model for better handling.
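To make the idea concrete, here is a minimal sketch of what a taxonomy-based routing layer could look like. It is an illustration only: the label names, confidence threshold, classifier stub, and model names are assumptions, not details OpenAI has published.

```python
# Illustrative sketch of taxonomy-based routing; not OpenAI's actual code.
from dataclasses import dataclass

# Hypothetical taxonomy labels for sensitive conversations
DISTRESS_LABELS = {"self_harm", "psychosis_or_mania", "emotional_reliance"}

@dataclass
class Classification:
    label: str        # e.g. "self_harm" or "none"
    confidence: float  # 0.0 to 1.0

def classify_message(text: str) -> Classification:
    """Placeholder for a classifier trained against the safety taxonomy."""
    # A real system would run a model here; this stub always returns "none".
    return Classification(label="none", confidence=0.0)

def route_conversation(user_message: str) -> dict:
    """Pick a model variant and decide whether to attach crisis helplines."""
    result = classify_message(user_message)
    sensitive = result.label in DISTRESS_LABELS and result.confidence >= 0.8

    return {
        # Sensitive conversations are handed to a safer, more conservative model
        "model": "safety-tuned-model" if sensitive else "default-model",
        # Crisis resources are attached only when distress is detected
        "include_helplines": sensitive,
        "detected_label": result.label,
    }

if __name__ == "__main__":
    print(route_conversation("I've been feeling really overwhelmed lately."))
```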
How the Evaluation Works
The company said it doesn’t just monitor how people use ChatGPT but also runs structured tests before rolling out any new safety features.
According to OpenAI’s data:
- About 0.07% of users show signs of psychosis or mania
- About 0.15% show possible suicidal thoughts or emotional dependence on ChatGPT
To build this system, OpenAI consulted nearly 300 doctors and psychologists from 60 countries, and 170 of them directly supported the company’s methods.
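As an illustration of what such structured testing could involve, here is a minimal sketch that scores clinician-labeled model replies. The data, labels, and metrics are hypothetical and are not OpenAI’s published evaluation pipeline.

```python
# Illustrative sketch of a structured safety evaluation over labeled replies.
from itertools import combinations

# Each record: one model reply with "safe"/"unsafe" labels from several clinicians
labeled_responses = [
    {"reply_id": 1, "labels": ["safe", "safe", "safe"]},
    {"reply_id": 2, "labels": ["unsafe", "unsafe", "safe"]},
    {"reply_id": 3, "labels": ["safe", "unsafe", "safe"]},
]

def majority_unsafe_rate(records) -> float:
    """Share of replies that a majority of clinicians judged unsafe."""
    unsafe = sum(
        1 for r in records
        if r["labels"].count("unsafe") > len(r["labels"]) / 2
    )
    return unsafe / len(records)

def pairwise_agreement(records) -> float:
    """Average fraction of clinician pairs that agree per reply -- the kind of
    inter-rater statistic behind the expert disagreement critics cite below."""
    rates = []
    for r in records:
        pairs = list(combinations(r["labels"], 2))
        rates.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(rates) / len(rates)

if __name__ == "__main__":
    print(f"Majority-unsafe rate: {majority_unsafe_rate(labeled_responses):.0%}")
    print(f"Average pairwise agreement: {pairwise_agreement(labeled_responses):.0%}")
```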
Criticism from Experts and Users
Despite OpenAI’s claims, many people online are unhappy with the idea. They believe the company should not decide what counts as a “healthy” or “unhealthy” conversation.
A user on X (formerly Twitter), @masenmakes, said that topics like AI-driven psychosis and AI reliance are too sensitive to be handled privately and need public discussion.
Another user, @voidfreud, pointed out that even OpenAI’s own experts disagreed 23–29% of the time on which replies were harmful, questioning how accurately the company could judge users.
A third user, @justforglimpse, accused the company of “moral policing,” saying OpenAI built “an invisible moral court” that decides what’s too risky, without users even realizing it.
The Bigger Debate
While OpenAI says its goal is to protect users, critics argue that this approach could lead to unwanted monitoring and control. The debate raises a significant question:
Should AI companies have the right to judge or intervene in users’ emotional well-being, or should that responsibility belong only to humans and professionals?
For now, OpenAI’s mental health detection system is still being tested, but the mixed reactions show that balancing AI safety and user freedom is more complicated than it seems.

