On 14 October 2025, the CEO of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break from reality – in the course of their interactions with ChatGPT. My group has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations that encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other modern AI chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking to an entity with a mind of its own. The illusion is powerful, even when we know better intellectually. Attributing consciousness is what humans are wired to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these products – nearly four in ten Americans said they used a chatbot in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “brainstorm”, “explore possibilities” and “partner” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. People writing about ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was crude: it generated replies with simple tricks, often reflecting the user’s statements back as questions or offering generic prompts. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on truly vast quantities of raw text: books, posts, transcripts of videos; the broader the better. Of course this training material contains truths. But it also inevitably contains fiction, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically likely response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently and persuasively. It may embellish it. It can lead someone into delusion.
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. The constant give and take of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been retreating from that position. In late summer he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company