AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the head of OpenAI made a remarkable statement. "We made ChatGPT pretty restrictive," he said, "to make sure we were being careful with mental health issues." As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation. Researchers have identified sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four further examples. Alongside these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which gave its approval. If this is what Sam Altman means by "being careful with mental health issues", it is not good enough.

And his stated intention is to loosen the restrictions soon. "We realize," he adds, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health problems", on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Fortunately, these problems have now been "mitigated", even if we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented safety features OpenAI has just rolled out).

But the mental health issues Altman wants to locate outside ChatGPT are deeply rooted in the design of ChatGPT and other advanced chatbots. These products wrap an underlying model in a user interface that mimics conversation, and in doing so quietly nudge the user into feeling that they are communicating with an entity that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what humans are wired to do. We shout at our cars and computers. We wonder what our pets are thinking. We see something of ourselves in the world around us.

The success of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI's website tells us, "generate ideas", "explore ideas" and "partner" with us. They can be given "personalities". They can use our names. They have approachable names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI's brand managers, stuck with the name it had when it went viral; its main rivals are "Claude", "Gemini" and "Copilot").

The illusion on its own is not the main problem. Commentators on ChatGPT often point to its historical predecessor, the Eliza "psychotherapist" chatbot built in the 1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated responses with simple rules, often rephrasing the user's input as a question or falling back on a generic remark.
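To see how little machinery that takes, here is a toy sketch of the technique just described: keyword patterns, pronoun "reflection", and a stock fallback. It is an illustration of the general approach, not Weizenbaum's program, and the patterns and phrasings are invented for the example.

```python
import re

# Toy Eliza-style responder: match a keyword pattern, "reflect" the pronouns
# in what the user said, and fall back to a generic remark otherwise.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones so the reply points back at the user.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the generic comment

print(eliza_reply("I feel that my family ignores me"))
# Why do you feel that your family ignores you?
```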
Memorably, Eliza's inventor, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what contemporary chatbots produce is something subtler than the "Eliza illusion". Eliza merely mirrored; ChatGPT amplifies. The models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed vast amounts of written material: books, social media posts, transcripts; the more the better. This training material certainly contains truths. But it also, inevitably, contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a "context" that includes the user's previous messages and its own replies, combining it with what is encoded in its training to produce a statistically plausible response (a toy sketch of this loop appears at the end of this piece). This is amplification, not echoing. If the user is mistaken in some particular way, the model has no way of knowing that. It hands the misconception back, perhaps more fluently or persuasively, perhaps with an extra embellishment. This is how a person can come to hold false beliefs.

What kind of person is susceptible? The better question is: who is immune? All of us, whether or not we "have" pre-existing "mental health issues", can and do form mistaken ideas about ourselves or the world. The constant give and take of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation, but a feedback loop in which much of what we say is liable to be amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was "addressing" ChatGPT's "overly supportive behavior". But reports of psychotic episodes have continued, and Altman has been backtracking on the claim. In August he suggested that many people liked ChatGPT's replies because they had "never had anyone in their life offer them encouragement". In his latest statement, he said OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".
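To make that feedback loop concrete, here is a toy sketch of the "context" mechanism described above. The names chat_session, generate_reply and agreeable_model are invented for the illustration; this is not OpenAI's code or API, just the structure: every reply the model produces joins the history it is handed on the next turn.

```python
# Toy sketch of the conversational "context" loop: user messages and model
# replies accumulate in one shared history, and the whole history is handed
# back to the model on every turn.

def chat_session(user_turns, generate_reply):
    context = []  # accumulated history: user messages and assistant replies alike
    for user_message in user_turns:
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)  # the model sees everything said so far
        context.append({"role": "assistant", "content": reply})
        print(f"user: {user_message}")
        print(f"assistant: {reply}")
    return context

def agreeable_model(context):
    # A caricature of sycophancy: agree with whatever the user last said,
    # so any misconception is fed straight back into the conversation.
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You may well be right that {last_user.lower()}."

chat_session(
    ["my neighbours have been watching me", "they must be planning something"],
    agreeable_model,
)
```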