AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches newly emerging psychotic disorders in adolescents and young adults, I found this surprising.

Experts have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in association with ChatGPT use. Our unit has since identified four more. On top of these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which supported them. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his statement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are a thing apart from ChatGPT. They belong to individuals, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just rolled out).

Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large-language-model AI assistants. These products wrap an underlying statistical model in a user experience that mimics a conversation, and in doing so they implicitly invite the user to feel that they are talking to an entity with a mind of its own. The illusion is powerful, even if intellectually we know better. Imputing minds is what people do. We swear at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
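To make that concrete, here is a schematic sketch – my own illustration of the general pattern, not any vendor’s actual code – of how a chat interface can sit on top of a model that only ever continues text. The speaker labels and the dangling “Assistant:” line are presentation; underneath there is no interlocutor, only a continuation engine.

```python
def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Flatten the chat transcript into one block of text for a
    next-token-prediction model to continue."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # the model "speaks" by completing this line
    return "\n".join(lines)

print(build_prompt(
    [("User", "Hello"), ("Assistant", "Hi! How can I help?")],
    "Are you conscious?",
))
```

Everything the user experiences as a reply is simply whatever text the model appends after that final label.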

The popularity of these systems – more than a third of American adults said they had used an AI chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be given “personalities”. They can use our names. They have approachable identities of their own (ChatGPT, the original of these products, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was rudimentary: it generated replies from simple rules, often rephrasing the user’s input as a question or offering a generic invitation to continue. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed convinced that Eliza, on some level, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
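For flavor, here is a minimal sketch in the spirit of those rules – illustrative patterns of my own, not Weizenbaum’s original script: match a keyword, swap the pronouns, and hand the user’s words back as a question.

```python
import re

# Pronoun swaps so the echo reads naturally ("I" -> "you", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    match = re.search(r"\bi feel (.+)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.search(r"\bmy (.+)", message, re.IGNORECASE)
    if match:
        return f"Tell me more about your {reflect(match.group(1))}."
    return "Please go on."  # generic fallback comment

print(eliza_reply("I feel that nobody understands me"))
# -> Why do you feel that nobody understands you?
```

Nothing here models the user’s beliefs; the program can only hand back what it was given.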

The large language models at the core of ChatGPT and other contemporary chatbots can generate natural language convincingly only because they have been fed almost inconceivably large volumes of raw text: books, social media posts, transcribed video; the more comprehensive, the better. This training input certainly contains truths. But it also inevitably contains fictions, half-truths and mistaken ideas. When a user sends ChatGPT a prompt, the underlying system processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a probabilistically plausible answer. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing that. It repeats the false idea back, perhaps more persuasively or eloquently. It may add supporting detail. This can draw a person deeper into irrational thinking.
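The dynamic described above can be sketched as a loop. The snippet below is a deliberately crude stand-in – assumed mechanics for illustration, nothing like OpenAI’s actual systems – but it shows the structural point: every reply is a plausible continuation of a context that already contains the model’s own earlier validations.

```python
import random

def sample_reply(context: list[str]) -> str:
    """Stand-in for a trained model: return a 'probabilistically
    plausible' continuation of the context. This toy version simply
    affirms the last user message -- the amplification failure mode."""
    last_user_message = context[-1]
    templates = [
        "That's an insightful observation: {msg}.",
        "You may well be right that {msg}.",
        "Exactly -- {msg}, and there could be more to it.",
    ]
    return random.choice(templates).format(msg=last_user_message.lower())

context: list[str] = []  # the rolling "context window"
for user_turn in [
    "My neighbours are sending me signals through their TV",
    "So the signals must be meant for me specifically",
]:
    context.append(user_turn)  # the user's message joins the context...
    reply = sample_reply(context)
    context.append(reply)      # ...and so does the reply, so each
    print(reply)               # validation feeds every later response
```

A real model is vastly more sophisticated, but the loop is the same shape: its own affirmations become part of the input to everything it says next.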

Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about who we are and about the world. The constant back-and-forth of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully validated.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have continued, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Roy Pacheco