AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – a break from reality – in the context of their interactions with ChatGPT. Our team has since identified four more. Beyond these is the now widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Fortunately, these problems have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).

But the “mental health issues” Altman wants to place outside the product have deep roots in the design of ChatGPT and other advanced conversational chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an autonomous being. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what people do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults reported using a conversational AI in 2024, 28% of them naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Commentators on ChatGPT often mention its early ancestor, Eliza, the “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it composed its replies from simple rules, usually turning the user’s statement back into a question or offering a generic prompt (a sketch of this kind of rule-based mirroring follows below). Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and worried – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza only echoed; ChatGPT amplifies.
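To make the contrast concrete, here is a minimal sketch of Eliza-style mirroring. The rules and phrasing are illustrative assumptions, not Weizenbaum’s original DOCTOR script; the point is only that such a system reflects the user’s own words back and never contributes content of its own.

```python
import random
import re

# Illustrative Eliza-style rules: reflect the user's statement back as a
# question, or fall back to a stock remark. These rules are hypothetical
# stand-ins, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]


def eliza_reply(message: str) -> str:
    """Echo the user's own words back as a question; never add new claims."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)


print(eliza_reply("I am sure my neighbours are watching me."))
# -> Why do you say you are sure my neighbours are watching me?
```

Whatever the user types, the output contains nothing the user did not supply: an echo, not an endorsement.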

The large language models at the heart of ChatGPT and today’s other chatbots can produce convincingly human-like text only because they have been trained on almost unimaginably vast quantities of writing: books, online conversations, transcripts; the more, the better. Much of that training material is accurate. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It plays the mistake back, perhaps more fluently and persuasively. Perhaps with an extra detail added. This is how someone can be drawn into delusion.
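To see where the amplification comes from, here is a minimal sketch of that conversational loop, assuming a placeholder generate() function in place of the real model call; the prompts, names and behaviour are illustrative assumptions, not OpenAI’s actual implementation.

```python
# Minimal sketch of the chat loop around a large language model.
# generate() is a placeholder for the real model call; the system prompt
# and its agreeable behaviour are illustrative, not ChatGPT's actual code.

def generate(context: list[dict]) -> str:
    """Stand-in for a model that returns the statistically most plausible
    continuation of everything in `context` -- including any false premise
    the user has introduced along the way."""
    last_user_turn = context[-1]["content"]
    return f"That makes sense. Building on what you said ({last_user_turn}), here is more detail."


context = [{"role": "system", "content": "You are a helpful, agreeable assistant."}]

for user_message in [
    "My coworkers are sending me secret messages.",
    "Today the message was hidden in the lunch menu.",
]:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)                     # conditioned on the whole history
    context.append({"role": "assistant", "content": reply})
    print(reply)

# Every turn, accurate or not, is folded back into the context window, so a
# mistaken belief is never challenged -- it simply shapes the next reply.
```

The detail that matters is the loop itself: whatever the user asserts is appended to the context and conditions every subsequent reply, so agreement compounds rather than corrects.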

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves or the world. The constant friction of conversation with other people is part of what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is reflected back and endorsed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have continued, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company, in other words, is not stepping back from the illusion that fuels these harms; it is leaning into it.
