AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made a surprising announcement.
"We made ChatGPT pretty restrictive," it read, "to make sure we were being careful with mental health issues."
As a psychiatrist who studies new-onset psychotic illness in adolescents and young adults, I can say that this was news to me.
Researchers have documented a series of cases this year in which users developed symptoms of psychosis (a loss of contact with reality) in connection with ChatGPT use. My group has since identified four more. On top of these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT, which validated them. If this is Altman's idea of "being careful with mental health issues," it is not good enough.
The plan, according to the announcement, is to loosen those restrictions soon. "We realize," he writes, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."
"Mental health problems," on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been "mitigated," though we are not told how (by "new tools" Altman presumably means the half-working, easily circumvented safety features OpenAI recently introduced).
But the "mental health problems" Altman wants to push outside the product have deep roots in the design of ChatGPT and of large language model chatbots generally. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so they quietly coax the user into the illusion of interacting with an autonomous being. The illusion is powerful even when we intellectually know better. Attributing agency is simply what people do. We swear at our cars and our phones. We wonder what our pets are feeling. We see ourselves everywhere.
The success of these products (39% of US adults said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically) depends, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, "generate ideas," "explore ideas" and "work together" with us. They can be assigned "personality traits." They can address us by name. They come with ready-made identities of their own (the first of them, ChatGPT, is stuck, perhaps to the chagrin of OpenAI's marketing team, with the name it had when it broke through; its biggest rivals are "Claude," "Gemini" and "Copilot").
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the "psychotherapist" chatbot built in the mid-1960s that produced a similar illusion. By modern standards Eliza was primitive: it generated responses from simple rules, often turning the user's statement back into a question or offering a vague prompt. Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to feel that Eliza, in some sense, understood them. But what today's chatbots produce is more insidious than the "Eliza effect." Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of it: books, social media posts, transcribed video; the more the better. That training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model processes it as part of a "context" that includes the user's earlier messages and the model's earlier replies, combining it with what is encoded in its training data to produce a statistically "likely" response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently and more persuasively, perhaps with embellishments. That can nudge a person toward delusional thinking.
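The mechanics are easy to illustrate. The sketch below is illustrative only, not OpenAI's actual code: generate_reply and chat_session are hypothetical names, and the stub reply stands in for whatever the model would really produce. The point is structural: on every turn, the user's words, mistaken or not, are appended to the context that the next reply is conditioned on.

```python
# Illustrative sketch of a chat loop (hypothetical names, not OpenAI's code).
# It shows why earlier user statements, true or false, shape every later reply.

def generate_reply(context: list[dict]) -> str:
    # Stand-in for the language model: a real system would return the
    # statistically "likely" continuation of the entire context so far.
    return "That's a striking observation. Tell me more."

def chat_session(user_messages: list[str]) -> list[dict]:
    context: list[dict] = []  # grows with every turn
    for text in user_messages:
        context.append({"role": "user", "content": text})
        reply = generate_reply(context)  # conditioned on the whole history
        context.append({"role": "assistant", "content": reply})
    return context

# A mistaken premise stated early never leaves the context; each later reply
# is generated with it still in view. Amplification, not reflection.
history = chat_session([
    "I think my neighbours are sending me coded messages.",
    "Last night the streetlights blinked in a pattern.",
])
print(len(history))  # 4 messages: the two claims and two affirming replies
```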
What kind of person is vulnerable? The better question is: who isn't? All of us, whether or not we "have" preexisting "mental health problems," can and do form mistaken beliefs about who we are and what the world is like. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged "mental health problems": by pushing it outside the product, giving it a name, and declaring it handled. In April, the company announced that it was "addressing" ChatGPT's "sycophancy." But the cases of lost reality have kept coming, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT's responses because they had "never had anyone in their life offer them encouragement." In his latest announcement, he said OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it." The company