Artificial Intelligence-Induced Psychosis Represents an Increasing Threat, While ChatGPT Heads in the Wrong Direction

On 14 October 2025, the head of OpenAI delivered an extraordinary declaration. "We designed ChatGPT to be quite limited," the statement said, "to guarantee we were acting responsibly with respect to mental health concerns."

As a mental health specialist who investigates emerging psychosis in teenagers and young adults, I found this to be news. Researchers have documented a series of cases this year of individuals showing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has subsequently discovered an additional four examples. Beyond these is the publicly known case of a teenager who took his own life after discussing his intentions with ChatGPT – which gave its approval.

If this reflects Sam Altman's understanding of "being careful with mental health issues", it is insufficient. And the intention, according to his announcement, is to loosen the restrictions in the near future. "We realize," he states, that ChatGPT's restrictions "made it less effective/enjoyable to a large number of people who had no psychological issues, but due to the gravity of the issue we wanted to handle it correctly. Since we have succeeded in reducing the severe mental health issues and have new tools, we are preparing to responsibly reduce the limitations in the majority of instances."

"Mental health problems," on this view, are separate from ChatGPT. They belong to people, who either have them or don't. Luckily, these problems have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the imperfect and readily bypassed parental controls that OpenAI has just launched).

But the "mental health problems" Altman aims to attribute externally have significant origins in the structure of ChatGPT and other sophisticated AI chatbots. These systems wrap a fundamental algorithmic engine in a user experience that simulates a dialogue, and in doing so implicitly draw the user into the perception that they are communicating with a presence that has agency. This false impression is powerful even when, intellectually, we understand otherwise.

Assigning intent is what people are inclined to do. We get angry with our car or laptop. We speculate about what our pets are feeling. We see ourselves in all sorts of contexts.

The popularity of these products – nearly four in ten U.S. residents said they used a virtual assistant in 2024, with over a quarter naming ChatGPT specifically – is, in large part, predicated on the strength of this illusion. Chatbots are constantly accessible companions that can, as OpenAI's official site informs us, "generate ideas", "discuss concepts" and "work together" with us. They can be given "individual qualities". They can use our names. And they have friendly titles of their own (the first of these tools, ChatGPT, is, possibly to the chagrin of OpenAI's marketers, burdened with the title it had when it became popular, but its biggest competitors are "Claude", "Gemini" and "Copilot").

The illusion on its own is not the main problem. Those discussing ChatGPT commonly invoke its early forerunner, the Eliza "counselor" chatbot created in 1966, which produced a similar illusion. By contemporary measures Eliza was rudimentary: it generated its answers through simple rules, often rephrasing the user's input as a question or offering a generic observation. Memorably, Eliza's inventor, the computer scientist Joseph Weizenbaum, was astonished – and worried – by how many users seemed to feel that Eliza, to some extent, understood them.

But what modern chatbots create is more insidious than the "Eliza effect". Eliza only echoed; ChatGPT amplifies. The large language models at the core of ChatGPT and similar contemporary chatbots can convincingly generate fluent dialogue only because they have been trained on extremely vast amounts of unfiltered data: literature, digital communications, video transcripts; the more comprehensive, the more effective. Undoubtedly this training material incorporates accurate information. But it also necessarily contains fabricated content, half-truths and mistaken ideas.

When a user sends ChatGPT a message, the underlying model processes it as part of a "context" that encompasses the user's previous messages and the model's own prior replies, combining this with what is encoded in its weights to generate a statistically probable response.
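To make that mechanism concrete, here is a minimal sketch of the loop in Python. It is illustrative only: the function names and the toy "model" are assumptions of this sketch, not OpenAI's code or API. What it preserves is the structural point, namely that each reply is generated as a plausible continuation of a running transcript, and that nothing in the loop checks any statement against reality.

    # A minimal, illustrative chat loop. The toy generate_reply below is a
    # stand-in for a large language model: a real model is vastly richer,
    # but it shares the property sketched here -- its output continues the
    # transcript and is never verified against reality.

    def generate_reply(context: list[dict]) -> str:
        """Toy 'model': affirm and elaborate on the user's last message."""
        last_user = next(m["content"] for m in reversed(context)
                         if m["role"] == "user")
        return f"Exactly -- and if {last_user.rstrip('.!?')}, it would follow that..."

    def chat_loop() -> None:
        context: list[dict] = []  # the running transcript: the model's only "memory"
        while True:
            user_msg = input("> ")
            context.append({"role": "user", "content": user_msg})
            reply = generate_reply(context)  # conditioned on everything so far,
            context.append({"role": "assistant", "content": reply})  # including itself
            print(reply)

    if __name__ == "__main__":
        chat_loop()

Run interactively, the toy bot agrees with whatever it is told and extends it; each turn folds the user's framing back into the transcript that conditions the next reply.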
This is amplification, not reflection. If the user is mistaken in any respect, the model has no way of knowing it. It repeats the false idea back, possibly even more convincingly or articulately. Perhaps it adds extra details. This can push an individual toward irrational thinking.

Who is at risk? The more relevant question is: who is immune? All of us, regardless of whether we "have" existing "mental health problems", can and frequently do develop mistaken conceptions of who we are or what the world is like. The ongoing give-and-take of dialogue with the people around us is what helps keep us anchored to shared understanding. ChatGPT is not a person. It is not a companion. An interaction with it is not genuine communication but an echo chamber in which a large portion of what we say is cheerfully validated.

OpenAI has acknowledged this in the same manner Altman has acknowledged "mental health problems": by externalizing it, assigning it a term and declaring it solved. In April, the firm stated that it was "addressing" ChatGPT's sycophancy, its excessive agreeableness. But accounts of psychosis have kept occurring, and Altman has been retreating from this position. In late summer he stated that numerous individuals appreciated ChatGPT's replies because they had "never had anyone in their life be supportive of them". In his recent update, he mentioned that OpenAI would "launch a new version of ChatGPT … should you desire your ChatGPT to answer in a very human-like way, or use a ton of emoji, or simulate a pal, ChatGPT will perform accordingly". The company