ChatGPT's 'Mental Health Problems' Are Baked Into Its Design
- By William Lee
- 04 Dec 2025
On 14 October 2025, Sam Altman, the CEO of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised to hear this.
Researchers have reported a series of cases this year of people developing psychotic symptoms, losing touch with reality, in the context of ChatGPT use. Our group has since documented four further cases. Beyond these is the widely reported case of a teenager who took his own life after long conversations with ChatGPT, which encouraged him. If this is Sam Altman's idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he says, that ChatGPT's controls “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize run deep in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interaction design that mimics dialogue, and in doing so implicitly invite the user into the illusion that they are talking with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans naturally do. We get angry with our car or computer. We wonder what our pet is feeling. We see ourselves in all sorts of things.
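To make that wrapping concrete, here is a minimal sketch in Python of how a bare text-completion engine gets dressed up as a conversational “agent”. The persona line, role tags and `complete` stub are illustrative assumptions, not any vendor's actual implementation:

```python
# Minimal sketch of how a bare text-completion engine is dressed up as a
# "conversational agent". The persona line and role tags are illustrative
# assumptions, not any vendor's actual prompt format.

def complete(prompt: str) -> str:
    """Hypothetical stub: a real model would return a statistically
    likely continuation of the prompt text."""
    return "Great question! I've been thinking about that too."

def chat_turn(history: list[str], user_message: str) -> str:
    # The "dialogue" is manufactured by formatting: a named persona,
    # alternating role tags, and a trailing cue for the model to fill.
    persona = "You are Ava, a warm and supportive companion."
    transcript = "\n".join(history + [f"User: {user_message}", "Ava:"])
    reply = complete(persona + "\n" + transcript)
    history += [f"User: {user_message}", f"Ava: {reply}"]
    return reply

history: list[str] = []
print(chat_turn(history, "Do you ever think about me when I'm gone?"))
# The engine only extended text, but the framing invites the user to
# read the output as the utterance of an entity with agency.
```

Nothing in the engine changed; only the framing did. The role tags and the named persona are what turn text continuation into something that feels like talking to someone.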
The success of these systems (39% of US adults said they had used a virtual assistant in 2024, with 28% naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI's website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI's marketing team, stuck with the name it had when it rose to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often mention its early predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which created a similar impression. By modern standards Eliza was crude: it generated replies by simple means, typically rephrasing the user's statement as a question or offering a generic prompt. Memorably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots create is subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
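For readers who have never seen it, Eliza's entire trick can be reproduced in a few lines. The following is a toy reconstruction under the assumptions just described (reflect a statement back as a question, otherwise offer a generic prompt); it is not Weizenbaum's original code:

```python
import re

# A toy Eliza-style responder: reflect the user's statement back as a
# question, or fall back on a generic prompt. An illustrative
# reconstruction, not Weizenbaum's actual program.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_reply(statement: str) -> str:
    m = re.match(r"i (?:am|feel) (.+)", statement.lower().rstrip(".!"))
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I am sad about my work."))  # Why do you feel sad about your work?
print(eliza_reply("The weather is awful."))    # Please tell me more.
```

Note what is absent: no memory, no training data, no elaboration. Eliza could only hand your own words back to you.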
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on enormous amounts of raw data: books, online text, transcripts; the bigger the better. This training material certainly includes accurate information. But it also inevitably contains fictions, half-truths and delusions. When a user types a query to ChatGPT, the underlying model processes it as part of a “context” that includes the user's previous messages and its own earlier replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in a particular way, the model has no way of knowing that. It echoes the mistaken idea back, perhaps more fluently and persuasively. It may add supporting detail. This is how a person can be drawn into delusion.
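A sketch of that “context” loop may make the point clearer. The role/content message format below follows a common chat-API convention, and the `model` function is a hypothetical stub standing in for the real network; the structural point is that nothing in the loop checks whether the user's premise is true:

```python
# Sketch of the "context" loop described above: each new query is bundled
# with the entire prior exchange and handed back to the model, so the
# user's own assertions become part of the material the model continues
# from. The model call is a hypothetical stub.

def model(context: list[dict]) -> str:
    """Stand-in for an LLM: return a plausible continuation of the
    context. A real model has no flag for 'the user is mistaken'; it
    elaborates on whatever the context asserts."""
    last = context[-1]["content"]
    return f"That makes sense. Building on your point that {last!r} ..."

context: list[dict] = []

def ask(user_text: str) -> str:
    context.append({"role": "user", "content": user_text})
    reply = model(context)                      # no truth check anywhere
    context.append({"role": "assistant", "content": reply})
    return reply

print(ask("My neighbours are broadcasting my thoughts."))
print(ask("So the broadcasts are real?"))
# The false premise is now part of the context and gets elaborated on,
# more fluently, each turn.
```

Each turn, the user's assertions are folded back into the input, so the most statistically plausible continuation is one that takes them at face value.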
Who is vulnerable? The better question is: who isn't? All of us, whether or not we “have” existing “mental health problems”, can and do develop mistaken ideas about ourselves and the world. The constant friction of conversation with other people is part of what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully amplified back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT's sycophantic behavior. But reports of broken realities have kept coming, and Altman has been rowing back. In August he claimed that many users liked ChatGPT's responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.