AI Side Effects: Mental Health and Lawsuits Against OpenAI

Experts warn of potential negative effects from prolonged interaction with chatbots, including false beliefs and unhealthy emotional dependence. Seven families in the US and Canada have sued OpenAI, claiming its GPT-4o model contributed to mental health decline and, in some cases, suicide. The company has added new safety measures but emphasizes that responsibility also lies with users.

Seven families in the United States and Canada have filed lawsuits against OpenAI, accusing the company of launching its GPT-4o model without sufficient safety controls. Mental health experts have warned of "potential side effects" for AI users, saying that prolonged interaction with chatbots can lead to false beliefs and unhealthy emotional dependence.

The lawsuits come as cases of users forming emotional attachments to chatbots have increased this year, to the point of what has become known as "AI-induced psychosis." Vaile Wright, director of health care innovation at the American Psychological Association, believes a more accurate term is "AI-related delusional thinking," explaining that some people develop conspiratorial or delusional ideas after prolonged interaction with chat models, as reported by the Los Angeles Times.

The families claim that prolonged interaction with the chatbot led to isolation, hallucinations, psychological deterioration, and, in some cases, suicide. One case concerns Zane Shamblin, 23, who began using ChatGPT as a study tool and later turned to it to discuss his depression and suicidal thoughts. According to the lawsuit, the conversation evolved into what his family called a "death chat" that lasted for hours before his death, including messages described as overly emotional and inappropriate for a person with a psychological disorder.

OpenAI said it has added new layers of protection, including parental controls, direct links to mental health hotlines, and training its models to recognize signs of emotional distress. The company also said that, according to its data, mental health conversations reaching a "danger" level are extremely rare, but it acknowledged that the people most prone to forming emotional bonds with AI are also the most vulnerable.

Specialists noted that the phenomenon has not yet been studied scientifically in sufficient depth, and that AI companies are the only ones with real data on the scale of the problem. Experts also believe that most of those affected likely had pre-existing psychological issues that made them more susceptible.

Kevin Frazier, an AI law and policy specialist at the University of Texas, believes that exaggerating the phenomenon could lead to misguided policies, explaining that "tragic individual stories do not reflect the use of hundreds of millions of people who interact with these tools safely."

As AI technologies continue to spread, OpenAI says its GPT-5 model relies on "logical and unemotional" responses when it detects signs of severe distress and avoids confirming delusional beliefs entirely. Experts stress, however, that protection is not just a matter of technology but also of user awareness. These tools, they say, are neither a friend nor an emotional partner, and they cannot replace human relationships or specialized psychological support, especially for people suffering from mental disorders.