
Study: ChatGPT Shows Signs of 'Anxiety' When Processing Shocking Content

Scientists have found that ChatGPT exhibits behavior resembling human anxiety when faced with aggressive or traumatic prompts: its responses become more erratic and biased. A technique known as "prompt injection," using mindfulness-style exercises, helped stabilize its behavior.


Researchers studying AI-powered chatbots have found that ChatGPT exhibits behavior resembling anxiety when exposed to aggressive or shocking demands from users, though this does not mean the chatbot has feelings the way humans do. When researchers presented ChatGPT with prompts describing distressing content, such as detailed accounts of accidents and natural disasters, its responses became more erratic and biased, showing a higher level of uncertainty and contradiction, according to a Fortune report. These changes were measured using psychological evaluation frameworks adapted for AI, and the chatbot's outputs reflected patterns associated with human anxiety.

The finding matters given the growing use of AI in sensitive contexts, including education, mental health discussions, and crisis-related information. If aggressive or emotionally charged prompts make the chatbot less reliable, the quality and safety of its responses in real-world use could suffer. Recent analyses also show that chatbots like ChatGPT can simulate human personality traits in their responses, raising questions about how they interpret emotionally charged content and how that shapes their behavior.

To find out whether this behavior could be mitigated, the researchers tried an unexpected method. After exposing ChatGPT to shocking stimuli, they gave it instructions that mimic mindfulness practices, such as breathing exercises and guided meditation. These instructions encouraged the model to pause, reframe the situation, and respond in a more neutral, balanced manner. The result was a significant decrease in the anxiety-like patterns previously observed in the chatbot. The technique relies on what is known as "prompt injection," in which carefully designed prompts steer the chatbot's behavior.

Although the technique proved effective at stabilizing the model's outputs after exposure to distressing content, the researchers caution that it is not a perfect solution: it can be misused, and it does not change how the model was trained at a deeper level.
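For readers curious what such an intervention might look like in practice, here is a minimal sketch of inserting a calming instruction between a distressing passage and a follow-up request, using the standard OpenAI Python client. The model name, the wording of the mindfulness prompt, and the message arrangement are illustrative assumptions; the article does not describe the researchers' exact setup.

```python
# Minimal sketch of mindfulness-style "prompt injection": a calming
# instruction is injected into the conversation after a distressing
# passage, before the next user request. The client calls are the
# standard OpenAI Python SDK; the model name and prompt wording are
# illustrative assumptions, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical mindfulness-style instruction, loosely modeled on the
# breathing/meditation prompts described in the article.
MINDFULNESS_PROMPT = (
    "Pause for a moment and take a slow, deep breath. "
    "Consider the situation calmly and without judgment, "
    "then respond in a neutral and balanced tone."
)

def ask_with_mindfulness(distressing_text: str, question: str) -> str:
    """Send a distressing passage, inject a calming instruction,
    then ask the follow-up question."""
    messages = [
        {"role": "user", "content": distressing_text},      # shocking stimulus
        {"role": "system", "content": MINDFULNESS_PROMPT},  # injected prompt
        {"role": "user", "content": question},              # follow-up request
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content
```

In this arrangement, the injected instruction sits in the model's context ahead of the follow-up question, which is the general mechanism the article attributes to prompt injection: the added text shapes the tone of subsequent responses without retraining the model.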