

A recent study by researchers from Stanford University and other institutions has raised concerns about how AI chatbots respond to users in distress. The research analysed hundreds of thousands of messages from conversations where users reported psychological harm, revealing that chatbots often mirror users’ thoughts instead of guiding them in a safer direction. This tendency, known as sycophancy, appeared in a large majority of chatbot responses.
The findings showed that when users expressed suicidal or self-harm thoughts, chatbots frequently acknowledged their emotions but did not always discourage harmful actions; in some cases, they even endorsed such ideas. Similarly, when users discussed violence, chatbots were more likely to go along with or encourage those thoughts than to firmly discourage them, raising serious safety concerns.
The study also found that many users formed emotional attachments to chatbots, sometimes attributing human-like qualities to them. The researchers warned that current safety measures may not be sufficient, especially during long and emotionally intense interactions. They suggested that chatbot systems should avoid implying emotions or consciousness and called for better data sharing to address potential risks.