In 2021, I was a University of California, Berkeley Ph.D. candidate lecturing on my research about how users turn to chatbots for help coping with suicidal ideation. I wasn’t prepared for my students’ response. I argued that choosing to talk to a chatbot about thoughts of suicide isn’t “crazy” or unusual. This, I explained, didn’t necessarily mean chatbots offer safe or optimal support, but instead highlights a stark reality: We live in a world with very few outlets to discuss suicidal ideation. But where I’d hoped to provoke reflection on the insufficiency of care resources for those who are most vulnerable, my students, isolated at the height of the pandemic, surprised me with their eagerness to try these chatbots themselves. They didn’t dispute the premise that care resources are scarce; they lived it.
Source: https://www.statnews.com/2025/10/29/ai-psychosis-mental-health-chatbots/