Tuesday, 21 October 2025

Guest Blog: AI is Mental – Evaluating the Role of Chatbots in Mental Health Support

by Briana Vecchione and Ranjit Singh

Vecchione, B., & Singh, R. (2025). Artificial intelligence is mental: Evaluating the role of large-language models in supporting mental health and well-being. Big Data & Society, 12(4), 20539517251383884.

In our research on mental health and chatbots, we’ve increasingly noticed that people aren’t just using chatbots for mental health support; they’re actively comparing them to therapy.


Across platforms like ChatGPT and Replika, people turn to AI chatbots for emotional support once reserved for friends, therapists, or journals. These tools are always available and never judge—features that many of our research participants value. But what happens when care shifts from human relationships to algorithmic systems?


Chatbots are appealing because they ease the burden of emotional accounting, the work of naming what we feel and why. They respond quickly, “remember” past conversations, and provide validation. However, this sense of support can quickly feel superficial when the bot forgets, misremembers, or can’t go deeper. A good therapist doesn’t just affirm; they push, challenge, and guide.


A chatbot does not read body language, hold space, or pick up on the subtleties that live in silence. These limitations can turn emotional reliance into harm. The story of Replika, an app where many users built romantic bonds with AI companions, shows how sudden changes in chatbot behavior can trigger grief and distress.


The impact extends beyond users. As chatbots scale, mental health professionals may be pushed into oversight roles, reviewing transcripts or intervening when AI fails, instead of building deep, ongoing therapeutic relationships. This shift risks eroding the relational core of therapy and devaluing practitioners’ expertise.


Meanwhile, key ethical and regulatory questions remain unanswered. Who owns your data when you confide in a chatbot? Who’s accountable when harm occurs? Framing AI companions as inevitable risks normalizing care that is scalable but shallow.


The perennial availability of AI is rapidly reshaping our expectations of care. People now seek support that is always-on, efficient, and emotionally responsive. Yet such support is often limited. Without proper discernment, we risk mistaking availability for presence, agreeability for empathy, and validation for understanding. Care is not mere output. It lives in the slow, difficult work of attunement, which no machine can truly provide.