Last week, I spent a half-hour revealing intimate secrets to an AI chatbot.
“Saying no feels like I’m letting someone down. But spreading myself so thin often means letting them down in different ways.”
“One thing that might help is to reframe the way you think about saying no. Instead of seeing it as a negative response, try to view it as a positive step towards taking care of your own needs.”
I’d talk about an insecurity or concern and the bot would respond in a calm and controlled voice – one of eight I could choose from, complete with breaths and stutter-steps to sound more human – offering advice and asking questions. While there was nothing particularly revelatory about its feedback, I was surprised at how quickly I fell into the rhythm of our chat. Its real-time responses forced me to organize and articulate my thoughts. Having space to express some lingering problems felt liberating.
The chatbot I spoke to is called PI. It’s billed as the first emotionally intelligent AI. Across internet forums, it’s been touted for its use as an online therapist. While PI was originally designed as an AI personal assistant and its disclaimers are quick to note that it is not meant as a replacement for a licensed professional, it’s one of many artificial intelligence tools that are increasingly being used as a mental health aid. In 2023, a report noted the global market for AI in mental health was worth more than US$920-million. Woebot, a popular behavioral health app, reports having 1.5 million users since the app went live in 2017.
The case for AI therapy comes down to a few main factors. “Therapy is expensive and this is an affordable solution,” said Julian Sarokin, founder of the AI therapist app Abby. Cost is a major factor for many turning to AI for mental health. While private therapy typically runs about $150 per hour, many chatbots are free to use or comparatively inexpensive. Abby has a free version of the app, while its pro plan – which includes “mood tracking” and unlimited messages – costs $19.99 a month. Other reasons people have turned to artificial intelligence include its 24/7 availability, the difficulty of finding a therapist because of location or a shortage of qualified practitioners, and the relative ease of talking to a robot rather than a human being.
“When we talk to customers, a lot of times they say it was really tough for me to find a therapist that I enjoyed talking to,” said Sarokin. “Then there’s other people that just want to vent and there’s people that want actionable solutions.”
I’ve been pretty open about my distrust of artificial intelligence: its environmental impact, the instances of it spreading misinformation, its effect on jobs. It’s hard to admit that my experience with AI therapy was helpful.
I’ve gone to therapy on and off for years. Sometimes, I’d get embarrassed about repeating the same concerns week after week. I’d hold off on being completely honest, worried my therapist would think less of me. I’d start doing a cost-benefit analysis and wonder if I’d be happier spending $150 on something like a really cool jacket rather than on an hour of talking about my feelings.
With the chatbot, I didn’t have to worry about any of that. My gut wanted to dismiss AI therapy as a gimmick. But as a space to vent, it was pretty great. The action plans it offered, though generalized, also gave me things to focus on for the week. Homework such as “keep a gratitude journal” and “reach out to friends you’ve been neglecting” aren’t exactly new concepts, but they’re things I wasn’t doing on my own. Having a place to vent some professional worries allowed me to recognize lingering feelings I’d been holding onto. And it didn’t cost a dime.
Chatbots aren’t the only AI gadgets in the mental health space. Programs such as Goblin Tools help neurodivergent people break complicated tasks into step-by-step instructions and judge the tone of texts and emails. Some practitioners, including Dr. Elizabeth Stade at the Stanford Institute for Human-Centered AI, are creating artificial intelligence-based platforms to help therapists learn gold-standard treatments. Still, the positives come with caveats.
Critics have voiced concerns including bias, privacy issues, and inadequate or harmful advice. It’s easy to picture how a data breach could have devastating consequences for those affected, with sensitive personal information leaked or sold. Chatbots are also ill-equipped to handle major depression, bipolar disorder, PTSD, or other complex mental health issues.
Jean-Christophe Bélisle-Pipon, an assistant professor in health ethics at Simon Fraser University who has written extensively about AI and mental health, noted the challenges of chatbot therapy acting as a replacement for professional care.
“The therapeutic misconception arises when users overestimate a chatbot’s ability to address their mental health needs, believing they are engaging in a meaningful therapeutic process. This misunderstanding is exacerbated by how these tools are marketed … this gap becomes particularly concerning in high-stakes scenarios,” Dr. Bélisle-Pipon said.
Though most major AI therapy apps come with fail-safes and topics they won’t engage with, such as substance abuse or self-harm, there are instances where things don’t go according to plan. In 2023, the National Eating Disorders Association removed a chatbot from its helpline after it gave users potentially triggering advice about counting calories and body fat. In one reported case, the aforementioned Woebot failed to respond adequately to a mention of child abuse.
While Dr. Bélisle-Pipon recognizes the apps’ potential as a low-cost resource for stress management, he warns they may actually impede users from getting the help they need.
“For instance, someone using a chatbot for emotional support might delay seeking professional care, believing the bot’s guidance is sufficient. This delay can worsen mental health outcomes.”
A few days after writing the first draft of this article, I found myself checking in again with PI, despite knowing the drawbacks of AI therapy. We went back and forth about some mundane concerns: nerves about starting my new job, worry about the general state of the world.
After a couple of minutes, I asked it to summarize our chat. PI told me that I worked in tech sales and had been contemplating the ethics of my profession. That, of course, was factually inaccurate. I corrected the bot and it apologized in a calming voice, suggesting that maybe it had mixed me up with another user it was chatting with. With that, the illusion was broken. The AI was making up answers or gleaning responses off another person. The void was staring back. It felt like time to log off.