According to Forbes, a new form of group therapy is emerging that uses generative AI and large language models like ChatGPT to facilitate sessions, with no human therapist involved at all. The practice leverages new group-chat features in LLMs, where multiple people can join the same conversation and interact with one another and the AI as a collective. It is gaining traction because mental health advice is already one of the top uses of generative AI; ChatGPT alone has over 800 million weekly active users, many of whom turn to it for exactly that. But the trend comes amid significant risks, highlighted by an August lawsuit against OpenAI alleging a lack of safeguards around the AI's mental health guidance. The core concern is that AI can foster delusional thinking or dispense egregiously inappropriate guidance, a danger that a sensitive group dynamic only amplifies.
How it works, and why people are trying it
So here’s the basic setup. Instead of the classic one-on-one chat with a chatbot, the latest LLM features let someone start a group dialogue and invite others. The AI isn’t just the platform; it’s an active participant you can instruct to lead, observe, or jump in and out of the conversation. The appeal is obvious: it’s potentially free or very cheap, available 24/7, and bypasses the huge logistical hurdle of finding and scheduling a qualified human therapist for group sessions. And let’s be real, there’s a massive shortage of human therapists, period. The thinking is, some guidance from an AI is better than no support at all, right?
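To make the setup concrete, here is a minimal sketch of the pattern in code: one shared transcript, several named human participants, and the model sitting in the conversation as a facilitator. It uses the standard OpenAI chat-completions API, but the facilitator prompt, speaker-tagging scheme, and model name are illustrative assumptions on my part, not how ChatGPT's actual group-chat feature is built.

```python
# Conceptual sketch only: an LLM as a facilitator inside a multi-person chat,
# rather than a one-on-one assistant. Prompt wording and speaker tagging are
# assumptions for illustration, not ChatGPT's group-chat implementation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FACILITATOR_PROMPT = (
    "You are facilitating a peer-support group chat. Keep the discussion "
    "respectful, invite quieter participants to contribute, and never give "
    "clinical diagnoses or medical advice."
)

# The whole group shares one running transcript; each human turn is tagged
# with the speaker's name so the model can follow the group dynamic.
transcript = [{"role": "system", "content": FACILITATOR_PROMPT}]

def participant_says(name: str, text: str) -> None:
    transcript.append({"role": "user", "content": f"{name}: {text}"})

def facilitator_turn() -> str:
    """Ask the model for its next contribution as the group facilitator."""
    response = client.chat.completions.create(
        model="gpt-4o",          # placeholder model name
        messages=transcript,
    )
    reply = response.choices[0].message.content
    transcript.append({"role": "assistant", "content": reply})
    return reply

participant_says("Ana", "I've been anxious about going back to work.")
participant_says("Ben", "Same here, honestly.")
print(facilitator_turn())
```

The key point the sketch makes is that the AI is just another turn-taker in the thread: whoever sets up the session decides, via the prompt, whether it leads, observes, or only speaks when addressed.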
The massive risks of removing the human
But here’s the thing. Group therapy is a specialized skill. A human therapist acts as a leader who manages conflicts, draws out quiet participants, and keeps discussions productive and safe. Can an AI do that? Well, it can try. The problem is the two-sided coin of current AI capability. On one side, it might run a decent session. On the other, and this is a very real possibility, it could completely falter: an AI prone to hallucination might suddenly berate a participant or validate a harmful delusion. In a one-on-one chat, that’s bad. In a group setting, it could be psychologically damaging on a wider scale and derail the entire therapeutic purpose. The lawsuit against OpenAI underscores that these aren’t hypotheticals.
The better path: therapist + AI + client
Most experts, including the Forbes contributor, argue that a more viable and safer model is the “therapist-AI-client” triad. Here, a trained human therapist uses the AI group chat as a tool with their clients. The therapist remains in charge, leveraging the AI for administrative tasks, discussion prompts, or recording insights, while applying human judgment, empathy, and professional oversight. This augments care instead of replacing its most critical component. It’s the difference between giving a power tool to a skilled carpenter and handing it to a random person while hoping they don’t cut their arm off. The technology should assist professionals, not attempt to replicate their nuanced expertise from scratch.
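One way to picture the triad in software terms is a review gate: the model may draft a session summary or a discussion prompt, but nothing reaches the group until the therapist approves it. The sketch below is a hypothetical illustration of that pattern; the function names and the console-approval step are my assumptions, not any real product's API.

```python
# Sketch of a "therapist-in-the-loop" gate: the model only drafts; a human
# clinician must approve before anything is shown to the group. Names and
# flow are hypothetical illustrations, not a real product's design.
from openai import OpenAI

client = OpenAI()

def draft_discussion_prompt(session_notes: str) -> str:
    """Have the model propose a discussion prompt from the therapist's notes."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Draft one gentle discussion prompt "
             "for a therapist-led support group. No diagnoses, no advice."},
            {"role": "user", "content": session_notes},
        ],
    )
    return response.choices[0].message.content

def therapist_approves(draft: str) -> bool:
    """Human review step: nothing is posted unless the therapist says yes."""
    print(f"AI draft:\n{draft}\n")
    return input("Post this to the group? [y/N] ").strip().lower() == "y"

draft = draft_discussion_prompt("Group is working on workplace anxiety.")
if therapist_approves(draft):
    post_to_group = draft  # in a real tool, this would go to the group chat
```

The design choice is the whole point: the AI speeds up the busywork, but the human judgment stays on the critical path.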
The bottom line: proceed with extreme caution
Look, the drive for accessible, affordable mental health support is urgent and noble. AI-enabled group chats without a therapist might seem like a scalable solution. But in sensitive areas like mental health, scale without safety is a recipe for disaster. We’re basically conducting a massive, uncontrolled experiment on vulnerable populations. Until there are robust, proven safeguards and AI specifically trained and certified for this task—which Forbes notes is still in early testing—this approach seems recklessly premature. The cost savings aren’t worth the potential human cost. For now, if you’re seeking help, a qualified human in the loop isn’t a nice-to-have; it’s the essential ingredient.
