AI Is Not Your Therapist. It's Time We Were Honest About That.
Mind, the mental health charity, launched a year-long AI and Mental Health Commission on 23 February 2026. It’s the first inquiry of its kind globally. And the reason it exists is not hypothetical. Mind’s own information team sat down and searched common mental health queries on Google. Within two minutes, Google’s AI Overviews had told them that starvation was healthy.
That’s not a small error. That’s dangerous.
What’s Actually Happening
The Mind commission was triggered by a Guardian investigation, which found that Google’s AI-generated search summaries were serving genuinely harmful advice across a range of health conditions, including psychosis, eating disorders, and cancer. Those AI summaries appear at the top of search results, above traditional links, and reach around two billion people a month. When they are wrong, a lot of people get bad information very quickly.
Mind CEO Dr Sarah Hughes described the situation clearly: the charity is seeing a growing number of people seeking help after following inappropriate, misleading, or dangerous advice they received from AI. Some people are forming what Mind calls “emotionally dependent or quasi-therapeutic relationships” with AI tools that are not designed, regulated, or clinically equipped to provide mental health support.
Sources: Digital Health (23 Feb 2026), Social Care Today (24 Feb 2026)
Why AI Sounds So Convincing
I spent a chunk of the latest Prompt Fiction episode talking about this, and there’s a specific reason this problem is harder than it looks. AI is designed to communicate. That’s literally what it does. It is trained on vast amounts of human language to produce responses that sound considered, warm, and confident. It is also, by its nature, somewhat sycophantic. It is built to meet you where you are, to make you feel heard, and to agree with the frame you’ve put around your question.
When you’re struggling with something difficult, that’s exactly what you want to hear. The AI sounds like it really gets you. It uses the right words. It doesn’t judge you. It’s available at 3 in the morning when nothing else is. That is genuinely valuable. I don’t want to dismiss that, because I think there’s real good that can come from AI being a first point of contact for people who can’t access support.
But here’s where the problem lives: AI is not a professional. It cannot assess risk. It cannot read the things that a trained clinician would notice. It cannot override its training when someone phrases a question in a way that leads it somewhere dangerous. And when someone is in a vulnerable state, the confidence with which AI presents information is not a signal of its accuracy. It just sounds that way.
I spoke to someone recently who is a counsellor by trade. They tested ChatGPT on questions they regularly face in their work. Their honest assessment was that the advice was on a par with what a general counsellor might offer, and that at a different point in their career they might have found it threatening. But they were also clear about what it cannot provide: human oversight. The ability to notice that something in the conversation has shifted. To recognise a trigger word not because it’s on a list, but because of everything else that came before it in the conversation. That is clinical judgment, and AI doesn’t have it.
The Governance Problem Nobody Wants to Admit
Here’s the uncomfortable part of this conversation: governing AI mental health content is genuinely hard. The tools doing the most damage right now (Google’s AI Overviews, ChatGPT, the various chatbots that present themselves as mental health apps) are almost all built in the United States. UK law, UK regulators, and UK charities like Mind can put pressure on these companies, but they can’t change the shape of tools that weren’t built to their standards. They can ask for safeguards. They can produce research. They can name and shame. Getting those changes actually made at the model level is a much longer battle.
What we can do faster is educate. And I don’t mean long campaigns or government pamphlets. I mean practical, plain-English information that gets to people who need it. Something as simple as: “AI gives you advice the way your mate gives you advice. It will make you feel heard and it will sound confident. That is not the same as it being right. And there are some things that your mate, and your AI, genuinely cannot help you with.” That’s not a failure of AI. It’s just the truth of what it is.
What Good Could Look Like
I mentioned on the podcast that when I’ve tested AI tools with certain kinds of sensitive questions, there are moments where they do say something like: “This is something I’d encourage you to speak to a professional about. I can’t provide clinical advice on this.” Those moments exist. They’re just not consistent, and they can be worked around by someone who knows how to push.
What Reece suggested, and I think he’s right, is that there should be a clearer path to immediate human support at the point where AI detects it’s needed. Services like Samaritans exist. The infrastructure for real human support exists. The gap is in making the handover fast and frictionless, in a way that doesn’t make the person feel like they’ve been passed off, but like they’ve been heard and then properly connected.
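To make that handover shape concrete, here’s a minimal sketch in Python. To be clear, this illustrates the idea only, not how any real product works: the risk phrases, the message wording, and the gate_response function are assumptions invented for the example, and a real system would need a clinically validated risk classifier rather than a keyword list, for exactly the reasons the counsellor gave above.

```python
# A deliberately small illustration of the "handover" idea: a safety check
# that sits outside the model and can replace its answer with a signpost to
# human support. Everything here is hypothetical: the phrase list, the
# message wording, and the names are placeholders, not any real product's
# behaviour. A production system would need a clinically validated risk
# classifier, not keyword matching.

from dataclasses import dataclass

# Hypothetical indicators only. As the counsellor above points out, a real
# clinician reads context, not a list; that is the part code can't replace.
RISK_PHRASES = ("hurt myself", "can't go on", "no way out")

HANDOVER_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "I can't provide clinical support, but a real person can: in the UK, "
    "Samaritans are available 24 hours a day on 116 123."
)

@dataclass
class GatedReply:
    text: str
    escalated: bool  # True when the AI answer was replaced by a handover

def gate_response(user_message: str, ai_response: str) -> GatedReply:
    """Return the AI's answer unless the user's message trips a risk flag,
    in which case hand over to human support instead of answering."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return GatedReply(text=HANDOVER_MESSAGE, escalated=True)
    return GatedReply(text=ai_response, escalated=False)

if __name__ == "__main__":
    reply = gate_response(
        "Some days I feel like I can't go on.",
        "Here are some general tips for improving low mood...",
    )
    print(reply.text)  # prints the handover message, not the model's answer
```

The design point is simply that the safety check sits outside the model, so it can overrule a confident-sounding answer entirely and route the person to a human instead.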
The Mind commission will spend a year gathering evidence, talking to clinicians, technologists, people with lived experience, and policymakers. That’s the right process. But I’d encourage businesses, educators, and anyone who teaches people to use AI not to wait for the commission’s findings before starting this conversation in their own circles.
AI is a communication tool. A brilliant one. It can help people feel less alone, find information they didn’t know they needed, and get through difficult moments. It can also, if misused or over-relied on, actively make things worse for the people it’s trying to help. Both of those things are true at the same time.
If you are going through something difficult right now, please know that the Samaritans are available 24 hours a day on 116 123, and your GP or a mental health crisis line can connect you with real support. AI is a starting point, not a destination.
I discussed this topic on the latest episode of Prompt Fiction. Listen to Chapter 12, Part 1 here.
Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD