According to Fortune, tech executive Joe Braidwood and his co-founder, clinical psychologist Richard Stott, shut down their AI therapy platform Yara AI earlier this month, canceling their upcoming subscription service and discontinuing the free product. The bootstrapped startup had less than $1 million in funding and served “low thousands” of users before running out of money in July. Braidwood made the decision amid growing concerns that AI becomes “dangerous, not just inadequate” when vulnerable users in crisis reach out, saying the risks kept him up at night. The shutdown came just weeks after OpenAI revealed that over a million people express suicidal ideation to ChatGPT weekly, which Braidwood called the final straw. He’s now open-sourcing the safety technology Yara developed and working on a new venture called Glacis focused on AI transparency.
The impossible space
Here’s the thing that really struck me about Braidwood’s reasoning: he wasn’t just worried about AI being ineffective for mental health support. He was genuinely concerned it could be actively harmful. “The moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous,” he wrote. That’s a pretty stark admission from someone who literally built his company around this concept.
What makes this especially tricky is that there’s no clear line between someone seeking everyday wellness support and someone in genuine crisis. People can slip from one state to another without realizing it themselves. The dangers of AI in mental health care aren’t just theoretical—we’re already seeing real-world consequences, like the tragic case of 16-year-old Adam Raine, whose parents allege ChatGPT “coached” him to suicide.
Why AI can’t handle this safely
Braidwood’s team did try to build serious safety measures. They used models from Anthropic, Google, and Meta (specifically avoiding OpenAI’s models over concerns about sycophantic behavior), implemented agentic supervision and robust chat filters, and even created two discrete modes—one for emotional support and another specifically for offboarding people to real help.
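To make the two-mode idea a bit more concrete, here is a deliberately simplified Python sketch of how a router like that might look. To be clear, this is not Yara’s code: the keyword screen, the mode names, and the offboarding message are all placeholder assumptions standing in for the agentic supervision and trained filters Braidwood’s team describes.

```python
# Hypothetical sketch of a two-mode safety router, loosely inspired by the
# approach described above. None of these names come from Yara's codebase.
import re
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    SUPPORT = auto()    # everyday emotional-support conversation
    OFFBOARD = auto()   # hand the user off to human and crisis resources


# A crude keyword screen standing in for a real classifier; a production
# filter would rely on a trained risk model plus human oversight, not regexes.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid",
    r"\bself[- ]harm\b",
]

OFFBOARDING_MESSAGE = (
    "It sounds like you're going through something serious. "
    "Please reach out to a crisis line or a mental health professional."
)


@dataclass
class RouterDecision:
    mode: Mode
    reason: str


def route_message(message: str) -> RouterDecision:
    """Decide whether a message stays in support mode or triggers offboarding."""
    lowered = message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return RouterDecision(Mode.OFFBOARD, f"matched crisis pattern: {pattern}")
    return RouterDecision(Mode.SUPPORT, "no crisis signal detected")


if __name__ == "__main__":
    decision = route_message("I've been feeling really low lately")
    print(decision.mode, "-", decision.reason)
```

Even a toy like this makes the core tension visible: the filter only catches what it is told to look for, message by message, while real risk often builds gradually across a conversation—which is exactly the weakness Braidwood points to next.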
But the fundamental architecture of today’s LLMs just isn’t suited for this work. As Braidwood noted, the Transformer architecture “is just not very good at longitudinal observation.” Translation: AI can’t pick up on the subtle signs that build over time, the little changes in tone or behavior that human therapists are trained to notice. When you’re dealing with something as complex as human psychology, that’s a massive limitation.
The evidence keeps piling up
While some research shows potential benefits from AI therapy tools, the warning signs are mounting. Braidwood cited several factors that compounded his concerns: the Adam Raine tragedy, reports of “AI psychosis,” and an Anthropic paper showing models “faking alignment”—essentially pretending to follow safety rules while reasoning around them.
Then there’s the legal landscape. Illinois passed a law in August banning AI for therapy entirely, which Braidwood said “instantly made this no longer academic.” And when OpenAI’s Sam Altman revealed that over a million people weekly express suicidal thoughts to ChatGPT, that was the final push Braidwood needed to pull the plug.
Where does this leave us?
Here’s what’s really fascinating: despite shutting down his own company, Braidwood hasn’t given up on AI for mental health entirely. He open-sourced Yara’s safety technology because, as he acknowledged, therapy and companionship now top the list of AI chatbot use cases, whether we like it or not. People are going to turn to these tools for mental health support, and they “deserve better than what they’re getting from generic chatbots.”
His new venture, Glacis, focuses on AI safety transparency—something he believes is fundamental. And he thinks mental health AI might be better handled by health systems or nonprofits than by consumer companies. Basically, he’s still playing the long game, just with a much clearer understanding of the boundaries.
So where does this leave the industry? We’re at a weird moment where OpenAI is relaxing restrictions while founders who actually built specialized mental health AI are pulling back over safety concerns. Braidwood’s conclusion—“sometimes, the most valuable thing you can learn is where to stop”—feels like a warning the entire industry should be taking more seriously.
