According to CNET, over the past 18 months nearly every major tech and fitness brand has launched, or is considering, an AI health coach. Google is testing a Gemini-based coach inside the Fitbit app, Apple has a Workout Buddy for real-time motivation on the Apple Watch, and Samsung, Garmin, Oura, and iFit have all rolled out AI features. Meta has even partnered with Garmin and Oakley to embed its Meta AI voice assistant into smart workout glasses. These tools, which range from predictive alert models to generative AI chatbots, promise to analyze personal biometric data and offer personalized wellness advice. CNET’s testing, however, found that most remain in their infancy, often offering generic advice even as they require users to hand over sensitive health data through dense, confusing privacy agreements.
The hype vs. the half-baked reality
Look, the idea is incredibly seductive. You’ve got years of heart rate, sleep, and activity data piling up in an app, just sitting there. An AI that finally makes sense of it all? That’s the dream. But here’s the thing: the reality right now is pretty underwhelming. CNET’s tester found that Samsung’s running coach offered a one-size-fits-all plan that didn’t match her goals. That’s the core issue. These models are supposed to get better with time and personal data, but we’re in the awkward early phase where they often feel like a clumsy beta feature, not a revolutionary personal advisor.
The real cost is your data
And this is where it gets sticky. Asking ChatGPT for a generic diet tip is one thing. Giving a generative AI model, a technology notorious for hallucinating and confidently making things up, an all-access pass to your real-time heart rhythm, location, sleep patterns, and chat history? That’s a completely different beast. As Karin Verspoor from RMIT University told CNET, we can’t have these models without data. The “payment” is your most intimate information, often buried in disclosures we blindly accept. This isn’t just about ads; it’s about how that data could be used, sold, or leaked, potentially even affecting things like insurance rates down the line. The trade-off feels incredibly high for a coach that might just tell you to rest because you slept poorly.
A glimpse of potential and peril
So, is it all doom and gloom? Not necessarily. There are glimmers of a better path. Dr. Jonathan Chen from Stanford sees AI’s best role as starting better conversations with doctors, not replacing them. Think about it: instead of dumping a month of glucose data on your doctor, an AI could highlight the key patterns beforehand. CNET’s own example is powerful: an Apple Watch caught a heart rhythm issue that clinical monitors missed, leading to a life-saving procedure. That’s the best-case scenario: closing gaps in care.
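To make that concrete, here’s a rough sketch of what “highlighting the key patterns” could look like in practice. This is purely illustrative, not any vendor’s actual pipeline: the 70–180 mg/dL target range is a common continuous glucose monitoring convention, while the data shape and function name are hypothetical.

```python
# Hypothetical sketch: turn a month of raw glucose readings into a few
# talking points for a doctor's visit. The 70-180 mg/dL target range is
# a common CGM convention; everything else here is made up to illustrate.
from collections import defaultdict
from datetime import datetime

def summarize_glucose(readings):
    """Summarize (iso_timestamp, mg_dl) readings into talking points."""
    in_range = sum(1 for _, g in readings if 70 <= g <= 180)
    by_hour = defaultdict(list)
    for ts, g in readings:
        by_hour[datetime.fromisoformat(ts).hour].append(g)
    # A recurring time-of-day pattern is more useful to a clinician than
    # a raw data dump, so flag hours whose average reading runs high.
    high_hours = sorted(h for h, vals in by_hour.items()
                        if sum(vals) / len(vals) > 180)
    return {
        "time_in_range_pct": round(100 * in_range / len(readings), 1),
        "mean_mg_dl": round(sum(g for _, g in readings) / len(readings), 1),
        "recurring_high_hours": high_hours,
    }

# Example: two mornings running high surface as a pattern at hour 7.
sample = [("2025-06-01T07:30", 195), ("2025-06-01T13:00", 120),
          ("2025-06-02T07:45", 210), ("2025-06-02T19:00", 140)]
print(summarize_glucose(sample))
```

The output isn’t a diagnosis. It’s three talking points a patient can bring to an appointment, which is exactly the “better conversations” role Chen describes.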
But the worst case is just as plausible. What happens when these systems flood doctors with false alarms from anxious patients? We’re already primed for health anxiety from the “Dr. Google” era. An AI coach constantly whispering about minor deviations could turn that anxiety up to eleven, overwhelming the system with unnecessary worry and tests. It’s a tightrope walk between helpful signal and noisy distraction.
Where do we go from here?
Basically, this isn’t a fad. AI in health tech is here to stay and will only get more embedded. The question isn’t whether we’ll use it, but *how*. The guardrails around data privacy, transparency, and medical accountability are what matter now. These tools need to be designed as supportive guides, not definitive diagnosticians. They should empower us with insights that lead to better conversations with human professionals, not leave us beholden to every algorithmic nudge. The dream of a truly helpful, private AI health coach is still out there. But based on the current crop? We’re not there yet, and the price of admission remains both vague and potentially steep.
