I Won’t Use AI Health Features, for My Own Wellbeing. Here’s Why
A few years ago, I thought I was going to die. Although (spoiler alert) I didn't, my severe health anxiety and my tendency to always assume the worst remained. And the growing popularity of health-tracking smart devices, along with the new ways artificial intelligence is trying to understand our body data, prompted a decision: for my own peace of mind, AI needs to be kept away from my personal health. Having just finished watching Samsung's Unpacked event, I'm more convinced of this than ever. I'll explain.
Sometime around 2016, I developed severe migraines that lasted for a few weeks. During that time my anxiety increased dramatically, and when I finally called the NHS helpline and explained my various symptoms, I was told I needed to go to the nearest hospital and be seen within two hours. "Walking there with someone," I distinctly remember them telling me, "will be quicker than calling an ambulance."
The call confirmed my worst fears; death was imminent.
My fears of an early death proved to be unfounded. The cause was actually a severe muscle strain from hanging multiple heavy cameras around my neck all day while photographing a wedding. But the helpline staff were working with the limited data I provided, so they (probably quite correctly) took a “better safe than sorry” approach and urged me to seek immediate medical attention.
I've struggled with health anxiety for most of my adult life, and events like this taught me a lot about my ability to jump to the absolute worst conclusions, despite having no real evidence to support them. A buzzing in my ears? It must be a brain tumor. A twinge in my stomach? Well, better get my affairs in order.
I've learned to live with this over the years, and while I still have my ups and downs, I'm much more aware of what triggers it. First, I learned never to Google my symptoms, because no matter what those symptoms were, cancer was always one of the possibilities that came up. Medical websites – including the NHS's own – offer no comfort and often just trigger panic attacks.
Unfortunately, I've found that many health-tracking tools provoke similar reactions. I initially loved my Apple Watch, and its ability to read my heart rate during workouts was helpful. But then I found myself checking it more and more throughout the day. Then the questions began: "Why is my heart rate so high when I'm just sitting down? Is that normal? I'll check again in five minutes." When, inevitably, things are no different (or worse), panic naturally ensues.
Whether I'm tracking my heart rate, blood oxygen levels or even sleep scores, I become obsessed with what the "normal" range is supposed to be, and whenever my data falls outside that range, I immediately assume I'm about to keel over on the spot. The more data these devices provide, the more I feel I have to worry about. I've managed to let go of my worries enough to keep using smartwatches, and they don't cause too many problems for my mental health (though I have to actively avoid any heart-related functions, like the EKG), but AI-based health tools scare me to death.
During the Unpacked keynote, Samsung talked about its new Galaxy AI tools and how Google's Gemini AI will help in our daily lives. Samsung Health's algorithm will track your fluctuating heart rate throughout the day, notifying you of changes. It will offer personalized insights based on your diet and exercise to aid cardiovascular health, and you can even ask the AI agent questions related to your health.
To many people this may sound like a holistic approach to health, but not to me. To me, it sounds like more data being collected and waved in my face, forcing me to acknowledge it and creating an endless feedback loop of obsession, worry and inevitable panic. The AI element is the biggest red flag of all. AI tools essentially have to make "best guess" answers based on publicly available information online. Asking an AI a question is really just a quicker way of running a Google search, and as I discovered, Googling health queries didn't end well for me.
Like the NHS telephone operator who inadvertently caused me to panic about my imminent death, an AI-based health assistant can only provide answers based on the limited information it has about me. Asking it questions about my heart health might bring up all kinds of information, just as looking up my headaches on a health website did. And yes, a headache can technically be a symptom of cancer, but it's far more likely to be muscle tension. Or maybe I didn't drink enough water. Or I need to look away from the screen for a while. Or I shouldn't stay up until two in the morning playing Like a Dragon: Infinite Wealth. Or a hundred other reasons, all of which are more likely than the one worst-case culprit my anxious brain has already settled on.
But will artificial intelligence give me that context and reassure me that there's no need to worry? Or will it simply present every potential cause in an attempt to be comprehensive, fueling the "what if" worries? And, like when Google's AI Overviews told people to put glue on their pizza, might an AI health tool simply scrape the internet and serve up a garbled, inaccurate conclusion that sends my anxiety straight into panic-attack territory?
Or maybe, like the kind doctor at the hospital that day who smiled gently at the sobbing man sitting across from him (a man who had drafted a farewell letter to his family on his phone in the waiting room), the AI tool might be able to look at the data and simply say, "You're fine, Andy. Don't worry. Go to bed."
Maybe one day that will be the case. Perhaps health-tracking tools and AI insights will be able to provide the much-needed logic and comfort to combat my anxiety, rather than being a source of it. But until then, I'm not willing to take the risk.