Are AI Chatbots Safe for Medical and Healthcare Advice?

As healthcare wait times grow and costs continue to rise, many people are turning to AI chatbots like ChatGPT for health guidance. A recent survey shows that around one in six U.S. adults now use AI-powered chatbots monthly for medical advice.

But while chatbots may offer fast responses, a new study led by researchers at Oxford University warns that relying on them for health decisions may be risky. The research found that people often don't know what information to share with chatbots, leading to less accurate and sometimes dangerous health recommendations.

Study Finds AI Chatbots Can Mislead Users

Researchers at the Oxford Internet Institute asked around 1,300 U.K. participants to review fictional medical cases written by doctors. The volunteers used different AI models—including OpenAI's GPT-4o, Cohere's Command R+, and Meta's Llama 3—to identify potential conditions and decide what action to take, such as visiting a doctor or hospital.

Surprisingly, those using chatbots were less accurate in spotting medical issues than those using traditional methods like Google or personal judgment. In some cases, chatbot users even underestimated how serious a condition was.

Study co-author Adam Mahdi explained the problem: “Participants often left out important details in their chatbot queries. The responses they received mixed good and bad advice, making it hard to decide what to do.”

The issue, Mahdi said, stems from poor communication on both sides. Users don’t always phrase questions clearly, and AI systems don’t handle the complexity of human interaction well. Current chatbot evaluations, he added, don’t account for real-world situations.

Tech Giants Push Ahead Despite Concerns

Despite these warnings, major tech companies continue to invest in AI for healthcare. Apple is reportedly creating an AI assistant for fitness, diet, and sleep advice. Amazon is working on AI to explore social factors affecting health. Microsoft is developing tools that help care teams manage patient messages more efficiently.

Yet, both medical experts and patients remain cautious. The American Medical Association currently advises against using chatbots like ChatGPT for clinical decisions. Even OpenAI, the company behind ChatGPT, warns users not to rely on it for medical diagnoses.

Mahdi emphasized the need for caution: “Healthcare decisions should be based on trusted sources. AI chatbots must undergo real-world testing, just like new medications, before they’re ready for use.”
