
Study Warns AI Chatbots Pose Risks with Inaccurate Medical Advice

A University of Oxford study finds AI chatbots provide inconsistent medical advice, posing risks to users. Experts call for improved AI systems and clear regulations in healthcare applications.


AI Chatbots and Medical Advice Risks

AI chatbots provide medical advice that can be inaccurate and inconsistent, potentially posing risks to users, according to research conducted by the University of Oxford.

The study revealed that individuals seeking healthcare guidance from AI received a combination of both accurate and misleading responses, complicating their ability to discern which advice to trust.

In November 2025, a poll by Mental Health UK indicated that over one-third of UK residents currently use AI to support their mental health or wellbeing.

Dr Rebecca Payne, the lead medical practitioner involved in the study, warned that it could be "dangerous" for people to ask chatbots about their symptoms.

Study Methodology and Findings

Researchers presented 1,300 participants with various scenarios, such as experiencing a severe headache or being a new mother feeling constantly exhausted.

Participants were divided into two groups, with one group utilizing AI to help interpret their symptoms and decide on subsequent actions.

The researchers assessed whether participants correctly identified possible conditions and determined if they should consult a general practitioner (GP) or visit accident and emergency (A&E) services.


Findings indicated that individuals using AI often struggled to formulate appropriate questions and received diverse answers depending on how they phrased their inquiries.

The chatbot responses included a mix of information, making it difficult for users to distinguish between helpful and unhelpful advice.

Expert Insights on AI Limitations

Dr Adam Mahdi, senior author of the study, explained to the BBC that while AI can provide medical information, users often find it challenging to obtain useful advice.

"People share information gradually," he said.
"They leave things out, they don't mention everything. So, in our study, when the AI listed three possible conditions, people were left to guess which of those could fit.
"This is exactly when things would fall apart."

Lead author Andrew Bean noted that the study highlights the difficulty AI faces when interacting with humans, even for advanced models.

"We hope this work will contribute to the development of safer and more useful AI systems," Bean said.

Future Developments and Regulatory Perspectives

Dr Bertalan Meskó, editor of The Medical Futurist, which forecasts technology trends in healthcare, commented on upcoming advancements in this field.

He noted that two major AI developers, OpenAI and Anthropic, recently launched health-focused versions of their general chatbots, which he believes would "definitely yield different results in a similar study."

Dr Meskó emphasized the importance of continuous improvement in AI technology, particularly in health-related applications, alongside clear national regulations and medical guidelines.

"The goal should be to keep on improving the tech, especially health-related versions, with clear national regulations, regulatory guardrails and medical guidelines," he said.


This article was sourced from the BBC.
