Study warns against relying on chatbots for health advice

Published: 12.05.2025 17:46

Amid overloaded healthcare systems and rising pharmaceutical costs, more people are turning to AI chatbots for health advice. According to a recent survey, one in six American adults uses such services at least once a month.

However, a study by researchers at the Oxford Internet Institute warns that relying on AI advice can be dangerous.

The problem is not only that the chatbot itself may make errors, but also that users often don't know what information to provide for it to generate accurate advice.

Study co-author Adam Mahdi explains:
“We observed a two-way communication breakdown: users did not get better results than they would through standard searches.”

The experiment involved around 1,300 participants in the UK. They were given medical scenarios written by doctors and asked to identify potential conditions and necessary actions — for example, calling an ambulance or visiting a GP.

Participants used several AI models: GPT-4o (ChatGPT), Command R+ (Cohere), and Meta’s Llama 3. When interacting with chatbots, users made more mistakes and often underestimated symptom severity.

“Current AI evaluation methods don’t account for the complexity of real-life communication,” Mahdi said. He believes AI systems should undergo clinical trials like new medicines.

Meanwhile, major tech companies are actively integrating AI into healthcare. Apple is developing a nutrition and sleep advisor, Amazon is analyzing social determinants of health, and Microsoft is automating patient messaging.

Experts and medical professionals, however, remain skeptical. The American Medical Association recommends against using chatbots for clinical decision-making.

Even AI developers, including OpenAI, warn users not to rely on chatbots for medical diagnoses.
