The Conversation
Anne Cronin, PhD Candidate, Medicine, University of Limerick

AI translation is replacing interpreters in GP care – here’s why that’s troubling


When a doctor can’t find an interpreter, many now reach for Google Translate. It seems like a practical fix to a pressing problem. But a new study warns this quick solution may be putting refugee and migrant patients at serious risk – exposing them to translation errors that could lead to misdiagnosis, wrong treatment or worse.

The study, led by an interdisciplinary team of researchers at the University of Limerick – of which we were part – examined how artificial intelligence (AI) is being used to bridge language gaps between doctors and patients. The findings reveal a troubling pattern: AI translation tools are increasingly replacing human interpreters in GP surgeries, even though none of these apps have been tested for patient safety.

Anyone who has tried to explain themselves across a language barrier knows how easily meaning can slip away. In everyday situations – from the nail salon to the car mechanic – we often manage with gestures, guesses and good humour. But healthcare is different.

Communication between a patient and their doctor must be clear, accurate and safe. It is the cornerstone of good medical care, especially when symptoms, risks or treatment decisions are involved, and it allows patients to feel heard and to participate meaningfully in decisions about their own health.

When a patient and doctor do not speak the same language and rely instead on an AI translation app such as Google Translate, communication becomes less certain and more problematic. What appears to be a convenient solution may obscure important details at precisely the moment when clarity matters most.

The recognised standard for cross-cultural communication in healthcare is access to a trained interpreter. The role of an interpreter is to provide impartial support to both the patient and the doctor. In practice, however, interpreters are often out of reach because of limited availability, time pressures and scarce resources in general practice.

Consequently, doctors report that they increasingly turn to the device in their pocket – their phone – as a quick, improvised solution to bridge communication gaps during consultations. Google Translate is now being used as an interpreter substitute, despite not being designed for medical communication.

My colleagues and I examined international studies from 2017 to 2024 and found no evidence that an AI-powered tool can safely support the live, back-and-forth medical conversations needed in clinical consultations.

Image: a mobile phone with the Google Translate app open. Not designed for medical translation. Yaman2407/Shutterstock.com

Errors create serious risks

In all the studies we reviewed, doctors relied on Google Translate, and they consistently raised concerns about its limitations. These included inaccurate translations, failure to recognise medical terminology and the inability to handle conversations that unfold over multiple turns.

The studies reported translation errors that risk misdiagnosis, inappropriate treatment and, in some cases, serious harm. Worryingly, the research found no evidence that Google Translate has ever been tested for patient safety in general practice.

In other studies, Google Translate was shown to misinterpret key medical words and phrases. Terms such as congestion, drinking, feeding and gestation, as well as words for the vagina and other reproductive organs, were sometimes mistranslated in certain languages.

It also misinterpreted pronouns, numbers and gender, and struggled with dialects or accents, leading to confusing or inaccurate substitutions. Alarmingly, researchers also reported “hallucinations” – where the app produced fluent-sounding but entirely fabricated text.

Relying on Google Translate to support doctor-patient communication carries the risk of displacing human interpreters and creating an overdependence on AI tools that were not designed for medical interpretation. It also normalises the use of AI apps that have not undergone the safety testing expected of healthcare technologies.

It is difficult to imagine any other area of medical practice where such an untested approach would be considered acceptable.

The study found that refugee and migrant advocates prefer human interpreters, particularly in maternal healthcare and mental health. Patients also raised concerns about consenting to the use of AI and about where their personal information might be stored and how it might be used.

To deliver safe healthcare to refugees and migrants, doctors should ensure that patients have access to trained interpreters, whether in person, by video, or by phone. Clear instructions for accessing these interpreters must be available in every healthcare setting so that staff can arrange support quickly and confidently.

The evidence shows that AI tools not specifically designed and tested for medical interpreting should no longer be used, as they cannot yet provide safe or reliable communication in clinical situations.

The Conversation asked Google to comment on the issues raised by this article but received no reply.


Anthony Kelly receives funding from Innovation Fund Denmark.

Anne Cronin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
