Background: Access to interpreters for refugee and migrant patients who do not share the same language and culture as their GPs is considered a critical healthcare adaptation. However, interpreters are not routinely available in many healthcare settings, and artificial intelligence (AI) is increasingly used as a pragmatic alternative. The patient-safety implications of relying on AI for this purpose are under-researched.
Aim: To identify and map available evidence on AI-facilitated synchronous communication between refugee or migrant patients and their healthcare providers, focusing on the patient-safety implications.
Design & setting: A six-stage scoping review was undertaken, examining the international literature.
Method: A literature search of five relevant electronic databases and grey literature, covering July 2017 to October 2024, was conducted. Data were extracted and synthesised.
Results: A total of 220 articles spanning various healthcare contexts were screened, with five articles meeting the inclusion criteria. These studies report use of the AI tool Google Translate to address language barriers across diverse clinical settings, despite Google Translate not being designed to support synchronous communication or communication in medical contexts. Negative experiences of using the tool were reported more frequently than positive experiences. Clinicians raised specific concerns about the reliability of Google Translate for medical terminology, patient consent, and complex consultations.
Conclusion: There is no evidence that the use of Google Translate for synchronous communication of medical information to refugees and migrants has been evaluated for patient safety, highlighting the potential for translation inaccuracies to compromise patient safety. In clinical settings, where the stakes of failure are high, such inaccuracies can result in misdiagnosis, inappropriate treatment, and serious harm.
