{"title":"A Framework to Assess Clinical Safety and Hallucination Rates of LLMs for Medical Text Summarisation","authors":"Elham Asgari, Nina Montana-Brown, Magda Dubois, Saleh Khalil, Jasmine Balloch, Dominic Pimenta","doi":"10.1101/2024.09.12.24313556","DOIUrl":null,"url":null,"abstract":"The integration of large language models (LLMs) into healthcare settings holds great promise for improving clinical workflow efficiency and enhancing patient care, with the potential to automate tasks such as text summarisation during consultations. The fidelity between LLM outputs and ground truth information is therefore paramount in healthcare, as errors in medical summary generation can lead to miscommunication between patients and clinicians, leading to incorrect diagnosis and treatment decisions and compromising patient safety. LLMs are well-known to produce a variety of errors. Currently, there is no established clinical framework for assessing the safety and accuracy of LLM-generated medical text.\nWe have developed a new approach to: a) categorise LLM errors within the clinical documentation context, b) establish clinical safety metrics for the live usage phase, and c) suggest a framework named CREOLA for assessing the safety risk for errors. We present clinical error metrics over 18 different LLM experimental configurations for the clinical note generation task, consisting of 12,999 clinician-annotated sentences. We illustrate the utility of using our platform CREOLA for iteration over LLM architectures with two experiments. Overall, we find our best-performing experiments outperform previously reported model error rates in the note generation literature, and additionally outperform human annotators. Our suggested framework can be used to assess the accuracy and safety of LLM output in the clinical context.","PeriodicalId":501454,"journal":{"name":"medRxiv - Health Informatics","volume":"39 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Health Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.09.12.24313556","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The integration of large language models (LLMs) into healthcare settings holds great promise for improving clinical workflow efficiency and enhancing patient care, with the potential to automate tasks such as text summarisation during consultations. The fidelity between LLM outputs and ground truth information is therefore paramount in healthcare: errors in medical summary generation can cause miscommunication between patients and clinicians, resulting in incorrect diagnosis and treatment decisions and compromising patient safety. LLMs are well known to produce a variety of errors. Currently, there is no established clinical framework for assessing the safety and accuracy of LLM-generated medical text.
We have developed a new approach to: a) categorise LLM errors within the clinical documentation context, b) establish clinical safety metrics for the live usage phase, and c) suggest a framework, named CREOLA, for assessing the safety risk of errors. We present clinical error metrics for 18 different LLM experimental configurations on the clinical note generation task, based on 12,999 clinician-annotated sentences. We illustrate the utility of our platform CREOLA for iterating over LLM architectures with two experiments. Overall, we find that our best-performing configurations achieve lower error rates than those previously reported in the note generation literature, and additionally outperform human annotators. Our suggested framework can be used to assess the accuracy and safety of LLM output in the clinical context.
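To make concrete what sentence-level "clinical error metrics" could look like, the sketch below shows one plausible way to aggregate clinician-annotated sentences into per-configuration error rates. This is an illustrative Python example only, not the authors' code: the label names ("correct", "hallucination"), the data layout, and the configuration identifiers are assumptions, since the paper's actual error taxonomy and metric definitions are given in the full text rather than in this abstract.

    # Minimal sketch (assumed structure, not the CREOLA implementation):
    # aggregate sentence-level clinician labels into per-configuration error rates.
    from collections import Counter
    from typing import Dict, List, Tuple

    # Each annotation pairs an experimental configuration with a sentence label.
    Annotation = Tuple[str, str]

    def error_rates(annotations: List[Annotation]) -> Dict[str, Dict[str, float]]:
        """For each configuration, return the fraction of annotated sentences per label."""
        by_config: Dict[str, Counter] = {}
        for config, label in annotations:
            by_config.setdefault(config, Counter())[label] += 1
        return {
            config: {label: n / sum(counts.values()) for label, n in counts.items()}
            for config, counts in by_config.items()
        }

    # Usage with two hypothetical configurations and illustrative labels:
    sample = [
        ("config_A", "correct"), ("config_A", "hallucination"),
        ("config_B", "correct"), ("config_B", "correct"),
    ]
    print(error_rates(sample))
    # {'config_A': {'correct': 0.5, 'hallucination': 0.5}, 'config_B': {'correct': 1.0}}

In this kind of setup, comparing configurations reduces to comparing the resulting per-label rates across the annotated corpus; how individual error types are then mapped to clinical safety risk is the subject of the framework described in the paper itself.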