Generative AI and Large Language Models in Reducing Medication Related Harm and Adverse Drug Events - A Scoping Review

Jasmine Chiat Ling Ong, Michael Chen, Ning Ng, Kabilan Elangovan, Nichole Yue Ting Tan, Liyuan Jin, Qihuang Xie, Daniel Shu Wei Ting, Rosa Rodriguez-Monguio, David Bates, Nan Liu

medRxiv - Health Informatics, published 2024-09-14. DOI: 10.1101/2024.09.13.24313606
Citations: 0
Abstract
Background: Medication-related harm imposes a substantial burden on global healthcare costs and patient outcomes, accounting for an estimated 4.3 deaths per 1000 patients. Generative artificial intelligence (GenAI) has emerged as a promising tool for mitigating the risk of medication-related harm. In particular, large language models (LLMs) and well-developed generative adversarial networks (GANs) have shown promise in healthcare-related tasks. This review aims to explore the scope and effectiveness of generative AI in reducing medication-related harm and to identify existing developments and challenges in the research. Methods: We searched PubMed, Web of Science, Embase, and Scopus for peer-reviewed articles published from January 2012 to February 2024. We included studies focusing on the development or application of generative AI to mitigate the risk of medication-related harm at any point in the medication use process. We excluded studies that used only traditional AI methods, were unrelated to healthcare settings, or concerned non-prescribed medication use such as supplements. Extracted variables included study characteristics, AI model specifics and performance, application settings, and any patient outcomes evaluated. Findings: A total of 2203 articles were identified, of which 14 met the criteria for inclusion in the final review. We found that generative AI and large language models were used in a few key applications: drug-drug interaction identification and prediction, clinical decision support, and pharmacovigilance. While the performance and utility of these models varied, they generally showed promise in areas such as early identification and classification of adverse drug events and support for decision-making in medication management. However, no studies tested these models prospectively, suggesting a need for further investigation into the integration and real-world application of generative AI tools to improve patient safety and healthcare outcomes.
Interpretation: Generative AI shows promise in mitigating medication-related harm, but gaps remain in research rigor and ethical considerations. Future research should focus on the creation of high-quality, task-specific benchmarking datasets for medication safety and on real-world implementation outcomes.