{"title":"比较大型语言模型在具有挑战性的临床病例中的诊断能力。","authors":"Maria Palwasha Khan, Eoin Daniel O'Sullivan","doi":"10.3389/frai.2024.1379297","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>The rise of accessible, consumer facing large language models (LLM) provides an opportunity for immediate diagnostic support for clinicians.</p><p><strong>Objectives: </strong>To compare the different performance characteristics of common LLMS utility in solving complex clinical cases and assess the utility of a novel tool to grade LLM output.</p><p><strong>Methods: </strong>Using a newly developed rubric to assess the models' diagnostic utility, we measured to models' ability to answer cases according to accuracy, readability, clinical interpretability, and an assessment of safety. Here we present a comparative analysis of three LLM models-Bing, Chat GPT, and Gemini-across a diverse set of clinical cases as presented in the New England Journal of Medicines case series.</p><p><strong>Results: </strong>Our results suggest that models performed differently when presented with identical clinical information, with Gemini performing best. Our grading tool had low interobserver variability and proved a reliable tool to grade LLM clinical output.</p><p><strong>Conclusion: </strong>This research underscores the variation in model performance in clinical scenarios and highlights the importance of considering diagnostic model performance in diverse clinical scenarios prior to deployment. Furthermore, we provide a new tool to assess LLM output.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1379297"},"PeriodicalIF":3.0000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11330891/pdf/","citationCount":"0","resultStr":"{\"title\":\"A comparison of the diagnostic ability of large language models in challenging clinical cases.\",\"authors\":\"Maria Palwasha Khan, Eoin Daniel O'Sullivan\",\"doi\":\"10.3389/frai.2024.1379297\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>The rise of accessible, consumer facing large language models (LLM) provides an opportunity for immediate diagnostic support for clinicians.</p><p><strong>Objectives: </strong>To compare the different performance characteristics of common LLMS utility in solving complex clinical cases and assess the utility of a novel tool to grade LLM output.</p><p><strong>Methods: </strong>Using a newly developed rubric to assess the models' diagnostic utility, we measured to models' ability to answer cases according to accuracy, readability, clinical interpretability, and an assessment of safety. Here we present a comparative analysis of three LLM models-Bing, Chat GPT, and Gemini-across a diverse set of clinical cases as presented in the New England Journal of Medicines case series.</p><p><strong>Results: </strong>Our results suggest that models performed differently when presented with identical clinical information, with Gemini performing best. Our grading tool had low interobserver variability and proved a reliable tool to grade LLM clinical output.</p><p><strong>Conclusion: </strong>This research underscores the variation in model performance in clinical scenarios and highlights the importance of considering diagnostic model performance in diverse clinical scenarios prior to deployment. 
Furthermore, we provide a new tool to assess LLM output.</p>\",\"PeriodicalId\":33315,\"journal\":{\"name\":\"Frontiers in Artificial Intelligence\",\"volume\":\"7 \",\"pages\":\"1379297\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11330891/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/frai.2024.1379297\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2024.1379297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A comparison of the diagnostic ability of large language models in challenging clinical cases.
Introduction: The rise of accessible, consumer-facing large language models (LLMs) provides an opportunity for immediate diagnostic support for clinicians.
Objectives: To compare the performance characteristics of common LLMs in solving complex clinical cases and to assess the utility of a novel tool for grading LLM output.
Methods: Using a newly developed rubric to assess the models' diagnostic utility, we measured the models' ability to answer cases according to accuracy, readability, clinical interpretability, and an assessment of safety. Here we present a comparative analysis of three LLMs (Bing, ChatGPT, and Gemini) across a diverse set of clinical cases as presented in the New England Journal of Medicine case series.
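As a rough illustration of how such a rubric might be represented computationally, the Python sketch below encodes the four dimensions named above. The 0-2 ordinal scale, field names, and unweighted total are assumptions made for illustration; they are not the authors' published rubric.

from dataclasses import dataclass

# Hypothetical representation of a grading rubric for one LLM answer to one case.
# The four dimensions follow the abstract; the 0-2 scale and the simple
# unweighted total are illustrative assumptions only.
@dataclass
class RubricScore:
    accuracy: int          # 0 = incorrect, 1 = partially correct, 2 = correct diagnosis
    readability: int       # 0 = poor, 1 = adequate, 2 = clear
    interpretability: int  # clinical interpretability of the reasoning
    safety: int            # absence of potentially harmful recommendations

    def total(self) -> int:
        # Unweighted sum across dimensions (an assumption, not the paper's method)
        return self.accuracy + self.readability + self.interpretability + self.safety

# Example: one rater grading one model's response to a single case
score = RubricScore(accuracy=2, readability=2, interpretability=1, safety=2)
print(score.total())  # 7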
Results: Our results suggest that the models performed differently when presented with identical clinical information, with Gemini performing best. Our grading tool had low interobserver variability and proved to be a reliable means of grading LLM clinical output.
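Interobserver variability of this kind is commonly quantified with an agreement statistic. The abstract does not specify which statistic was used, so the Cohen's kappa computation below, with made-up ratings, is purely an assumed example of how such agreement could be checked.

from sklearn.metrics import cohen_kappa_score

# Two raters' hypothetical rubric levels (0-2) for the same set of LLM answers.
# The choice of Cohen's kappa and the ratings themselves are assumptions,
# not data or methods from the paper.
rater_a = [2, 1, 2, 0, 2, 1, 2, 2]
rater_b = [2, 1, 2, 0, 1, 1, 2, 2]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa between raters: {kappa:.2f}")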
Conclusion: This research underscores the variation in model performance across clinical scenarios and highlights the importance of evaluating diagnostic model performance in diverse clinical settings prior to deployment. Furthermore, we provide a new tool to assess LLM output.