Diagnostic Accuracy of a Custom Large Language Model on Rare Pediatric Disease Case Reports
Cameron C. Young, Ellie Enichen, Christian Rivera, Corinne A. Auger, Nathan Grant, Arya Rao, Marc D. Succi
DOI: 10.1002/ajmg.a.63878 · Published 2024-09-13
Accurately diagnosing rare pediatric diseases frequently represents a clinical challenge due to their complex and unusual clinical presentations. Here, we explore the capabilities of three large language models (LLMs), GPT-4, Gemini Pro, and a custom-built LLM (GPT-4 integrated with the Human Phenotype Ontology [GPT-4 HPO]), by evaluating their diagnostic performance on 61 rare pediatric disease case reports. The LLMs were assessed for accuracy in identifying the specific diagnosis, in listing the correct diagnosis within a differential list, and in identifying the broad disease category. In addition, GPT-4 HPO was tested on 100 general pediatrics case reports previously used to assess other LLMs, to further validate its performance. The results indicated that GPT-4 predicted the correct diagnosis with an accuracy of 13.1%, whereas GPT-4 HPO and Gemini Pro each had a diagnostic accuracy of 8.2%. However, GPT-4 HPO outperformed the other two LLMs in identifying the correct diagnosis within its differential list and in identifying the broad disease category. Although these findings underscore the potential of LLMs for diagnostic support, particularly when enhanced with domain-specific ontologies, they also stress the need for further improvement before integration into clinical practice.
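To make the evaluation design concrete, the sketch below illustrates the two pieces the abstract describes: augmenting a case-report prompt with Human Phenotype Ontology (HPO) terms, as the custom GPT-4 HPO variant is described as doing, and the three-tier grading (exact diagnosis, correct diagnosis within the differential, broad disease category). This is not the authors' code: the HPO lookup table, prompt wording, function names, and example data are hypothetical, and exact string matching stands in for what was presumably clinician adjudication of model outputs.

```python
# Hedged sketch of an HPO-augmented prompt plus three-tier grading.
# All names and data here are illustrative, not the authors' pipeline.

from dataclasses import dataclass

# Hypothetical phenotype-to-HPO mapping; a real pipeline would query the full ontology.
HPO_LOOKUP = {
    "long palpebral fissures": "HP:0000637",
    "intellectual disability": "HP:0001249",
    "feeding difficulties": "HP:0011968",
}


def build_hpo_prompt(case_text: str, phenotypes: list[str]) -> str:
    """Prepend HPO-coded phenotypes to the case report before querying the model."""
    coded = [f"{p} ({HPO_LOOKUP[p]})" for p in phenotypes if p in HPO_LOOKUP]
    return (
        "Identified phenotypes: " + "; ".join(coded) + "\n\n"
        "Case report:\n" + case_text + "\n\n"
        "Provide the most likely diagnosis, a ranked differential, "
        "and the broad disease category."
    )


@dataclass
class Graded:
    reference_dx: str          # gold-standard diagnosis from the case report
    reference_category: str    # broad disease category
    top_dx: str                # model's single best diagnosis
    differential: list[str]    # model's ranked differential list
    predicted_category: str    # model's broad disease category


def _norm(s: str) -> str:
    return s.strip().lower()


def score(g: Graded) -> dict[str, bool]:
    """Three binary outcomes, tallied across the case reports to get accuracies."""
    return {
        "exact": _norm(g.top_dx) == _norm(g.reference_dx),
        "in_differential": any(_norm(d) == _norm(g.reference_dx) for d in g.differential),
        "category": _norm(g.predicted_category) == _norm(g.reference_category),
    }


if __name__ == "__main__":
    graded = [
        Graded(
            reference_dx="Kabuki syndrome",
            reference_category="genetic",
            top_dx="CHARGE syndrome",
            differential=["CHARGE syndrome", "Kabuki syndrome", "Noonan syndrome"],
            predicted_category="genetic",
        ),
    ]
    for key in ("exact", "in_differential", "category"):
        acc = sum(score(g)[key] for g in graded) / len(graded)
        print(f"{key}: {acc:.1%}")
```

Under this scheme, a model can score 0% on exact diagnosis while still being credited for placing the correct diagnosis in its differential or naming the right disease category, which mirrors how GPT-4 HPO trailed on exact accuracy yet led on the two broader measures.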