Thomas P Reith, Donna M D'Alessandro, Michael P D'Alessandro
Capability of multimodal large language models to interpret pediatric radiological images.
Background: There is a dearth of artificial intelligence (AI) development and research dedicated to pediatric radiology. The newest iterations of large language models (LLMs) like ChatGPT can process image and video input in addition to text. They are thus theoretically capable of providing impressions of input radiological images.
Objective: To assess the ability of multimodal LLMs to interpret pediatric radiological images.
Materials and methods: Thirty medically significant cases were collected and submitted, each with a short clinical history, to GPT-4 (OpenAI, San Francisco, CA), Gemini 1.5 Pro (Google, Mountain View, CA), and Claude 3 Opus (Anthropic, San Francisco, CA), for a total of 90 image submissions. AI responses were recorded and independently assessed for accuracy by a resident and an attending physician. 95% confidence intervals were determined using the adjusted Wald method.
Results: Overall, the models correctly diagnosed 27.8% (25/90) of images (95% CI=19.5-37.8%), were partially correct for 13.3% (12/90) of images (95% CI=2.7-26.4%), and were incorrect for 58.9% (53/90) of images (95% CI=48.6-68.5%).
Conclusion: Multimodal LLMs are not yet capable of interpreting pediatric radiological images.
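The adjusted Wald interval named in the methods (also known as the Agresti-Coull interval) adds z²/2 pseudo-successes and z²/2 pseudo-failures to the observed counts before applying the standard Wald formula. A minimal sketch of this calculation, applied to the reported 25/90 correct diagnoses, reproduces the paper's 19.5-37.8% interval:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) 95% confidence interval for a proportion.

    Adds z^2/2 pseudo-successes and z^2/2 pseudo-failures to the counts,
    then applies the standard Wald formula to the adjusted proportion.
    """
    n_adj = n + z ** 2                       # adjusted sample size
    p_adj = (successes + z ** 2 / 2) / n_adj  # adjusted proportion
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - margin, p_adj + margin

# Correct diagnoses: 25 of 90 image submissions
lo, hi = adjusted_wald_ci(25, 90)
print(f"{lo:.1%} - {hi:.1%}")  # 19.5% - 37.8%
```

The same function applied to the 53/90 incorrect responses yields 48.6-68.5%, matching the reported interval for that proportion as well.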
Journal introduction:
Official Journal of the European Society of Pediatric Radiology, the Society for Pediatric Radiology and the Asian and Oceanic Society for Pediatric Radiology
Pediatric Radiology informs its readers of new findings and progress in all areas of pediatric imaging and in related fields. This is achieved by a blend of original papers, complemented by reviews that set out the present state of knowledge in a particular area of the specialty or summarize specific topics in which discussion has led to clear conclusions. Advances in technology, methodology, apparatus and auxiliary equipment are presented, and modifications of standard techniques are described.
Manuscripts submitted for publication must contain a statement to the effect that all human studies have been reviewed by the appropriate ethics committee and have therefore been performed in accordance with the ethical standards laid down in an appropriate version of the 1964 Declaration of Helsinki. It should also be stated clearly in the text that all persons gave their informed consent prior to their inclusion in the study. Details that might disclose the identity of the subjects under study should be omitted.