{"title":"Optimizing Large Language Models in Radiology and Mitigating Pitfalls: Prompt Engineering and Fine-tuning.","authors":"Theodore Taehoon Kim, Michael Makutonin, Reza Sirous, Ramin Javan","doi":"10.1148/rg.240073","DOIUrl":null,"url":null,"abstract":"<p><p>Large language models (LLMs) such as generative pretrained transformers (GPTs) have had a major impact on society, and there is increasing interest in using these models for applications in medicine and radiology. This article presents techniques to optimize these models and describes their known challenges and limitations. Specifically, the authors explore how to best craft natural language prompts, a process known as prompt engineering, for these models to elicit more accurate and desirable responses. The authors also explain how fine-tuning is conducted, in which a more general model, such as GPT-4, is further trained on a more specific use case, such as summarizing clinical notes, to further improve reliability and relevance. Despite the enormous potential of these models, substantial challenges limit their widespread implementation. These tools differ substantially from traditional health technology in their complexity and their probabilistic and nondeterministic nature, and these differences lead to issues such as \"hallucinations,\" biases, lack of reliability, and security risks. Therefore, the authors provide radiologists with baseline knowledge of the technology underpinning these models and an understanding of how to use them, in addition to exploring best practices in prompt engineering and fine-tuning. Also discussed are current proof-of-concept use cases of LLMs in the radiology literature, such as in clinical decision support and report generation, and the limitations preventing their current adoption in medicine and radiology. <sup>©</sup>RSNA, 2025 See invited commentary by Chung and Mongan in this issue.</p>","PeriodicalId":54512,"journal":{"name":"Radiographics","volume":"45 4","pages":"e240073"},"PeriodicalIF":5.2000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiographics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1148/rg.240073","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Large language models (LLMs) such as generative pretrained transformers (GPTs) have had a major impact on society, and there is increasing interest in using these models for applications in medicine and radiology. This article presents techniques to optimize these models and describes their known challenges and limitations. Specifically, the authors explore how to best craft natural language prompts, a process known as prompt engineering, for these models to elicit more accurate and desirable responses. The authors also explain how fine-tuning is conducted, in which a more general model, such as GPT-4, is further trained on a more specific use case, such as summarizing clinical notes, to improve reliability and relevance. Despite the enormous potential of these models, substantial challenges limit their widespread implementation. These tools differ substantially from traditional health technology in their complexity and their probabilistic and nondeterministic nature, and these differences lead to issues such as "hallucinations," biases, lack of reliability, and security risks. Therefore, the authors provide radiologists with baseline knowledge of the technology underpinning these models and an understanding of how to use them, in addition to exploring best practices in prompt engineering and fine-tuning. Also discussed are current proof-of-concept use cases of LLMs in the radiology literature, such as in clinical decision support and report generation, and the limitations preventing their current adoption in medicine and radiology. ©RSNA, 2025. See invited commentary by Chung and Mongan in this issue.
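The abstract describes two optimization techniques, prompt engineering and fine-tuning, at a conceptual level. The sketches below are minimal, hypothetical illustrations only, assuming the OpenAI Python SDK (v1.x) and a GPT-4-class model; the model names, prompt wording, sample report, and file name are assumptions and are not taken from the article.

```python
# Prompt-engineering sketch: a role-setting system prompt, an explicit output
# instruction, and a low temperature to reduce response variability.
# All prompt text and the sample report below are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a board-certified radiologist. Answer concisely, quote the "
    "relevant finding from the report, and state uncertainty explicitly "
    "rather than guessing."
)
report = ("CT abdomen/pelvis: 9 mm obstructing calculus at the left "
          "ureterovesical junction with moderate hydronephrosis.")

response = client.chat.completions.create(
    model="gpt-4o",   # assumption: any GPT-4-class chat model
    temperature=0,    # deterministic-leaning output for clinical summaries
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": f"Summarize the key finding for the referring clinician:\n{report}"},
    ],
)
print(response.choices[0].message.content)
```

Fine-tuning, as the abstract notes, further trains a general model on a task-specific dataset such as clinical-note summaries. A hedged sketch of launching such a job with the same SDK, assuming a chat-formatted JSONL training file (the file name and base model are placeholders):

```python
# Fine-tuning sketch: upload a JSONL training file, then start a supervised
# fine-tuning job. Each JSONL line holds one example in chat format:
# {"messages": [{"role": "system", ...}, {"role": "user", ...}, {"role": "assistant", ...}]}
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("radiology_note_summaries.jsonl", "rb"),  # placeholder dataset
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumption: a fine-tunable base model
)
print(job.id, job.status)  # poll status until the fine-tuned model is ready
```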