
Radiology-Artificial Intelligence: Latest Publications

"You'll Never Look Alone": Embedding Second-Look AI into the Radiologist's Workflow. “你永远不会孤单”:将第二眼人工智能嵌入放射科医生的工作流程。
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.250575
Riccardo Levi, Andrea Laghi
{"title":"\"You'll Never Look Alone\": Embedding Second-Look AI into the Radiologist's Workflow.","authors":"Riccardo Levi, Andrea Laghi","doi":"10.1148/ryai.250575","DOIUrl":"10.1148/ryai.250575","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 5","pages":"e250575"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144971999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240833
Zhongyi Zhang, Julie A Hides, Enrico De Martino, Janet R Millner, Gervase Tuxworth

Chronic low back pain is a global health issue with considerable socioeconomic burdens and is associated with changes in lumbar paraspinal muscles (LPMs). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MR images. A total of 1302 MR images from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and intraclass correlation coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (P < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MR images. Keywords: MR-Imaging, Muscular, Volume Analysis, Segmentation, Vision, Application Domain, Quantification, Supervised Learning Type of Machine Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms Supplemental material is available for this article. © RSNA, 2025.
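For reference, the Dice similarity coefficient (DSC) used above to compare automated and manual segmentations has a standard voxelwise definition. A minimal NumPy sketch follows; the function name and toy masks are illustrative, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |P intersect R| / (|P| + |R|), from 0 (no overlap) to 1 (perfect).
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy example: two 3D masks that mostly agree
rng = np.random.default_rng(0)
auto = rng.random((64, 64, 32)) > 0.5
manual = auto.copy()
manual[:4] = ~manual[:4]  # perturb a slab to simulate rater disagreement
print(f"DSC = {dice_coefficient(auto, manual):.3f}")
```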

{"title":"Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.","authors":"Zhongyi Zhang, Julie A Hides, Enrico De Martino, Janet R Millner, Gervase Tuxworth","doi":"10.1148/ryai.240833","DOIUrl":"10.1148/ryai.240833","url":null,"abstract":"<p><p>Chronic low back pain is a global health issue with considerable socioeconomic burdens and is associated with changes in lumbar paraspinal muscles (LPMs). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MR images. A total of 1302 MR images from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and intraclass correlation coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (<i>P</i> < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MR images. <b>Keywords:</b> MR-Imaging, Muscular, Volume Analysis, Segmentation, Vision, Application Domain, Quantification, Supervised Learning Type of Machine Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240833"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144971977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-Performance Open-Source AI for Breast Cancer Detection and Localization in MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240550
Lukas Hirsch, Elizabeth J Sutton, Yu Huang, Beliz Kayis, Mary Hughes, Danny Martinez, Hernan A Makse, Lucas C Parra

Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI scans. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRI examinations conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRI scans from the primary site (n = 6615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (n = 7058 breasts) and a second clinical site (n = 1840 breasts). Results The primary site dataset included 30 672 sagittal MRI examinations (52 598 breasts) from 9986 female patients (mean age, 52.1 years ± 11.2 [SD]). The model achieved an area under the receiver operating characteristic curve of 0.95 for detecting cancer in the primary site. At 90% specificity (5717 of 6353), model sensitivity was 83% (217 of 262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an area under the receiver operating characteristic curve of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232 of 262) of sagittal images, 92.8% (272 of 293) of axial images from the primary site, and 87.7% (807 of 920) of secondary site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation. Keywords: Computer-aided Diagnosis (CAD), MRI, Neural Networks, Breast Supplemental material is available for this article. See also commentary by Moassefi and Xiao in this issue. © RSNA, 2025.
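The operating point reported above (sensitivity at 90% specificity) can be read directly off an ROC curve. A scikit-learn sketch follows, using synthetic scores as stand-ins for the study's model outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
# Synthetic stand-ins: 1 = cancer, 0 = benign/normal breast
y_true = rng.integers(0, 2, size=2000)
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, size=2000), 0, 1)

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")

# Sensitivity at a fixed 90% specificity (i.e., false-positive rate = 0.10)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
idx = np.searchsorted(fpr, 0.10, side="right") - 1  # largest FPR <= 0.10
print(f"Sensitivity at 90% specificity: {tpr[idx]:.2f} "
      f"(score threshold = {thresholds[idx]:.3f})")
```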

{"title":"High-Performance Open-Source AI for Breast Cancer Detection and Localization in MRI.","authors":"Lukas Hirsch, Elizabeth J Sutton, Yu Huang, Beliz Kayis, Mary Hughes, Danny Martinez, Hernan A Makse, Lucas C Parra","doi":"10.1148/ryai.240550","DOIUrl":"10.1148/ryai.240550","url":null,"abstract":"<p><p>Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI scans. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRI examinations conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRI scans from the primary site (<i>n</i> = 6615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (<i>n</i> = 7058 breasts) and a second clinical site (<i>n</i> = 1840 breasts). Results The primary site dataset included 30 672 sagittal MRI examinations (52 598 breasts) from 9986 female patients (mean age, 52.1 years ± 11.2 [SD]). The model achieved an area under the receiver operating characteristic curve of 0.95 for detecting cancer in the primary site. At 90% specificity (5717 of 6353), model sensitivity was 83% (217 of 262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an area under the receiver operating characteristic curve of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232 of 262) of sagittal images, 92.8% (272 of 293) of axial images from the primary site, and 87.7% (807 of 920) of secondary site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation. <b>Keywords:</b> Computer-aided Diagnosis (CAD), MRI, Neural Networks, Breast <i>Supplemental material is available for this article.</i> See also commentary by Moassefi and Xiao in this issue. © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240550"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464713/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144486216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MR-Transformer: A Vision Transformer-based Deep Learning Model for Total Knee Replacement Prediction Using MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240373
Chaojie Zhang, Shengjia Chen, Ozkan Cigdem, Haresh Rengaraj Rajamohan, Kyunghyun Cho, Richard Kijowski, Cem M Deniz

Purpose To develop a transformer-based deep learning model, MR-Transformer, that leverages ImageNet pretraining and three-dimensional spatial correlations to predict the progression of knee osteoarthritis to total knee replacement using MRI. Materials and Methods This retrospective study included 353 case-control matched pairs of coronal intermediate-weighted turbo spin-echo (COR-IW-TSE) and sagittal intermediate-weighted turbo spin-echo with fat suppression (SAG-IW-TSE-FS) knee MRI scans from the Osteoarthritis Initiative database, with a follow-up period up to 9 years, and 270 case-control matched pairs of coronal short-tau inversion recovery (COR-STIR) and sagittal proton-density fat-saturated (SAG-PD-FAT-SAT) knee MRI scans from the Multicenter Osteoarthritis Study database, with a follow-up period up to 7 years. Performance of the MR-Transformer to predict the progression of knee osteoarthritis was compared with that of existing state-of-the-art deep learning models (TSE-Net, 3DMeT, and MRNet) using sevenfold nested cross-validation across the four MRI tissue sequences. Results Among the 353 Osteoarthritis Initiative case-control pairs, 215 were women (mean age, 63 years ± 8 [SD]); among the 270 Multicenter Osteoarthritis Study case-control pairs, 203 were women (mean age, 65 years ± 7). The MR-Transformer achieved areas under the receiver operating characteristic curve (AUCs) of 0.88 (95% CI: 0.85, 0.91), 0.88 (95% CI: 0.85, 0.90), 0.86 (95% CI: 0.82, 0.89), and 0.84 (95% CI: 0.81, 0.87) for COR-IW-TSE, SAG-IW-TSE-FS, COR-STIR, and SAG-PD-FAT-SAT, respectively. The model achieved a higher AUC than that of 3DMeT for all MRI sequences (P < .001). The model showed the highest sensitivity of 83% (95% CI: 78, 87) and specificity of 83% (95% CI: 76, 88) for the COR-IW-TSE MRI sequence. Conclusion Compared with the existing deep learning models, the MR-Transformer exhibited state-of-the-art performance in predicting the progression of knee osteoarthritis to total knee replacement using MRI scans. Keywords: MRI, Knee, Prognosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2025.
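Nested cross-validation, as used for the model comparison above, tunes hyperparameters in an inner loop while estimating performance in an outer loop so that the reported metric is not biased by the tuning. A generic scikit-learn sketch follows, with a stand-in classifier and synthetic data rather than the MR-Transformer and MRI volumes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=700, n_features=20, random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=7, shuffle=True, random_state=0)  # sevenfold outer loop

# Inner loop: hyperparameter search; outer loop: unbiased AUC estimate
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="roc_auc",
    cv=inner,
)
scores = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)
print(f"Nested CV AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```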

{"title":"MR-Transformer: A Vision Transformer-based Deep Learning Model for Total Knee Replacement Prediction Using MRI.","authors":"Chaojie Zhang, Shengjia Chen, Ozkan Cigdem, Haresh Rengaraj Rajamohan, Kyunghyun Cho, Richard Kijowski, Cem M Deniz","doi":"10.1148/ryai.240373","DOIUrl":"10.1148/ryai.240373","url":null,"abstract":"<p><p>Purpose To develop a transformer-based deep learning model-MR-Transformer-that leverages ImageNet pretraining and three-dimensional spatial correlations to predict the progression of knee osteoarthritis to total knee replacement using MRI. Materials and Methods This retrospective study included 353 case-control matched pairs of coronal intermediate-weighted turbo spin-echo (COR-IW-TSE) and sagittal intermediate-weighted turbo spin-echo with fat suppression (SAG-IW-TSE-FS) knee MRI scans from the Osteoarthritis Initiative database, with a follow-up period up to 9 years, and 270 case-control matched pairs of coronal short-tau inversion recovery (COR-STIR) and sagittal proton-density fat-saturated (SAG-PD-FAT-SAT) knee MRI scans from the Multicenter Osteoarthritis Study database, with a follow-up period up to 7 years. Performance of the MR-Transformer to predict the progression of knee osteoarthritis was compared with that of existing state-of-the-art deep learning models (TSE-Net, 3DMeT, and MRNet) using sevenfold nested cross-validation across the four MRI tissue sequences. Results Among the 353 Osteoarthritis Initiative case-control pairs, 215 were women (mean age, 63 years ± 8 [SD]); among the 270 Multicenter Osteoarthritis Study case-control pairs, 203 were women (mean age, 65 years ± 7). The MR-Transformer achieved areas under the receiver operating characteristic curve (AUCs) of 0.88 (95% CI: 0.85, 0.91), 0.88 (95% CI: 0.85, 0.90), 0.86 (95% CI: 0.82, 0.89), and 0.84 (95% CI: 0.81, 0.87) for COR-IW-TSE, SAG-IW-TSE-FS, COR-STIR, and SAG-PD-FAT-SAT, respectively. The model achieved a higher AUC than that of 3DMeT for all MRI sequences (<i>P</i> < .001). The model showed the highest sensitivity of 83% (95% CI: 78, 87) and specificity of 83% (95% CI: 76, 88) for the COR-IW-TSE MRI sequence. Conclusion Compared with the existing deep learning models, the MR-Transformer exhibited state-of-the-art performance in predicting the progression of knee osteoarthritis to total knee replacement using MRI scans. <b>Keywords:</b> MRI, Knee, Prognosis, Supervised Learning <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240373"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464714/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144643694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sections Don't Lie: AI-driven Breast Cancer Detection Using MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-09-01 DOI: 10.1148/ryai.250520
Mana Moassefi, Lekui Xiao
{"title":"Sections Don't Lie: AI-driven Breast Cancer Detection Using MRI.","authors":"Mana Moassefi, Lekui Xiao","doi":"10.1148/ryai.250520","DOIUrl":"10.1148/ryai.250520","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 5","pages":"e250520"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144790145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Single Inspiratory Chest CT-based Generative Deep Learning Models to Evaluate Functional Small Airways Disease.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240680
Di Zhang, Mingyue Zhao, Xiuxiu Zhou, Yiwei Li, Yu Guan, Yi Xia, Jin Zhang, Qi Dai, Jingfeng Zhang, Li Fan, S Kevin Zhou, Shiyuan Liu

Purpose To develop a deep learning model that uses a single inspiratory chest CT scan to perform parametric response mapping (PRM) and predict functional small airways disease (fSAD). Materials and Methods In this retrospective study, predictive and generative deep learning models for PRM using inspiratory chest CT were developed using a model development dataset with fivefold cross-validation, with PRM derived from paired respiratory CT as the reference standard. Voxelwise metrics, including sensitivity, area under the receiver operating characteristic curve (AUC), and structural similarity index measure, were used to evaluate model performance in predicting PRM and generating expiratory CT images. The best-performing model was tested on three internal test sets and an external test set. Results The model development dataset of 308 individuals (median age, 67 years [IQR: 62-70 years]; 113 female) was divided into the training set (n = 216), the internal validation set (n = 31), and the first internal test set (n = 61). The generative model outperformed the predictive model in detecting fSAD (sensitivity, 86.3% vs 38.9%; AUC, 0.86 vs 0.70). The generative model performed well in the second internal (AUCs of 0.64, 0.84, and 0.97 for emphysema, fSAD, and normal lung tissue, respectively), the third internal (AUCs of 0.63, 0.83, and 0.97), and the external (AUCs of 0.58, 0.85, and 0.94) test sets. Notably, the model exhibited exceptional performance in the preserved ratio impaired spirometry group of the fourth internal test set (AUCs of 0.62, 0.88, and 0.96). Conclusion The proposed generative model, using a single inspiratory CT scan, outperformed existing algorithms in PRM evaluation and achieved comparable results to paired respiratory CT. Keywords: CT, Lung, Chronic Obstructive Pulmonary Disease, Diagnosis, Reconstruction Algorithms, Deep Learning, Parametric Response Mapping, X-ray Computed Tomography, Small Airways Supplemental material is available for this article. © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also the commentary by Hathaway and Singh in this issue.
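Parametric response mapping classifies each voxel from its paired inspiratory and expiratory attenuation. The abstract does not restate the thresholds, so the sketch below uses the values commonly cited in the PRM literature (-950 HU on inspiration, -856 HU on expiration) and leaves voxels meeting neither criterion as "normal" for simplicity; real pipelines also mask to lung parenchyma and require registered scans.

```python
import numpy as np

def prm_classify(insp_hu: np.ndarray, exp_hu: np.ndarray) -> np.ndarray:
    """Voxelwise PRM classes from registered inspiratory/expiratory CT (in HU).

    0 = normal, 1 = fSAD (air trapping), 2 = emphysema, using the commonly
    cited PRM convention:
      emphysema: insp < -950 and exp < -856
      fSAD:      insp >= -950 and exp < -856
      normal:    exp >= -856
    """
    prm = np.zeros(insp_hu.shape, dtype=np.uint8)
    trapped = exp_hu < -856
    prm[trapped & (insp_hu >= -950)] = 1  # fSAD
    prm[trapped & (insp_hu < -950)] = 2   # emphysema
    return prm

# Toy example on two voxels: one fSAD, one emphysematous
insp = np.array([-900.0, -970.0])
exp_ = np.array([-870.0, -880.0])
print(prm_classify(insp, exp_))  # -> [1 2]
```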

{"title":"Single Inspiratory Chest CT-based Generative Deep Learning Models to Evaluate Functional Small Airways Disease.","authors":"Di Zhang, Mingyue Zhao, Xiuxiu Zhou, Yiwei Li, Yu Guan, Yi Xia, Jin Zhang, Qi Dai, Jingfeng Zhang, Li Fan, S Kevin Zhou, Shiyuan Liu","doi":"10.1148/ryai.240680","DOIUrl":"10.1148/ryai.240680","url":null,"abstract":"<p><p>Purpose To develop a deep learning model that uses a single inspiratory chest CT scan to perform parametric response mapping (PRM) and predict functional small airways disease (fSAD). Materials and Methods In this retrospective study, predictive and generative deep learning models for PRM using inspiratory chest CT were developed using a model development dataset with fivefold cross-validation, with PRM derived from paired respiratory CT as the reference standard. Voxelwise metrics, including sensitivity, area under the receiver operating characteristic curve (AUC), and structural similarity index measure, were used to evaluate model performance in predicting PRM and generating expiratory CT images. The best-performing model was tested on three internal test sets and an external test set. Results The model development dataset of 308 individuals (median age, 67 years [IQR: 62-70 years]; 113 female) was divided into the training set (<i>n</i> = 216), the internal validation set (<i>n</i> = 31), and the first internal test set (<i>n</i> = 61). The generative model outperformed the predictive model in detecting fSAD (sensitivity, 86.3% vs 38.9%; AUC, 0.86 vs 0.70). The generative model performed well in the second internal (AUCs of 0.64, 0.84, and 0.97 for emphysema, fSAD, and normal lung tissue, respectively), the third internal (AUCs of 0.63, 0.83, and 0.97), and the external (AUCs of 0.58, 0.85, and 0.94) test sets. Notably, the model exhibited exceptional performance in the preserved ratio impaired spirometry group of the fourth internal test set (AUCs of 0.62, 0.88, and 0.96). Conclusion The proposed generative model, using a single inspiratory CT scan, outperformed existing algorithms in PRM evaluation and achieved comparable results to paired respiratory CT. <b>Keywords:</b> CT, Lung, Chronic Obstructive Pulmonary Disease, Diagnosis, Reconstruction Algorithms, Deep Learning, Parametric Response Mapping, X-ray Computed Tomography, Small Airways <i>Supplemental material is available for this article.</i> © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also the commentary by Hathaway and Singh in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240680"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144643706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240485
Ashkan Moradi, Fadila Zerka, Joeran Sander Bosma, Mohammed R S Sunoqrot, Bendik S Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods A retrospective study was conducted using Flower FL (Flower.ai) to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MR images (four clients, 1294 patients) and csPCa detection using biparametric MR images (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. P values for performance differences were calculated using permutation testing. Results The FL configurations were independently optimized for both tasks, showing improved performance at 1 epoch (300 rounds) using FedMedian for prostate segmentation and 5 epochs (200 rounds) using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score, increase from 0.73 ± 0.06 [SD] to 0.88 ± 0.03; P ≤ .01) and csPCa detection (PI-CAI score, increase from 0.63 ± 0.07 to 0.74 ± 0.06; P ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PI-CAI score, increase from 0.72 ± 0.06 to 0.74 ± 0.06; P ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 0.03; P > .05). Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance. Keywords: Federated Learning, Prostate Cancer, MRI, Cancer Detection, Deep Learning Supplemental material is available for this article. © RSNA, 2025.
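FedMedian, the aggregation strategy that worked best for segmentation here, replaces FedAvg's weighted mean with a coordinate-wise median over client parameters, which is more robust to outlier updates. A minimal NumPy sketch of the aggregation step follows (plain arrays, not the Flower API the study used).

```python
import numpy as np

def fed_median(client_weights: list[list[np.ndarray]]) -> list[np.ndarray]:
    """Coordinate-wise median across clients, layer by layer.

    client_weights[i] is the list of parameter arrays from client i;
    all clients must share the same layer shapes.
    """
    return [
        np.median(np.stack(layers, axis=0), axis=0)
        for layers in zip(*client_weights)
    ]

# Three toy clients, each holding a two-layer "model"
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
global_weights = fed_median(clients)
print([w.shape for w in global_weights])  # -> [(4, 4), (4,)]
```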

{"title":"Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.","authors":"Ashkan Moradi, Fadila Zerka, Joeran Sander Bosma, Mohammed R S Sunoqrot, Bendik S Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot","doi":"10.1148/ryai.240485","DOIUrl":"10.1148/ryai.240485","url":null,"abstract":"<p><p>Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods A retrospective study was conducted using Flower FL (Flower.ai) to train a nnU-Net-based architecture for MRI prostate segmentation and csPCa detection using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MR images (four clients, 1294 patients) and csPCa detection using biparametric MR images (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and average precision, for csPCa detection. <i>P</i> values for performance differences were calculated using permutation testing. Results The FL configurations were independently optimized for both tasks, showing improved performance at 1 epoch (300 rounds) using FedMedian for prostate segmentation and 5 epochs (200 rounds) using FedAdagrad, for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score, increase from 0.73 ± 0.06 [SD] to 0.88 ± 0.03; <i>P</i> ≤ .01) and csPCa detection (PI-CAI score, increase from 0.63 ± 0.07 to 0.74 ± 0.06; <i>P</i> ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PI-CAI score, increase from 0.72 ± 0.06 to 0.74 ± 0.06; <i>P</i> ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 03; <i>P</i> > .05). Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance. <b>Keywords:</b> Federated Learning, Prostate Cancer, MRI, Cancer Detection, Deep Learning <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240485"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144745346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancing Early Detection of Chronic Obstructive Pulmonary Disease Using Generative AI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.250555
Quincy A Hathaway, Yashbir Singh
{"title":"Advancing Early Detection of Chronic Obstructive Pulmonary Disease Using Generative AI.","authors":"Quincy A Hathaway, Yashbir Singh","doi":"10.1148/ryai.250555","DOIUrl":"10.1148/ryai.250555","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 5","pages":"e250555"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144971989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prediction of Early Neoadjuvant Chemotherapy Response of Breast Cancer through Deep Learning-based Pharmacokinetic Quantification of DCE MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-09-01 DOI: 10.1148/ryai.240769
Chaowei Wu, Lixia Wang, Nan Wang, Stephen Shiao, Tai Dou, Yin-Chen Hsu, Anthony G Christodoulou, Yibin Xie, Debiao Li

Purpose To improve the generalizability of pathologic complete response prediction following neoadjuvant chemotherapy using deep learning-based retrospective pharmacokinetic quantification of early treatment dynamic contrast-enhanced MRI. Materials and Methods This multicenter retrospective study included breast MRI data from four publicly available datasets of patients with breast cancer acquired from May 2002 to November 2016. Pharmacokinetic quantification was performed using a previously developed deep learning model for clinical multiphasic dynamic contrast-enhanced MRI datasets. Radiomic analysis was performed on pharmacokinetic quantification maps and conventional enhancement maps. These data, together with clinicopathologic variables and shape-based radiomic analysis, were subsequently applied for pathologic complete response prediction using logistic regression. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC). Results A total of 1073 female patients with breast cancer were included. The proposed method showed improved consistency and generalizability compared with the reference method, achieving higher AUC values across external datasets (0.82 [95% CI: 0.72, 0.91], 0.75 [95% CI: 0.71, 0.79], and 0.77 [95% CI: 0.66, 0.86] for datasets A2, B, and C, respectively). For dataset A2 (from the same study as the training dataset), there was no significant difference in performance between the proposed method and reference method (P = .80). Notably, on the combined external datasets, the proposed method significantly outperformed the reference method (AUC, 0.75 [95% CI: 0.72, 0.79] vs AUC, 0.71 [95% CI: 0.68, 0.76]; P = .003). Conclusion This work offers an approach to improve the generalizability and predictive accuracy of pathologic complete response for breast cancer across diverse datasets, achieving higher and more consistent AUC scores than existing methods. Keywords: Tumor Response, Breast, Prognosis, Dynamic Contrast-enhanced MRI Supplemental material is available for this article. © RSNA, 2025. See also commentary by Schnitzler in this issue.
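Pharmacokinetic quantification of DCE MRI typically fits a compartment model to the contrast-enhancement curve. The abstract does not name the model used, but the standard Tofts model is a common choice, with tissue concentration Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep * (t - tau)) dtau. A discretized sketch under that assumption, with a toy arterial input function:

```python
import numpy as np

def tofts_ct(t: np.ndarray, cp: np.ndarray, ktrans: float, kep: float) -> np.ndarray:
    """Tissue concentration under the standard Tofts model.

    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau,
    discretized as a convolution with uniform time step dt.
    """
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

# Toy arterial input function: biexponential washout after bolus arrival at 0.5 min
t = np.arange(0, 5, 0.05)  # minutes
cp = np.where(
    t > 0.5,
    3.0 * np.exp(-2.0 * (t - 0.5)) + 0.5 * np.exp(-0.1 * (t - 0.5)),
    0.0,
)
ct = tofts_ct(t, cp, ktrans=0.25, kep=0.8)  # Ktrans and kep in 1/min
print(f"Peak tissue concentration ≈ {ct.max():.3f} (a.u.)")
```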

{"title":"Prediction of Early Neoadjuvant Chemotherapy Response of Breast Cancer through Deep Learning-based Pharmacokinetic Quantification of DCE MRI.","authors":"Chaowei Wu, Lixia Wang, Nan Wang, Stephen Shiao, Tai Dou, Yin-Chen Hsu, Anthony G Christodoulou, Yibin Xie, Debiao Li","doi":"10.1148/ryai.240769","DOIUrl":"10.1148/ryai.240769","url":null,"abstract":"<p><p>Purpose To improve the generalizability of pathologic complete response prediction following neoadjuvant chemotherapy using deep learning-based retrospective pharmacokinetic quantification of early treatment dynamic contrast-enhanced MRI. Materials and Methods This multicenter retrospective study included breast MRI data from four publicly available datasets of patients with breast cancer acquired from May 2002 to November 2016. Pharmacokinetic quantification was performed using a previously developed deep learning model for clinical multiphasic dynamic contrast-enhanced MRI datasets. Radiomic analysis was performed on pharmacokinetic quantification maps and conventional enhancement maps. These data, together with clinicopathologic variables and shape-based radiomic analysis, were subsequently applied for pathologic complete response prediction using logistic regression. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC). Results A total of 1073 female patients with breast cancer were included. The proposed method showed improved consistency and generalizability compared with the reference method, achieving higher AUC values across external datasets (0.82 [95% CI: 0.72, 0.91], 0.75 [95% CI: 0.71, 0.79], and 0.77 [95% CI: 0.66, 0.86] for datasets A2, B, and C, respectively). For dataset A2 (from the same study as the training dataset), there was no significant difference in performance between the proposed method and reference method (<i>P</i> = .80). Notably, on the combined external datasets, the proposed method significantly outperformed the reference method (AUC, 0.75 [95% CI: 0.72, 0.79] vs AUC, 0.71 [95% CI: 0.68, 0.76]; <i>P</i> = .003). Conclusion This work offers an approach to improve the generalizability and predictive accuracy of pathologic complete response for breast cancer across diverse datasets, achieving higher and more consistent AUC scores than existing methods. <b>Keywords:</b> Tumor Response, Breast, Prognosis, Dynamic Contrast-enhanced MRI <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Schnitzler in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240769"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464716/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Evolution of Radiology Image Annotation in the Era of Large Language Models.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-01 DOI: 10.1148/ryai.240631
Adam E Flanders, Xindi Wang, Carol C Wu, Felipe C Kitamura, George Shih, John Mongan, Yifan Peng

Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. Keywords: Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.
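As a concrete illustration of the prompt-based labeling the review describes, here is a minimal sketch using the OpenAI Python client; the model name, label schema, and report text are placeholders, and any production pipeline would need its labels validated against expert annotations.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REPORT = """CHEST CT: Scattered bilateral ground-glass opacities.
No pleural effusion. Heart size normal."""

PROMPT = (
    "You are labeling radiology reports for model training. "
    "Return JSON with boolean fields: ground_glass_opacity, "
    "pleural_effusion, cardiomegaly.\n\nReport:\n" + REPORT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any sufficiently capable model
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},  # request parseable JSON
    temperature=0,
)
labels = json.loads(response.choices[0].message.content)
print(labels)  # e.g. {"ground_glass_opacity": true, "pleural_effusion": false, ...}
```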

{"title":"The Evolution of Radiology Image Annotation in the Era of Large Language Models.","authors":"Adam E Flanders, Xindi Wang, Carol C Wu, Felipe C Kitamura, George Shih, John Mongan, Yifan Peng","doi":"10.1148/ryai.240631","DOIUrl":"10.1148/ryai.240631","url":null,"abstract":"<p><p>Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. <b>Keywords:</b> Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240631"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319696/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144048059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0