
Latest Publications in Radiology-Artificial Intelligence

Pseudo-Contrast-enhanced US via Enhanced Generative Adversarial Networks for Evaluating Tumor Ablation Efficacy.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-05-01 DOI: 10.1148/ryai.240370
Chen Chen, Jiabin Yu, Zhikang Xu, Changsong Xu, Zubang Zhou, Jindong Hao, Vicky Yang Wang, Jincao Yao, Lingyan Zhou, Chenke Xu, Mei Song, Qi Zhang, Xiaofang Liu, Lin Sui, Yuqi Yan, Tian Jiang, Yahan Zhou, Yingtianqi Wu, Binggang Xiao, Chenjie Xu, Hongmei Mi, Li Yang, Zhiwei Wu, Qingquan He, Jian Chen, Qi Liu, Dong Xu

Purpose To develop methods for creating pseudo-contrast-enhanced US (CEUS) by using an enhanced generative adversarial network and evaluate its ability to assess tumor ablation effectiveness. Materials and Methods This retrospective study included 1030 patients who underwent thyroid nodule ablation across seven centers from January 2020 to April 2023. A generative adversarial network-based model was developed for direct pseudo-CEUS generation from B-mode US and tested on thyroid, breast, and liver ablation datasets. The reliability of pseudo-CEUS was assessed using structural similarity index (SSIM), color histogram correlation, and mean absolute percentage error against real CEUS. Additionally, a subjective evaluation system was devised to validate its clinical value. The Wilcoxon signed rank test was employed to analyze differences in the data. Results The study included 1030 patients (mean age, 46.9 years ± 12.5 [SD]; 799 female and 231 male patients). For internal test set 1, the mean SSIM was 0.89 ± 0.05, while across external test sets 1-6, mean SSIM values ranged from 0.84 ± 0.08 to 0.88 ± 0.04. Subjective assessments affirmed the method's stability and near-realistic performance in evaluating ablation effectiveness. The thyroid ablation datasets had an average identification score of 0.49 (0.5 indicates indistinguishability), while the similarity average score for all datasets was 4.75 out of 5. Radiologists' assessments of residual blood supply were nearly consistent, with no differences in defining ablation zones between real and pseudo-CEUS. Conclusion The pseudo-CEUS method demonstrated high similarity to real CEUS in evaluating tumor ablation effectiveness. Keywords: Ablation Techniques, Ultrasound, Computer Applications-Virtual Imaging Supplemental material is available for this article. © RSNA, 2025.
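The three reliability metrics named in the abstract (SSIM, color histogram correlation, and mean absolute percentage error) can be sketched for a pair of grayscale frames as follows. This is an illustrative simplification, not the authors' implementation: the SSIM here is the global (single-window) form rather than the usual sliding-window version, and all helper names are ours.

```python
import numpy as np

def ssim_global(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global structural similarity between two 8-bit grayscale images
    (simplified: one window covering the whole image)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def hist_correlation(a, b, bins=64):
    """Pearson correlation between the intensity histograms of two images."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(b, bins=bins, range=(0, 255))
    return float(np.corrcoef(ha, hb)[0, 1])

def mape(real, pseudo, eps=1e-6):
    """Mean absolute percentage error of pseudo vs. real, over all pixels."""
    real = real.astype(np.float64)
    pseudo = pseudo.astype(np.float64)
    return float(np.mean(np.abs(real - pseudo) / (real + eps)) * 100)
```

For identical frames these return SSIM = 1, histogram correlation = 1, and MAPE = 0, which is the sanity check one would run before comparing pseudo-CEUS against real CEUS.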

Citations: 0
Better Data and Smarter AI: Automated Quality Control for Chest Radiographs.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-05-01 DOI: 10.1148/ryai.250135
Masahiro Yanagawa, Junya Sato
Citations: 0
Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240050
Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell

Purpose To construct and evaluate the performance of a machine learning model for bone segmentation using whole-body CT images. Materials and Methods In this retrospective study, whole-body CT scans (from June 2010 to January 2018) from 90 patients (mean age, 61 years ± 9 [SD]; 45 male, 45 female) with multiple myeloma were manually segmented using 60 labels and subsegmented into cortical and trabecular bone. Segmentations were verified by board-certified radiology and nuclear medicine physicians. The impacts of isotropy, resolution, multiple labeling schemes, and postprocessing were assessed. Model performance was assessed on internal and external test datasets (362 scans) and benchmarked against the TotalSegmentator segmentation model. Performance was assessed using Dice similarity coefficient (DSC), normalized surface distance (NSD), and manual inspection. Results Skellytour achieved consistently high segmentation performance on the internal dataset (DSC: 0.94, NSD: 0.99) and two external datasets (DSC: 0.94, 0.96; NSD: 0.999, 1.0), outperforming TotalSegmentator on the first two datasets. Subsegmentation performance was also high (DSC: 0.95, NSD: 0.995). Skellytour produced finely detailed segmentations, even in low-density bones. Conclusion The study demonstrates that Skellytour is an accurate and generalizable bone segmentation and subsegmentation model for CT data; it is available as a Python package via GitHub (https://github.com/cpwardell/Skellytour). Keywords: CT, Informatics, Skeletal-Axial, Demineralization-Bone, Comparative Studies, Segmentation, Supervised Learning, Convolutional Neural Network (CNN) Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Khosravi and Rouzrokh in this issue.
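The Dice similarity coefficient used to score each of the 60 labels is computed directly from binary masks; a minimal sketch (function names are ours, not from the Skellytour package):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def mean_dice(pred_labels, truth_labels, labels):
    """Average Dice over a list of integer label values, one mask per label."""
    return float(np.mean(
        [dice(pred_labels == lab, truth_labels == lab) for lab in labels]))
```

A multilabel segmentation is scored by binarizing each label in turn and averaging, which is how a single summary DSC such as 0.94 arises from a 60-label map.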

Citations: 0
Erratum for: CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.259001
Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang
Citations: 0
Bone Appetit: Skellytour Sets the Table for Robust Skeletal Segmentation.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250057
Bardia Khosravi, Pouria Rouzrokh
Citations: 0
NNFit: A Self-Supervised Deep Learning Method for Accelerated Quantification of High-Resolution Short-Echo-Time MR Spectroscopy Datasets.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.230579
Alexander S Giuffrida, Sulaiman Sheriff, Vicki Huang, Brent D Weinberg, Lee A D Cooper, Yuan Liu, Brian J Soher, Michael Treadway, Andrew A Maudsley, Hyunsuk Shim

Purpose To develop and evaluate the performance of NNFit, a self-supervised deep learning method for quantification of high-resolution short-echo-time (TE) echo-planar spectroscopic imaging (EPSI) datasets, with the goal of addressing the computational bottleneck of conventional spectral quantification methods in the clinical workflow. Materials and Methods This retrospective study included 89 short-TE whole-brain EPSI/generalized autocalibrating partial parallel acquisition scans from clinical trials for glioblastoma (trial 1, May 2014-October 2018) and major depressive disorder (trial 2, 2022-2023). The training dataset included 685 000 spectra from 20 participants (60 scans) in trial 1. The testing dataset included 115 000 spectra from five participants (13 scans) in trial 1 and 145 000 spectra from seven participants (16 scans) in trial 2. A comparative analysis was performed between NNFit and a widely used parametric-modeling spectral quantitation method (FITT). Metabolite maps generated by each method were compared using the structural similarity index measure (SSIM) and linear correlation coefficient (R2). Radiation treatment volumes for glioblastoma based on metabolite maps were compared using the Dice coefficient and a two-tailed t test. Results Mean SSIMs and R2 values for trial 1 test set data were 0.91 and 0.90 for choline, 0.93 and 0.93 for creatine, 0.93 and 0.93 for N-acetylaspartate, 0.80 and 0.72 for myo-inositol, and 0.59 and 0.47 for glutamate plus glutamine. Mean values for trial 2 test set data were 0.95 and 0.95, 0.98 and 0.97, 0.98 and 0.98, 0.92 and 0.92, and 0.79 and 0.81, respectively. The treatment volumes had a mean Dice coefficient of 0.92. The mean processing times were 90.1 seconds for NNFit and 52.9 minutes for FITT. Conclusion A deep learning approach to spectral quantitation offers performance similar to that of conventional quantification methods for EPSI data, but with faster processing at short TE. 
Keywords: MR Spectroscopy, Neural Networks, Brain/Brain Stem Supplemental material is available for this article. © RSNA, 2025.
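The per-metabolite linear correlation coefficient R² reported above is the squared Pearson correlation between paired metabolite maps; a minimal sketch with hypothetical helper names (not part of NNFit or FITT):

```python
import numpy as np

def r_squared(map_a, map_b):
    """Squared Pearson correlation between two metabolite maps, voxelwise."""
    x = np.ravel(map_a).astype(float)
    y = np.ravel(map_b).astype(float)
    r = np.corrcoef(x, y)[0, 1]
    return float(r * r)

def compare_metabolite_maps(nnfit_maps, fitt_maps):
    """Per-metabolite R^2 between two sets of maps keyed by metabolite name."""
    return {name: r_squared(nnfit_maps[name], fitt_maps[name])
            for name in nnfit_maps}
```

Because R² is invariant to affine rescaling, two quantitation methods that agree up to a calibration factor still score R² = 1; lower values, as seen for glutamate plus glutamine, indicate genuinely discordant spatial patterns.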

Citations: 0
Post-Training Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240353
Tobias Weber, Jakob Dexl, David Rügamer, Michael Ingrisch

Purpose To investigate whether the computational effort of three-dimensional CT-based multiorgan segmentation with TotalSegmentator can be reduced via Tucker decomposition-based network compression. Materials and Methods In this retrospective study, Tucker decomposition was applied to the convolutional kernels of the TotalSegmentator model, an nnU-Net model trained on a comprehensive CT dataset for automatic segmentation of 117 anatomic structures. The proposed approach reduced the floating-point operations and memory required during inference, offering an adjustable trade-off between computational efficiency and segmentation quality. This study used the publicly available TotalSegmentator dataset containing 1228 segmented CT scans and a test subset of 89 CT scans and used various downsampling factors to explore the relationship between model size, inference speed, and segmentation accuracy. Segmentation performance was evaluated using the Dice score. Results The application of Tucker decomposition to the TotalSegmentator model substantially reduced the model parameters and floating-point operations across various compression ratios, with limited loss in segmentation accuracy. Up to 88.17% of the model's parameters were removed, with no evidence of differences in performance compared with the original model for 113 of 117 classes after fine-tuning. Practical benefits varied across different graphics processing unit architectures, with more distinct speedups on less powerful hardware. Conclusion The study demonstrated that post hoc network compression via Tucker decomposition presents a viable strategy for reducing the computational demand of medical image segmentation models without substantially impacting model accuracy. Keywords: Deep Learning, Segmentation, Network Compression, Convolution, Tucker Decomposition Supplemental material is available for this article. © RSNA, 2025.
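The core idea — compressing the two channel modes of a convolution kernel with a truncated Tucker decomposition — can be sketched in plain NumPy via a truncated higher-order SVD. The ranks, function names, and HOSVD initialization here are illustrative assumptions, not the paper's exact pipeline (which operates on trained nnU-Net weights and maps the factors back onto smaller convolutions):

```python
import numpy as np

def tucker2_compress(kernel, r_out, r_in):
    """Rank-(r_out, r_in) Tucker-2 decomposition of a conv kernel of shape
    (C_out, C_in, *spatial) along the two channel modes, via truncated HOSVD."""
    c_out, c_in = kernel.shape[:2]
    # Mode-0 unfolding (C_out, C_in * prod(spatial)) -> leading left singular vectors.
    u0, _, _ = np.linalg.svd(kernel.reshape(c_out, -1), full_matrices=False)
    u0 = u0[:, :r_out]
    # Mode-1 unfolding (C_in, C_out * prod(spatial)).
    u1, _, _ = np.linalg.svd(np.moveaxis(kernel, 1, 0).reshape(c_in, -1),
                             full_matrices=False)
    u1 = u1[:, :r_in]
    # Core tensor = kernel x_0 u0^T x_1 u1^T, shape (r_out, r_in, *spatial).
    core = np.tensordot(u0.T, kernel, axes=(1, 0))
    core = np.moveaxis(np.tensordot(u1.T, np.moveaxis(core, 1, 0), axes=(1, 0)), 0, 1)
    return core, u0, u1

def tucker2_reconstruct(core, u0, u1):
    """Invert tucker2_compress: core x_0 u0 x_1 u1."""
    k = np.tensordot(u0, core, axes=(1, 0))
    return np.moveaxis(np.tensordot(u1, np.moveaxis(k, 1, 0), axes=(1, 0)), 0, 1)
```

At full rank the reconstruction is exact; shrinking `r_out` and `r_in` trades reconstruction error (and hence segmentation quality) for fewer parameters and floating-point operations, which is the adjustable trade-off the abstract describes.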

Citations: 0
Editor's Recognition Awards.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.250164
Charles E Kahn
Citations: 0
Deep Learning-based Brain Age Prediction Using MRI to Identify Fetuses with Cerebral Ventriculomegaly.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240115
Hyuk Jin Yun, Han-Jui Lee, Sungmin You, Joo Young Lee, Jerjes Aguirre-Chavez, Lana Vasung, Hyun Ju Lee, Tomo Tarui, Henry A Feldman, P Ellen Grant, Kiho Im

Fetal ventriculomegaly (VM) and its severity and associated central nervous system (CNS) abnormalities are important indicators of high risk for impaired neurodevelopmental outcomes. Recently, a novel fetal brain age prediction method using a two-dimensional (2D) single-channel convolutional neural network (CNN) with multiplanar MRI sections showed the potential to detect fetuses with VM. This study examines the diagnostic performance of a deep learning-based fetal brain age prediction model to distinguish fetuses with VM (n = 317) from typically developing fetuses (n = 183), the severity of VM, and the presence of associated CNS abnormalities. The predicted age difference (PAD) was measured by subtracting the predicted brain age from the gestational age in fetuses with VM and typical development. PAD and absolute value of PAD (AAD) were compared between VM and typically developing fetuses. In addition, PAD and AAD were compared between subgroups by VM severity and the presence of associated CNS abnormalities in VM. Fetuses with VM showed significantly larger AAD than typically developing fetuses (P < .001), and fetuses with severe VM showed larger AAD than those with moderate VM (P = .004). Fetuses with VM and associated CNS abnormalities had significantly lower PAD than fetuses with isolated VM (P = .005). These findings suggest that fetal brain age prediction using the 2D single-channel CNN method has the clinical ability to assist in identifying not only the enlargement of the ventricles but also the presence of associated CNS abnormalities. Keywords: MR-Fetal (Fetal MRI), Brain/Brain Stem, Fetus, Supervised Learning, Machine Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms Supplemental material is available for this article. ©RSNA, 2025.
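PAD and AAD as defined above reduce to a per-fetus subtraction of gestational age from predicted brain age; a minimal array-based sketch (names ours):

```python
import numpy as np

def pad_aad(predicted_age, gestational_age):
    """Predicted age difference (PAD = predicted brain age - gestational age,
    in weeks) and its absolute value (AAD), one entry per fetus."""
    pad = (np.asarray(predicted_age, dtype=float)
           - np.asarray(gestational_age, dtype=float))
    return pad, np.abs(pad)
```

AAD captures how far the prediction deviates in either direction (larger in VM overall), while signed PAD preserves direction, which is what separates VM with associated CNS abnormalities (lower PAD) from isolated VM.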

Citations: 0
CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.
IF 13.2 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-03-01 DOI: 10.1148/ryai.240443
Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang
{"title":"CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.","authors":"Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang","doi":"10.1148/ryai.240443","DOIUrl":"10.1148/ryai.240443","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240443"},"PeriodicalIF":13.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950877/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143060372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
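The CMRxRecon2024 entry concerns raw k-space data for accelerated cardiac MRI. In such benchmarks, acceleration is typically simulated by retrospectively undersampling the fully sampled k-space with a mask and reconstructing from the kept lines. A generic sketch of Cartesian undersampling with a fully sampled center region (this is an illustration of the standard operation, not code from the CMRxRecon2024 dataset, and the function name is hypothetical):

```python
# Sketch of retrospective Cartesian k-space undersampling, the basic
# operation behind accelerated-MRI reconstruction benchmarks.
# Generic illustration only; not taken from the CMRxRecon2024 pipeline.
import numpy as np

def undersample_kspace(kspace, accel=4, num_center_lines=8):
    """Keep every `accel`-th phase-encoding line plus a fully sampled center."""
    ny = kspace.shape[0]
    mask = np.zeros(ny, dtype=bool)
    mask[::accel] = True  # regular undersampling along phase encoding
    c = ny // 2           # fully sampled low-frequency center band
    mask[c - num_center_lines // 2 : c + num_center_lines // 2] = True
    return kspace * mask[:, None], mask

# Hypothetical example: a synthetic 32x32 "image" and its k-space.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(img))
under, mask = undersample_kspace(kspace, accel=4)
# Naive zero-filled reconstruction from the undersampled data.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(under)))
print(int(mask.sum()), "of", mask.size, "phase-encoding lines kept")
```

The zero-filled reconstruction exhibits the aliasing artifacts that learned reconstruction methods trained on datasets like this one aim to remove.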