
Latest publications in Zeitschrift für Medizinische Physik

Predicting disease-related MRI patterns of multiple sclerosis through GAN-based image editing
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2023.12.001
Daniel Güllmar , Wei-Chan Hsu , Jürgen R. Reichenbach

Introduction

Multiple sclerosis (MS) is a complex neurodegenerative disorder that affects the brain and spinal cord. In this study, we applied a deep learning-based approach using the StyleGAN model to explore patterns related to MS and predict disease progression in magnetic resonance images (MRI).

Methods

We trained the StyleGAN model unsupervised using T1-weighted GRE MR images and diffusion-based ADC maps of MS patients and healthy controls. We then used the trained model to resample MR images from real input data and modified them by manipulations in the latent space to simulate MS progression. We analyzed the resulting simulation-related patterns mimicking disease progression by comparing the intensity profiles of the original and manipulated images and determined the brain parenchymal fraction (BPF).
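The two core operations described above, shifting a latent code along a semantic direction and summarizing atrophy via the brain parenchymal fraction, can be sketched in a few lines. The latent code, the "progression" direction, and the voxel counts below are hypothetical stand-ins for illustration, not values or learned directions from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent code (e.g. a 512-dim StyleGAN W-space vector) and a
# learned "disease progression" direction; both are random stand-ins here.
w = rng.standard_normal(512)
direction = rng.standard_normal(512)
direction /= np.linalg.norm(direction)

def edit_latent(w, direction, alpha):
    """Shift a latent code along a semantic direction by strength alpha."""
    return w + alpha * direction

# Simulating progression = walking further along the same direction.
w_mild = edit_latent(w, direction, 1.0)
w_severe = edit_latent(w, direction, 3.0)

def brain_parenchymal_fraction(parenchyma_voxels, intracranial_voxels):
    """BPF = brain parenchymal volume / total intracranial volume."""
    return parenchyma_voxels / intracranial_voxels

# Atrophy shows up as a falling BPF (toy voxel counts).
bpf_baseline = brain_parenchymal_fraction(1200.0, 1500.0)
bpf_simulated = brain_parenchymal_fraction(1100.0, 1500.0)
```

Comparing the BPF of the original and manipulated images, as done in the study, then reduces to comparing two such ratios.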

Results

Our results show that MS progression can be simulated by manipulating MR images in the latent space, as evidenced by brain volume loss on both T1-weighted and ADC maps and increasing lesion extent on ADC maps.

Conclusion

Overall, this study demonstrates the potential of the StyleGAN model in medical imaging to study image markers and to shed more light on the relationship between brain atrophy and MS progression through corresponding manipulations in the latent space.

Zeitschrift für Medizinische Physik, Volume 34, Issue 2, May 2024, Pages 318–329. Open Access.
Cited: 0
A generalization performance study on the boosting radiotherapy dose calculation engine based on super-resolution
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2022.10.006
Yewei Wang , Yaoying Liu , Yanlin Bai , Qichao Zhou , Shouping Xu , Xueying Pang

Purpose

During the radiation treatment planning process, one of the most time-consuming procedures is the final high-resolution dose calculation, which hinders the wide application of emerging online adaptive radiotherapy techniques (OLART). There is an urgent need for highly accurate and efficient dose calculation methods. This study aims to develop a dose super-resolution-based deep learning model for fast and accurate dose prediction in clinical practice.

Method

A Multi-stage Dose Super-Resolution Network (MDSR Net) architecture with a sparse mask module and a multi-stage progressive dose distribution restoration method were developed to predict high-resolution dose distributions from low-resolution data. A total of 340 VMAT plans from different disease sites were used: 240 randomly selected nasopharyngeal, lung, and cervix cases for model training, the remaining 60 cases from the same sites for benchmark testing, and an additional 40 cases from unseen sites (breast and rectum) for evaluating model generalizability. The clinically calculated dose with a grid size of 2 mm served as the baseline dose distribution. The input comprised the dose distribution with 4 mm grid size and CT images. Model performance was compared with HD U-Net and cubic interpolation methods using dose-volume histogram (DVH) metrics and global gamma analysis with 1%/1 mm criteria and a 10% low-dose threshold. The correlation between the prediction error and the dose, dose gradient, and CT values was also evaluated.
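For reference, the conventional cubic-interpolation baseline against which the network is compared amounts to upsampling the 4 mm dose grid to a 2 mm grid. A minimal sketch with SciPy, using a toy random dose array and an assumed isotropic upsampling factor of 2 (not the study's actual data or pipeline):

```python
import numpy as np
from scipy.ndimage import zoom

# Toy low-resolution dose grid on 4 mm spacing; values are arbitrary.
rng = np.random.default_rng(1)
dose_4mm = rng.random((8, 8, 8))

# Cubic (order-3 spline) interpolation onto a 2 mm grid -- the
# conventional baseline the MDSR network is compared against.
dose_2mm = zoom(dose_4mm, zoom=2, order=3)
```

A learned super-resolution model replaces this interpolation step while keeping the same input/output grid sizes.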

Results

The prediction errors of MDSR were 0.06–0.84% of the Dmean indices, and the gamma passing rate was 83.1–91.0% on the benchmark testing dataset, and 0.02–1.03% and 71.3–90.3%, respectively, on the generalization dataset. The model performance was significantly higher than that of the HD U-Net and interpolation methods (p < 0.05). The mean errors of the MDSR model decreased monotonically (by 0.03–0.004%) with dose and increased (by 0.01–0.73%) with dose gradient. There was no correlation between prediction errors and CT values.

Conclusion

The proposed MDSR model achieved good agreement with the baseline high-resolution dose distribution, with small prediction errors for DVH indices and high gamma passing rate for both seen and unseen sites, indicating a robust and generalizable dose prediction model. The model can provide fast and accurate high-resolution dose distribution for clinical dose calculation, particularly for the routine practice of OLART.

Zeitschrift für Medizinische Physik, Volume 34, Issue 2, May 2024, Pages 208–217. Open Access.
Cited: 0
Automated prognosis of renal function decline in ADPKD patients using deep learning
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2023.08.001
Anish Raj , Fabian Tollens , Anna Caroli , Dominik Nörenberg , Frank G. Zöllner

An accurate prognosis of renal function decline in Autosomal Dominant Polycystic Kidney Disease (ADPKD) is crucial for early intervention. The biomarkers currently used are height-adjusted total kidney volume (HtTKV), estimated glomerular filtration rate (eGFR), and patient age. However, manually measuring kidney volume is time-consuming and subject to observer variability. Additionally, incorporating automatically generated features from kidney MRI images alongside these conventional biomarkers can further improve prognosis. To address these issues, we developed two deep-learning algorithms. First, an automated kidney volume segmentation model accurately calculates HtTKV. Second, we use the segmented kidney volumes, predicted HtTKV, age, and baseline eGFR to predict chronic kidney disease (CKD) stages >=3A, >=3B, and a 30% decline in eGFR 8 years after the baseline visit. Our approach combines a convolutional neural network (CNN) and a multi-layer perceptron (MLP). Our study included 135 subjects, and the AUC scores obtained were 0.96, 0.96, and 0.95 for CKD stages >=3A, >=3B, and a 30% decline in eGFR, respectively. Furthermore, our algorithm achieved a Pearson correlation coefficient of 0.81 between predicted and measured eGFR decline. We extended our approach to predict distinct CKD stages after eight years with an AUC of 0.97. The proposed approach has the potential to enhance monitoring and facilitate prognosis in ADPKD patients, even in the early disease stages.
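Two of the quantities above are simple to compute directly: HtTKV is total kidney volume normalized by body height, and the agreement between predicted and measured eGFR decline is a Pearson correlation. A sketch with made-up numbers (not study data):

```python
import numpy as np

def height_adjusted_tkv(total_kidney_volume_ml, height_m):
    """HtTKV: total kidney volume normalized by body height (ml/m)."""
    return total_kidney_volume_ml / height_m

# Hypothetical patient: 1500 ml total kidney volume, 1.75 m tall.
htTKV = height_adjusted_tkv(1500.0, 1.75)

# Agreement between predicted and measured eGFR decline, summarized
# by the Pearson correlation coefficient (toy values).
measured = np.array([-5.0, -12.0, -30.0, -8.0, -22.0])
predicted = np.array([-6.0, -10.0, -28.0, -9.0, -25.0])
r = np.corrcoef(measured, predicted)[0, 1]
```

In the study such a correlation (r = 0.81) is reported between the model's predicted and the measured eGFR decline.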

Zeitschrift für Medizinische Physik, Volume 34, Issue 2, May 2024, Pages 330–342. Open Access.
Cited: 0
Machine learning-based approach reveals essential features for simplified TSPO PET quantification in ischemic stroke patients
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2022.11.008
Artem Zatcepin , Anna Kopczak , Adrien Holzgreve , Sandra Hein , Andreas Schindler , Marco Duering , Lena Kaiser , Simon Lindner , Martin Schidlowski , Peter Bartenstein , Nathalie Albert , Matthias Brendel , Sibylle I. Ziegler

Introduction

Neuroinflammation evaluation after acute ischemic stroke is a promising option for selecting an appropriate post-stroke treatment strategy. To assess neuroinflammation in vivo, translocator protein PET (TSPO PET) can be used. However, the gold standard TSPO PET quantification method includes a 90 min scan and continuous arterial blood sampling, which is challenging to perform on a routine basis. In this work, we determine what information is required for a simplified quantification approach using a machine learning algorithm.

Materials and Methods

We analyzed data from 18 patients with ischemic stroke who received 0–90 min [18F]GE-180 PET as well as T1-weighted (T1w), FLAIR, and arterial spin labeling (ASL) MRI scans. During the PET scans, five manual venous blood samples were drawn at 5, 15, 30, 60, and 85 min post injection (p.i.), and the plasma activity concentration was measured. The total distribution volume (VT) was calculated using a Logan plot with the full dynamic PET and an image-derived input function (IDIF) from the carotid arteries. The IDIF was scaled by a calibration factor derived from all the measured plasma activity concentrations. The calculated VT values were used to train a random forest regressor. As input features for the model, we used three late PET frames (60–70, 70–80, and 80–90 min p.i.), the ASL image reflecting perfusion, the voxel coordinates, the lesion mask, and the five plasma activity concentrations. The algorithm was validated with the leave-one-out approach. To estimate the impact of the individual features on the algorithm’s performance, we used Shapley Additive Explanations (SHAP). Having determined that the three late PET frames and the plasma activity concentrations were the most important features, we tested a simplified quantification approach consisting of dividing a late PET frame by a plasma activity concentration. All combinations of frames and samples were compared by means of the concordance correlation coefficient and Bland-Altman plots.
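The Logan graphical analysis used here to obtain VT amounts to a linear fit on transformed time integrals: for times beyond some t*, plotting ∫C_t/C_t(t) against ∫C_p/C_t(t) yields a line whose slope is VT. A sketch on idealized toy curves, where the tissue curve is simply set to VT times the input function (not patient data, and without noise or metabolite correction):

```python
import numpy as np

# Toy time-activity curves (arbitrary units); in practice C_t comes from
# dynamic PET and C_p from plasma samples or an image-derived input function.
t = np.linspace(1, 90, 30)              # minutes
C_p = 5.0 * np.exp(-0.05 * t) + 0.5    # plasma input function
V_T_true = 2.0
C_t = V_T_true * C_p                    # idealized equilibrium tissue curve

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, same length as y (starts at 0)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

# Logan plot coordinates; the plot becomes linear for t > t* with slope V_T.
x = cumtrapz(C_p, t) / C_t
y = cumtrapz(C_t, t) / C_t
late = t > 30                           # t* chosen by inspection here
slope, intercept = np.polyfit(x[late], y[late], 1)
```

The simplified approach tested in the study drops this machinery entirely and approximates the quantification with a single late-frame/plasma-sample ratio.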

Results

When using all the input features, the algorithm predicted VT values with high accuracy (87.8 ± 8.3%) for both lesion and non-lesion voxels. The SHAP values demonstrated high impact of the late PET frames (60–70, 70–80, and 80–90 min p.i.) and plasma activity concentrations on the VT prediction, while the influence of the ASL-derived perfusion, voxel coordinates, and the lesion mask was low. Among all the combinations of the late PET frames and plasma activity concentrations, the 70–80 min p.i. frame divided by the 30 min p.i. plasma sample produced the closest VT estimate in the ischemic lesion.

Conclusion

Reliable TSPO PET quantification is achievable by using a single late PET frame divided by a late blood sample activity concentration.

Zeitschrift für Medizinische Physik, Volume 34, Issue 2, May 2024, Pages 218–230. Open Access.
Cited: 0
Deep learning-based affine medical image registration for multimodal minimal-invasive image-guided interventions – A comparative study on generalizability
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2023.05.003
Anika Strittmatter, Lothar R. Schad, Frank G. Zöllner

Multimodal image registration is applied in medical image analysis because it allows the integration of complementary data from multiple imaging modalities. In recent years, various neural network-based approaches for medical image registration have been published, but because they use different datasets, a fair comparison is not possible. In this study, 20 different neural networks for affine registration of medical images were implemented. The networks’ performance and their generalizability to new datasets were evaluated using two multimodal datasets – a synthetic and a real patient dataset – of three-dimensional CT and MR images of the liver. The networks were first trained semi-supervised on the synthetic dataset and then evaluated on both the synthetic dataset and the unseen patient dataset. Afterwards, the networks were finetuned on the patient dataset and subsequently evaluated on it. The networks were compared using our own previously developed CNN as a benchmark and a conventional affine registration with SimpleElastix as a baseline. Six networks improved the pre-registration Dice coefficient of the synthetic dataset significantly (p-value < 0.05), and nine networks improved the pre-registration Dice coefficient of the patient dataset significantly and are therefore able to generalize to the new datasets used in our experiments. Many different machine learning-based methods have been proposed for affine multimodal medical image registration, but few generalize to new data and applications. Further research is therefore necessary to develop medical image registration techniques that can be applied more widely.
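The Dice coefficient used above to score registration quality is straightforward to compute from two binary masks. A minimal sketch on toy masks (the masks and the one-voxel shift are illustrative, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

fixed = np.zeros((10, 10), dtype=bool)
fixed[2:8, 2:8] = True           # 36 voxels
moved = np.zeros((10, 10), dtype=bool)
moved[3:9, 3:9] = True           # same square, shifted by one voxel
score = dice(fixed, moved)
```

A registration improves the pre-registration Dice coefficient when it moves structures like `moved` back into alignment with `fixed`.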

Zeitschrift für Medizinische Physik, Volume 34, Issue 2, May 2024, Pages 291–317. Open Access.
Cited: 0
Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images
IF 2.0 | Medicine, CAS Zone 4 | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | DOI: 10.1016/j.zemedi.2022.11.010
Omer Turk , Davut Ozhan , Emrullah Acar , Tahir Cetin Akinci , Musa Yilmaz

As with any life-threatening disease, early diagnosis of brain tumors plays a life-saving role. A brain tumor forms when brain cells transform from their normal structure into abnormal cells, which then begin to accumulate as masses in regions of the brain. Many different techniques are employed to detect these tumor masses, the most common of which is Magnetic Resonance Imaging (MRI). This study aims to automatically detect brain tumors with the help of ensemble deep learning architectures (ResNet50, VGG19, InceptionV3 and MobileNet) and Class Activation Map (CAM) indicators applied to MRI images. The proposed system was implemented in three stages. In the first stage, it was determined whether there was a tumor in the MR images (binary approach). In the second stage, different tumor types (normal, glioma, meningioma, pituitary tumor) were detected from the MR images (multi-class approach). In the last stage, CAMs for each tumor group were created as an alternative tool to facilitate the work of specialists in tumor detection. The results showed that the overall accuracy of the binary approach was 100% for the ResNet50, InceptionV3 and MobileNet architectures, and 99.71% for the VGG19 architecture. Moreover, accuracies of 96.45% with ResNet50, 93.40% with VGG19, 85.03% with InceptionV3 and 89.34% with MobileNet were obtained in the multi-class approach.
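A class activation map in the classical sense is a classifier-weighted sum of the network's last convolutional feature maps, rescaled for display as a heatmap over the input image. A minimal sketch with random stand-in feature maps and class weights (no trained network is assumed here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical last-conv feature maps (C x H x W) and the classifier
# weights for the predicted class -- random stand-ins for a trained model.
feature_maps = rng.random((4, 7, 7))
class_weights = rng.random(4)

def class_activation_map(features, weights):
    """CAM: weighted sum of feature maps over channels, rescaled to [0, 1]."""
    cam = np.tensordot(weights, features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

cam = class_activation_map(feature_maps, class_weights)
```

Upsampled to the MR image resolution, such a map highlights the regions that drove the tumor-class prediction, which is what makes CAMs useful as an indicator for specialists.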

Zeitschrift für Medizinische Physik 34(2) (2024), pp. 278–290.
Feature-guided deep learning reduces signal loss and increases lesion CNR in diffusion-weighted imaging of the liver
IF 2 · Medicine Q4 · Q2 Radiology, Nuclear Medicine & Medical Imaging · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2023.07.005
Tobit Führes , Marc Saake , Jennifer Lorenz , Hannes Seuss , Sebastian Bickelhaupt , Michael Uder , Frederik Bernd Laun

Purpose

This research aims to develop a feature-guided deep learning approach and compare it with an optimized conventional post-processing algorithm in order to enhance the image quality of diffusion-weighted liver images and, in particular, to reduce the pulsation-induced signal loss occurring predominantly in the left liver lobe.

Methods

Data from 40 patients with liver lesions were used. For the conventional approach, the best-suited out of five examined algorithms was chosen. For the deep learning approach, a U-Net was trained. Instead of learning “gold-standard” target images, the network was trained to optimize four image features (lesion CNR, vessel darkness, data consistency, and pulsation artifact reduction), which could be assessed quantitatively using manually drawn ROIs. A quality score was calculated from these four features. As an additional quality assessment, three radiologists rated different features of the resulting images.

Results

The conventional approach could substantially increase the lesion CNR and reduce the pulsation-induced signal loss. However, the vessel darkness was reduced. The deep learning approach increased the lesion CNR and reduced the signal loss to a slightly lower extent, but it could additionally increase the vessel darkness. According to the image quality score, the quality of the deep-learning images was higher than that of the images obtained using the conventional approach. The radiologist ratings were mostly consistent with the quantitative scores, but the overall quality ratings differed among the readers.

Conclusion

Unlike the conventional algorithm, the deep-learning algorithm increased the vessel darkness. Therefore, it may be a viable alternative to conventional algorithms.
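The lesion CNR used as one of the four optimization features can be estimated from manually drawn ROIs. A minimal sketch under one common CNR definition (the abstract does not state the exact formula, so normalizing the lesion-to-background contrast by the background standard deviation is an assumption):

```python
import numpy as np

def lesion_cnr(image, lesion_mask, background_mask):
    """Contrast-to-noise ratio of a lesion ROI against a background ROI.

    One common definition: |mean(lesion) - mean(background)| / std(background).
    Both masks are boolean arrays with the same shape as the image.
    """
    lesion = image[lesion_mask]
    background = image[background_mask]
    return abs(lesion.mean() - background.mean()) / background.std()
```

A quantitative feature like this can be evaluated on every training output, which is what allows the network to be trained against feature targets instead of "gold-standard" images.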

Zeitschrift für Medizinische Physik 34(2) (2024), pp. 258–269.
Artificial intelligence in medical physics
IF 2 · Medicine Q4 · Q2 Radiology, Nuclear Medicine & Medical Imaging · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2024.03.002
Steffen Bollmann, Thomas Küstner, Qian Tao, Frank G Zöllner
Zeitschrift für Medizinische Physik 34(2) (2024), pp. 177–178.
Automatic AI-based contouring of prostate MRI for online adaptive radiotherapy
IF 2 · Medicine Q4 · Q2 Radiology, Nuclear Medicine & Medical Imaging · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2023.05.001
Marcel Nachbar , Monica lo Russo , Cihan Gani , Simon Boeke , Daniel Wegener , Frank Paulsen , Daniel Zips , Thais Roque , Nikos Paragios , Daniela Thorwarth
Background and purpose

MR-guided radiotherapy (MRgRT) online plan adaptation accounts for tumor volume changes and interfraction motion, and thus allows daily sparing of relevant organs at risk. Due to the high interfraction variability of bladder and rectum, patients with tumors in the pelvic region may strongly benefit from adaptive MRgRT. Currently, fast automatic annotation of anatomical structures is not available within the online MRgRT workflow. Therefore, the aim of this study was to train and validate a fast, accurate deep learning model for automatic MRI segmentation at the MR-Linac for future implementation in a clinical MRgRT workflow.

Materials and methods

For a total of 47 patients, T2w MRI data were acquired on a 1.5 T MR-Linac (Unity, Elekta) on five different days. Prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, body and bony structures were manually annotated. These training data, consisting of 232 data sets in total, were used to generate a deep learning based autocontouring model, which was validated on 20 unseen T2w-MRIs. For quantitative evaluation, the validation set was contoured by a radiation oncologist as gold standard contours (GSC) and compared in MATLAB to the automatic contours (AIC). For the evaluation, Dice similarity coefficients (DSC), 95% Hausdorff distances (95% HD), added path length (APL) and surface DSC (sDSC) were calculated in a caudal-cranial window of ± 4 cm with respect to the prostate ends. For qualitative evaluation, five radiation oncologists scored the AIC on the possible usage within an online adaptive workflow as follows: (1) no modifications needed, (2) minor adjustments needed, (3) major adjustments/multiple minor adjustments needed, (4) not usable.

Results

The quantitative evaluation revealed a maximum median 95% HD of 6.9 mm for the rectum and a minimum median 95% HD of 2.7 mm for the bladder. Maximal and minimal median DSC were detected for bladder with 0.97 and for penile bulb with 0.73, respectively. Using a tolerance level of 3 mm, the highest and lowest sDSC were determined for rectum (0.94) and anal canal (0.68), respectively.

Qualitative evaluation resulted in a mean score of 1.2 for AICs over all organs and patients across all expert ratings. For the different autocontoured structures, the highest mean score of 1.0 was observed for anal canal, sacrum, left and right femur, and left pelvis, whereas for prostate the lowest mean score of 2.0 was detected. In total, 80% of the contours were rated to be clinically acceptable, 16% to require minor and 4% major adjustments for online adaptive MRgRT.

Conclusion

In this study, an AI-based autocontouring model was successfully trained for online adaptive MR-guided radiotherapy on the 1.5 T MR-Linac system. The developed model can automatically generate contours accepted by physicians (80%) or requiring only minor corrections (16%) for primary prostate irradiation with the clinically used sequence.
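The Dice similarity coefficient used in the quantitative evaluation has a simple closed form for binary masks. A minimal sketch (the return value of 1.0 for two empty masks is a common convention, not taken from the paper):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom else 1.0
```

The 95% HD and sDSC metrics additionally need surface distances (typically computed via distance transforms on the mask boundaries) and are therefore more involved than this volume-overlap measure.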
Zeitschrift für Medizinische Physik 34(2) (2024), pp. 197–207.
Towards quality management of artificial intelligence systems for medical applications
IF 2 · Medicine Q4 · Q2 Radiology, Nuclear Medicine & Medical Imaging · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2024.02.001
Lorenzo Mercolli, Axel Rominger, Kuangyu Shi

The use of artificial intelligence systems in clinical routine is still hampered by the necessity of a medical device certification and/or by the difficulty of implementing these systems in a clinic’s quality management system. In this context, the key questions for a user are how to ensure robust model predictions and how to appraise the quality of a model’s results on a regular basis.

In this paper we discuss some conceptual foundations for a clinical implementation of a machine learning system and argue that both vendors and users should take on certain responsibilities, as is already common practice for high-risk medical equipment.

We propose the methodology from AAPM Task Group 100 report No. 283 as a conceptual framework for developing a risk-driven quality management program for a clinical process that encompasses a machine learning system. This is illustrated with an example of a clinical workflow. Our analysis shows how the risk evaluation in this framework can accommodate artificial intelligence based systems independently of their robustness evaluation or the user’s in-house expertise. In particular, we highlight how the degree of interpretability of a machine learning system can be systematically accounted for within the risk evaluation and in the development of a quality management system.
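The TG-100 risk evaluation ranks failure modes with an FMEA-style risk priority number, the product of occurrence, severity and (lack of) detectability scores, each on a 1-10 scale. A minimal sketch (the function name is illustrative; how a clinic maps a machine learning failure mode onto these scales is a local judgment call, not specified here):

```python
def risk_priority_number(occurrence, severity, detectability):
    """FMEA-style risk priority number as used in TG-100 risk analysis.

    Each score is on a 1-10 scale. Note that a higher detectability score
    means the failure is HARDER to detect, so a larger RPN always means
    higher risk and higher priority for quality management measures.
    """
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be on a 1-10 scale")
    return occurrence * severity * detectability
```

For an opaque model, poor interpretability would plausibly be reflected in a higher detectability score (failures are harder to notice), which is one concrete way the framework can account for the degree of interpretability.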

Zeitschrift für Medizinische Physik 34(2) (2024), pp. 343–352.