
Latest publications in Zeitschrift für Medizinische Physik

Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2022.11.010
Omer Turk , Davut Ozhan , Emrullah Acar , Tahir Cetin Akinci , Musa Yilmaz

Today, as with any life-threatening disease, early diagnosis of brain tumors plays a life-saving role. A brain tumor forms when brain cells transform from their normal structure into abnormal cell structures; these abnormal cells then aggregate into masses within brain regions. Many different techniques are employed to detect such tumor masses, the most common being Magnetic Resonance Imaging (MRI). This study aims to automatically detect brain tumors from MRI images with the help of ensemble deep learning architectures (ResNet50, VGG19, InceptionV3 and MobileNet) and Class Activation Map (CAM) indicators. The proposed system was implemented in three stages. In the first stage, it was determined whether a tumor was present in the MR images (binary approach). In the second stage, different tumor types (normal, glioma, meningioma, pituitary tumor) were detected from the MR images (multi-class approach). In the last stage, CAMs of each tumor group were created as an alternative tool to facilitate the work of specialists in tumor detection. The results showed that the overall accuracy of the binary approach was 100% for the ResNet50, InceptionV3 and MobileNet architectures and 99.71% for VGG19. In the multi-class approach, accuracies of 96.45% (ResNet50), 93.40% (VGG19), 85.03% (InceptionV3) and 89.34% (MobileNet) were obtained.
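The CAM indicator used in the third stage can be sketched with plain NumPy. The sketch below assumes the original CAM formulation (a weighted sum of the last convolutional layer's feature maps, using the dense-layer weights of the target class after global average pooling); the shapes and weights are illustrative placeholders, not values from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weighted sum of the last conv layer's feature maps, weighted by
    the target class's dense-layer weights (Zhou et al. style formulation).
    feature_maps: (H, W, C) activations; class_weights: (C,) weights."""
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)            # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()             # normalize to [0, 1] for overlay
    return cam

# Toy example: a 4x4 spatial grid with 3 feature channels
rng = np.random.default_rng(0)
fmap = rng.random((4, 4, 3))
weights = np.array([0.5, -0.2, 1.0])
cam = class_activation_map(fmap, weights)
print(cam.shape)  # (4, 4)
```

In practice the low-resolution map is upsampled to the MR image size and overlaid as a heatmap for the reading specialist.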

Citations: 0
Feature-guided deep learning reduces signal loss and increases lesion CNR in diffusion-weighted imaging of the liver
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2023.07.005
Tobit Führes , Marc Saake , Jennifer Lorenz , Hannes Seuss , Sebastian Bickelhaupt , Michael Uder , Frederik Bernd Laun

Purpose

This research aims to develop a feature-guided deep learning approach and compare it with an optimized conventional post-processing algorithm in order to enhance the image quality of diffusion-weighted liver images and, in particular, to reduce the pulsation-induced signal loss occurring predominantly in the left liver lobe.

Methods

Data from 40 patients with liver lesions were used. For the conventional approach, the best-suited out of five examined algorithms was chosen. For the deep learning approach, a U-Net was trained. Instead of learning “gold-standard” target images, the network was trained to optimize four image features (lesion CNR, vessel darkness, data consistency, and pulsation artifact reduction), which could be assessed quantitatively using manually drawn ROIs. A quality score was calculated from these four features. As an additional quality assessment, three radiologists rated different features of the resulting images.
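As an illustration of the kind of quantitative, ROI-based image feature the network was trained to optimize, a common lesion CNR definition can be computed from two masks. The paper does not spell out its exact feature formulas, so this NumPy sketch is an assumption using one standard definition.

```python
import numpy as np

def lesion_cnr(image, lesion_mask, background_mask):
    """One common contrast-to-noise ratio definition:
    |mean(lesion) - mean(background)| / std(background)."""
    lesion = image[lesion_mask]
    background = image[background_mask]
    return abs(lesion.mean() - background.mean()) / background.std()

# Toy image: a bright 2x2 "lesion" on a noisy liver-like background
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, size=(8, 8))
img[2:4, 2:4] += 50.0
lesion = np.zeros((8, 8), dtype=bool)
lesion[2:4, 2:4] = True
cnr = lesion_cnr(img, lesion, ~lesion)
print(round(cnr, 1))
```

A feature-guided loss of this kind lets the network be trained without "gold-standard" target images, as described above.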

Results

The conventional approach could substantially increase the lesion CNR and reduce the pulsation-induced signal loss. However, the vessel darkness was reduced. The deep learning approach increased the lesion CNR and reduced the signal loss to a slightly lower extent, but it could additionally increase the vessel darkness. According to the image quality score, the quality of the deep-learning images was higher than that of the images obtained using the conventional approach. The radiologist ratings were mostly consistent with the quantitative scores, but the overall quality ratings differed among the readers.

Conclusion

Unlike the conventional algorithm, the deep-learning algorithm increased the vessel darkness. Therefore, it may be a viable alternative to conventional algorithms.

Citations: 0
Artificial intelligence in medical physics
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2024.03.002
Steffen Bollmann, Thomas Küstner, Qian Tao, Frank G Zöllner
Citations: 0
Automatic AI-based contouring of prostate MRI for online adaptive radiotherapy
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2023.05.001
Marcel Nachbar , Monica lo Russo , Cihan Gani , Simon Boeke , Daniel Wegener , Frank Paulsen , Daniel Zips , Thais Roque , Nikos Paragios , Daniela Thorwarth

Background and purpose

MR-guided radiotherapy (MRgRT) online plan adaptation accounts for tumor volume changes, interfraction motion and thus allows daily sparing of relevant organs at risk. Due to the high interfraction variability of bladder and rectum, patients with tumors in the pelvic region may strongly benefit from adaptive MRgRT. Currently, fast automatic annotation of anatomical structures is not available within the online MRgRT workflow. Therefore, the aim of this study was to train and validate a fast, accurate deep learning model for automatic MRI segmentation at the MR-Linac for future implementation in a clinical MRgRT workflow.

Materials and methods

For a total of 47 patients, T2w MRI data were acquired on a 1.5 T MR-Linac (Unity, Elekta) on five different days. Prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, body and bony structures were manually annotated. These training data, consisting of 232 data sets in total, were used to generate a deep learning based autocontouring model, which was validated on 20 unseen T2w MRIs. For quantitative evaluation, the validation set was contoured by a radiation oncologist to obtain gold standard contours (GSC), which were compared in MATLAB to the automatic contours (AIC). Dice similarity coefficients (DSC), 95% Hausdorff distances (95% HD), added path length (APL) and surface DSC (sDSC) were calculated in a caudal-cranial window of ±4 cm with respect to the prostate ends. For qualitative evaluation, five radiation oncologists scored the AIC on their possible usage within an online adaptive workflow as follows: (1) no modifications needed, (2) minor adjustments needed, (3) major adjustments/multiple minor adjustments needed, (4) not usable.
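The overlap and distance metrics named in this methods section can be sketched for boolean masks. This brute-force NumPy version ignores voxel spacing (real evaluations work in mm on surface voxels) and is illustrative only.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance, computed brute-force
    on foreground voxel coordinates (no mm spacing; illustrative only)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 2:8] = True   # shifted by one voxel
print(round(dice(a, b), 3))  # 0.833
print(hd95(a, b))            # 1.0
```

Surface DSC (sDSC) additionally tolerates boundary deviations up to a threshold (3 mm in this study) before counting them as disagreement.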

Results

The quantitative evaluation revealed a maximum median 95% HD of 6.9 mm for the rectum and minimum median 95% HD of 2.7 mm for the bladder. Maximal and minimal median DSC were detected for bladder with 0.97 and for penile bulb with 0.73, respectively. Using a tolerance level of 3 mm, the highest and lowest sDSC were determined for rectum (0.94) and anal canal (0.68), respectively.

Qualitative evaluation resulted in a mean score of 1.2 for AICs over all organs and patients across all expert ratings. Among the autocontoured structures, the highest mean score of 1.0 was observed for the anal canal, sacrum, left and right femur, and left pelvis, whereas the prostate received the lowest mean score of 2.0. In total, 80% of the contours were rated clinically acceptable, 16% to require minor adjustments and 4% major adjustments for online adaptive MRgRT.

Conclusion

In this study, an AI-based autocontouring was successfully trained for online adaptive MR-guided radiotherapy on the 1.5 T MR-Linac system. The developed model can automatically generate contours that are accepted by physicians (80%) or require only minor corrections (16%) for primary prostate irradiation with clinically used sequences.

Citations: 0
Towards quality management of artificial intelligence systems for medical applications
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2024.02.001
Lorenzo Mercolli, Axel Rominger, Kuangyu Shi

The use of artificial intelligence systems in clinical routine is still hampered by the necessity of a medical device certification and/or by the difficulty of implementing these systems in a clinic’s quality management system. In this context, the key questions for a user are how to ensure robust model predictions and how to appraise the quality of a model’s results on a regular basis.

In this paper we discuss some conceptual foundations for a clinical implementation of a machine learning system and argue that both vendors and users should take certain responsibilities, as is already common practice for high-risk medical equipment.

We propose the methodology from AAPM Task Group 100 report No. 283 as a conceptual framework for developing a risk-driven quality management program for a clinical process that encompasses a machine learning system. This is illustrated with an example of a clinical workflow. Our analysis shows how the risk evaluation in this framework can accommodate artificial intelligence-based systems independently of their robustness evaluation or the user's in-house expertise. In particular, we highlight how the degree of interpretability of a machine learning system can be systematically accounted for within the risk evaluation and in the development of a quality management system.
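The TG-100 methodology ranks failure modes of a clinical process by a risk priority number, RPN = O × S × D, where occurrence (O), severity (S) and lack of detectability (D) are each scored on a 1-10 scale. A minimal sketch, with hypothetical failure modes for an AI-assisted workflow (the steps, modes and scores below are illustrative, not from the paper):

```python
# Hypothetical failure modes of a clinical workflow containing an ML system,
# each scored for occurrence (O), severity (S) and lack of detectability (D).
failure_modes = [
    {"step": "model inference", "mode": "silent performance drift", "O": 6, "S": 8, "D": 9},
    {"step": "data ingestion",  "mode": "wrong series selected",    "O": 3, "S": 9, "D": 4},
    {"step": "reporting",       "mode": "result mis-transcribed",   "O": 2, "S": 6, "D": 3},
]

# TG-100-style risk priority number for each failure mode
for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

# Highest-RPN modes get priority in the quality management program
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(fm["step"], "/", fm["mode"], "-> RPN", fm["RPN"])
```

Note how a hard-to-detect failure mode (high D), such as silent model drift, dominates the ranking even at moderate occurrence: exactly the property that lets the framework account for a model's limited interpretability.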

Citations: 0
PSMA-PET improves deep learning-based automated CT kidney segmentation
IF 2 · CAS Tier 4 (Medicine) · Q1 Medicine · Pub Date: 2024-05-01 · DOI: 10.1016/j.zemedi.2023.08.006
Julian Leube, Matthias Horn, Philipp E. Hartrampf, Andreas K. Buck, Michael Lassmann, Johannes Tran-Gia

For dosimetry of radiopharmaceutical therapies, it is essential to determine the volume of relevant structures exposed to therapeutic radiation. For many radiopharmaceuticals, the kidneys represent an important organ-at-risk. To reduce the time required for kidney segmentation, which is often still performed manually, numerous approaches have been presented in recent years that apply deep learning-based methods to CT-based automated segmentation. While the automatic segmentation methods presented so far have been based solely on CT information, the aim of this work is to examine the added value of incorporating PSMA-PET data into the automatic kidney segmentation.

Methods

A total of 108 PET/CT examinations (53 [68Ga]Ga-PSMA-I&T and 55 [18F]F-PSMA-1007 examinations) were grouped to create a reference data set of manual segmentations of the kidney. These segmentations were performed by a human examiner. For each subject, two segmentations were carried out: one CT-based (detailed) segmentation and one PET-based (coarser) segmentation. Five different U-Net based approaches were applied to the data set to perform an automated segmentation of the kidney: CT images only, PET images only (coarse segmentation), a combination of CT and PET images, a combination of CT images and a PET-based coarse mask, and a CT image which had been pre-segmented using a PET-based coarse mask.
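The last input mode, a CT image pre-segmented with a PET-based coarse mask, can be sketched as restricting the CT to a dilated candidate region before it is fed to the network. The dilation margin and the masking-to-zero choice are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dilate4(mask, iterations=1):
    """Minimal 4-connected binary dilation (stand-in for scipy.ndimage)."""
    out = mask.copy()
    for _ in range(iterations):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out

def presegment_ct(ct, pet_coarse_mask, margin_voxels=1):
    """Zero CT voxels outside a dilated PET-based coarse kidney mask, so the
    segmentation network only sees the candidate region."""
    roi = dilate4(pet_coarse_mask, iterations=margin_voxels)
    return np.where(roi, ct, 0.0)

ct = np.arange(64.0).reshape(8, 8)       # toy CT slice
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True                    # coarse PET kidney mask
out = presegment_ct(ct, mask)
print(out[0, 0], out[3, 3])              # 0.0 27.0
```

The margin keeps some background context around the PET hotspot so that the detailed CT-based boundary is still recoverable by the network.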

Results

Out of all approaches, the best results were achieved by using CT images which had been pre-segmented using a PET-based coarse mask as input. In addition, this method performed significantly better than the segmentation based solely on CT, which was supported by the visual examination of the additional segmentations. In 80% of the cases, the segmentations created by exploiting the PET-based pre-segmentation were preferred by the nuclear physician.

Conclusion

This study shows that deep-learning based kidney segmentation can be significantly improved through the addition of a PET-based pre-segmentation. The presented method was shown to be especially beneficial for kidneys with cysts or kidneys that are closely adjacent to other organs such as the spleen, liver or pancreas. In the future, this could lead to a considerable reduction in the time required for dosimetry calculations as well as an improvement in the results.

Citations: 0
Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance
IF 2 CAS Q4 (Medicine) JCR Q1 Medicine Pub Date: 2024-05-01 DOI: 10.1016/j.zemedi.2023.01.008
Ghasem Hajianfar , Maziar Sabouri , Yazdan Salimi , Mehdi Amini , Soroush Bagheri , Elnaz Jenabi , Sepideh Hekmat , Mehdi Maghsudi , Zahra Mansouri , Maziar Khateri , Mohammad Hosein Jamshidi , Esmail Jafari , Ahmad Bitarafan Rajabi , Majid Assadi , Mehrdad Oveisi , Isaac Shiri , Habib Zaidi

Purpose

Whole-body bone scintigraphy (WBS) is one of the most widely used modalities in diagnosing malignant bone diseases during the early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of the disorders might be challenging because the patterns often reflect normal appearance that is prone to subjective interpretation. To simplify the gruelling, subjective, and prone-to-error task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with human observers.

Materials and Methods

After applying our exclusion criteria on 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into two parts, including training and testing, while a fraction of training data were considered for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, including squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features for dual-view input models. Model performance was reported through area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity and was compared with the DeLong test applied to ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers.

Results

DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, the InceptionV3 and InceptionResNetV2 CNN models and dual-view input with the AA aggregation method performed best in the first analysis, while DenseNet121 and InceptionResNetV2 as CNN backbones with dual-view input and AA aggregation achieved the best results in the second. The performance of the AI models was significantly higher than that of the human observers in the first analysis, whereas their performance was comparable in the second, although the AI model assessed the scans in drastically less time.

Conclusion

Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.
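The evaluation metrics reported in this abstract (AUC, accuracy, sensitivity, specificity) can be reproduced from predicted scores with plain numpy. The following is a generic sketch of those definitions, not the authors' evaluation code; the AUC is computed via the Mann-Whitney rank formulation that the DeLong test also builds on.

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as the probability that a random positive case scores higher
    than a random negative one (ties count 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()  # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def confusion_metrics(pred, labels):
    """Accuracy, sensitivity and specificity from binary predictions."""
    pred = np.asarray(pred, dtype=bool)
    labels = np.asarray(labels, dtype=bool)
    tp = np.sum(pred & labels)
    tn = np.sum(~pred & ~labels)
    fp = np.sum(pred & ~labels)
    fn = np.sum(~pred & labels)
    return (tp + tn) / len(labels), tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auc_mann_whitney(scores, labels))  # every positive outranks every negative -> 1.0
```

Comparing two correlated AUCs, as done here between models and against the human observers, additionally requires the covariance of the paired predictions; that is what the DeLong test adds on top of this point estimate.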

Citations: 0
Towards MR contrast independent synthetic CT generation
IF 2 CAS Q4 (Medicine) JCR Q1 Medicine Pub Date: 2024-05-01 DOI: 10.1016/j.zemedi.2023.07.001
Attila Simkó , Mikael Bylund , Gustav Jönsson , Tommy Löfstedt , Anders Garpebring , Tufve Nyholm , Joakim Jonsson

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics.

To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1 and T2 maps (i.e. contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, the robustness of this approach to the input MR contrast is compared with that of a model trained directly on the MR images. We evaluate the generated sCTs using pixel-wise metrics and by calculating mean radiological depths as an approximation of the mean delivered dose.

On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model.

Using a dataset of T2w MR images, our proposed model implements synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.
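The two-stage idea described above (first map any input contrast to quantitative PD, T1 and T2 maps, then generate the sCT from those maps) can be sketched as a simple function composition. Both functions below are stand-in dummies invented for illustration; only the pipeline structure reflects the abstract.

```python
import numpy as np

def to_quantitative_maps(mr_image):
    """Stand-in for the pre-trained model that predicts proton density,
    T1 and T2 maps from an MR image of arbitrary contrast."""
    pd_map = mr_image / (mr_image.max() + 1e-8)  # dummy normalisation
    return np.stack([pd_map, 0.5 * pd_map, 0.25 * pd_map], axis=0)

def generate_sct(qmaps):
    """Stand-in for the sCT generator: it only ever sees the
    contrast-independent maps, never the raw MR contrast."""
    return 1000.0 * qmaps.mean(axis=0) - 500.0  # dummy HU-like values

mr = np.random.rand(64, 64)                   # any contrast: T1w, T2w, ...
sct = generate_sct(to_quantitative_maps(mr))  # same pipeline for every contrast
print(sct.shape)  # (64, 64)
```

The design choice this illustrates is that, as the abstract argues, robustness to unseen contrasts is delegated to the pre-trained quantitative-mapping stage, so the map-to-sCT stage never needs to see the raw contrast.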

Citations: 0
Editorial Board + Consulting Editorial Board
IF 2 CAS Q4 (Medicine) JCR Q1 Medicine Pub Date: 2024-05-01 DOI: 10.1016/S0939-3889(24)00034-5
Citations: 0
Application of multi-method-multi-model inference to radiation related solid cancer excess risks models for astronaut risk assessment
IF 2 CAS Q4 (Medicine) JCR Q1 Medicine Pub Date: 2024-02-01 DOI: 10.1016/j.zemedi.2023.06.003
Luana Hafner , Linda Walsh

The impact of including model-averaged excess radiation risks (ER) in a measure of radiation-attributed decrease of survival (RADS) for the outcome of all solid cancer incidence, and the effect on the associated uncertainties, is demonstrated. It is shown that RADS applying weighted model-averaged ER based on AIC weights results in smaller risk estimates with narrower 95% CIs than RADS using ER based on BIC weights. Further, a multi-method-multi-model inference approach is introduced that allows the calculation of one general RADS estimate, providing a weighted average risk estimate for a lunar and a Mars mission. For males the general RADS estimate is found to be 0.42% (95% CI: 0.38%; 0.45%) and for females 0.67% (95% CI: 0.59%; 0.75%) for a lunar mission, and 2.45% (95% CI: 2.23%; 2.67%) for males and 3.91% (95% CI: 3.44%; 4.39%) for females for a Mars mission, assuming an age at exposure of 40 years and an attained age of 65 years.
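The AIC-based weighting underlying the multi-model averages in this abstract follows the standard Akaike-weight construction. The sketch below is a generic illustration with invented numbers, not the study's models or risk values; BIC-based weights are obtained the same way with BIC in place of AIC.

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    with delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged_risk(risk_estimates, aic_values):
    """Multi-model averaged excess-risk estimate."""
    return float(np.dot(akaike_weights(aic_values), risk_estimates))

# invented example: three competing risk models
risks = [0.40, 0.45, 0.50]    # hypothetical excess-risk estimates (%)
aics = [100.0, 102.0, 110.0]  # hypothetical model AIC values
print(model_averaged_risk(risks, aics))  # dominated by the lowest-AIC model
```

Because the weights decay exponentially in the AIC difference, a model 10 AIC units above the best contributes almost nothing to the average, which is why the choice of AIC versus BIC weights can visibly shift the pooled risk estimate and its CI.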

Citations: 0