
Latest literature in Medical physics

Machine learning model for fast prediction and uncertainty quantification of needle deflection during prostate biopsy.
IF 3.2 Pub Date : 2026-02-01 DOI: 10.1002/mp.70314
Nathan Hoffman, Lidia Al-Zogbi, Axel Krieger, Junichi Tokuda, Pedro Moreira, Mark Fuge
Background: Accurate needle placement is essential for prostate biopsy. Recently, transperineal prostate biopsies have received renewed interest due to concern over infection from conventional transrectal biopsies. However, accurate needle placement is more challenging in the transperineal approach than in the transrectal approach, because the long insertion distance leads to large targeting errors and repeated insertion attempts. Improved procedure planning tools that can predict the deviation of the needle could reduce the targeting error and the number of insertion attempts. Predicting the deflection magnitude requires a model of biopsy needle deflection, which in turn requires information about tissue material properties. However, the material properties of tissue in patients cannot be easily obtained. Accounting for this uncertainty in patient tissue properties requires a model capable of quantifying uncertainty in needle deflection as a function of a distribution of tissue properties. A Monte Carlo uncertainty quantification requires thousands of samples, but published needle deflection prediction models cannot produce this many samples quickly enough for intraoperative procedure planning.

Purpose: This work seeks to develop a model of needle deflection fast enough for use in intraoperative procedure planning, validate this model against experimental results, and integrate it into a Monte Carlo uncertainty quantification model.

Methods: This work used a mechanics-based model of biopsy needle deflection to train a Fourier feature neural network (FFNN) model in order to make predictions at low computational cost. Both models were validated against experimental data. The neural network model was then used in a Monte Carlo uncertainty quantification model to quantify uncertainty in needle deflection arising from uncertain tissue mechanical properties.

Results: This work (1) implemented a mechanics-based model and an FFNN model. Both models were validated against previously published experiments carried out with tissue phantoms, and both showed close agreement with the experimental data. (2) We showed that our FFNN model was more accurate than a baseline ordinary least squares model, introducing only about 0.3 mm of tip deflection error relative to the mechanics-based model. We also showed that our FFNN model makes unbiased predictions with respect to the amount of deflection. (3) We demonstrated a Monte Carlo uncertainty quantification model of needle deflection with a low computational cost of about 20 CPU-seconds. We used our uncertainty quantification model to show how the depth, stiffness, and magnitude of uncertainty in a layer of tissue affect needle deflection. In addition, we showed a simple clinical example of the use of our model.

Conclusions: This work demonstrates a Monte Carlo uncertainty quantification model of needle deflection with a low computational cost. This approach shows promise for future use in procedure planning for prostate biopsy, as well as in other transperineal procedures that use flexible needles, such as cryoablation and brachytherapy.
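The workflow this abstract describes — a fast surrogate for the mechanics model, sampled thousands of times over a distribution of tissue properties — can be sketched as follows. The Fourier feature mapping is the standard random-feature encoding; the linear readout, the toy stiffness distribution, and all numbers are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier feature mapping, the input encoding an FFNN typically uses.
B = rng.normal(scale=1.0, size=(16, 2))        # fixed random projection matrix
def fourier_features(v):
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Stand-in surrogate: any fast map from (depth, stiffness) to tip deflection;
# here a fixed random linear readout of the Fourier features, purely illustrative.
w = rng.normal(scale=0.1, size=32)
def surrogate_deflection(depth, stiffness):
    v = np.stack([depth, stiffness], axis=-1)
    return fourier_features(v) @ w             # "deflection" in mm (toy values)

# Monte Carlo UQ: sample the uncertain tissue property, evaluate the cheap
# surrogate for every sample, and summarize the deflection distribution.
depth = np.full(10_000, 0.6)                   # fixed insertion depth
stiffness = rng.normal(1.0, 0.15, size=10_000) # uncertain tissue stiffness
defl = surrogate_deflection(depth, stiffness)
print(f"tip deflection: {defl.mean():.3f} ± {defl.std():.3f} mm")
```

Replacing the random readout with a network trained against the mechanics-based model would give the FFNN surrogate the abstract describes; the Monte Carlo loop itself is unchanged, and its cost is dominated by one cheap matrix product per batch of samples.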
Cited: 0
Points of interest linear attention network for real-time non-rigid liver volume to surface registration.
Pub Date : 2024-05-17 DOI: 10.1002/mp.17108
Zeming Chen, Beiji Zou, Xiaoyan Kui, Yangyang Shi, Ding Lv, Liming Chen

Background: In laparoscopic liver surgery, accurately predicting the displacement of key intrahepatic anatomical structures is crucial for informing the surgeon's intraoperative decision-making. However, because of the constrained surgical perspective, only a partial surface of the liver is typically visible, which makes non-rigid volume-to-surface registration methods essential. Traditional registration methods, however, lack the necessary accuracy and cannot meet real-time requirements.

Purpose: To achieve high-precision liver registration with only partial surface information and estimate the displacement of internal liver tissues in real time.

Methods: We propose a novel neural network architecture tailored for real-time non-rigid liver volume-to-surface registration. The network uses a voxel-based method, integrating sparse convolution with the newly proposed points of interest (POI) linear attention module, which computes attention only on the previously extracted POI. Additionally, we identified RMSINorm as the most suitable normalization method.
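As a rough illustration of why linear attention restricted to POI is cheap, here is a minimal NumPy sketch. The elu+1 feature map and the shapes are generic linearized-attention conventions, not the paper's exact module: keys and values are summarized once over all points, and attention is then evaluated only at the POI queries.

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map commonly used to linearize softmax attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def poi_linear_attention(feats, poi_idx):
    """Hypothetical sketch: linear attention evaluated only at points of
    interest (POI). The (d, d) key-value summary is built once over all N
    points, so the cost is O(N*d^2 + P*d^2) instead of O(N^2*d) for
    pairwise attention over all N points."""
    q = elu_plus_one(feats[poi_idx])           # (P, d) queries at POI only
    k = elu_plus_one(feats)                    # (N, d) keys over all points
    v = feats                                  # (N, d) values
    kv = k.T @ v                               # (d, d) global key-value summary
    z = k.sum(axis=0)                          # (d,)  normalizer
    return (q @ kv) / (q @ z)[:, None]         # (P, d) attended POI features

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 16))
out = poi_linear_attention(feats, poi_idx=np.arange(32))
print(out.shape)  # (32, 16)
```

The design point is that the quadratic term never appears: doubling the number of voxels doubles the cost of the summary, while the per-POI cost is unchanged.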

Results: We evaluated our proposed network and other networks on a dataset generated from real liver models and on two real datasets. Our method achieves an average error of 4.23 mm and a mean frame rate of 65.4 fps on the generated dataset, and an average error of 8.29 mm on the human breathing motion dataset.

Conclusions: Our network outperforms CNN-based networks and other attention networks in terms of accuracy and inference speed.

Cited: 0
Fast four-dimensional cone-beam computed tomography reconstruction using deformable convolutional networks.
Pub Date : 2022-10-01 Epub Date: 2022-06-22 DOI: 10.1002/mp.15806
Zhuoran Jiang, Yushi Chang, Zeyu Zhang, Fang-Fang Yin, Lei Ren

Background: Although four-dimensional cone-beam computed tomography (4D-CBCT) is valuable for providing onboard image guidance during radiotherapy of moving targets, it requires a long acquisition time to achieve sufficient image quality for target localization. To improve its utility, it is highly desirable to reduce the 4D-CBCT scanning time while maintaining high-quality images. Current motion-compensated methods are limited by slow speed and by compensation errors due to severe intraphase undersampling.

Purpose: In this work, we aim to propose an alternative feature-compensated method to realize fast 4D-CBCT with high-quality images.

Methods: We proposed a feature-compensated deformable convolutional network (FeaCo-DCN) to perform interphase compensation in the latent feature space, which has not been explored by previous studies. In FeaCo-DCN, encoding networks extract features from each phase, and then features of other phases are deformed to those of the target phase via deformable convolutional networks. Finally, a decoding network combines and decodes features from all phases to yield high-quality images of the target phase. The proposed FeaCo-DCN was evaluated using lung cancer patient data.
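The interphase deformation step rests on sampling one phase's feature map at offset locations, which is what a deformable convolution does per kernel tap. Below is a minimal NumPy sketch of that bilinear sampling; the offsets here are hand-set for illustration, whereas in FeaCo-DCN they are predicted by the network.

```python
import numpy as np

def bilinear_warp(feat, offsets):
    """Sample a feature map `feat` (H, W) at locations displaced by per-pixel
    `offsets` (H, W, 2) = (dy, dx), using bilinear interpolation with
    border clamping. This is the core sampling step of a deformable
    convolution, shown for a single channel and a single kernel tap."""
    h, w = feat.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    y = np.clip(ys + offsets[..., 0], 0, h - 1)
    x = np.clip(xs + offsets[..., 1], 0, w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

feat = np.arange(16.0).reshape(4, 4)
shift = np.zeros((4, 4, 2))
shift[..., 1] = 1.0   # sample one pixel to the right everywhere
print(bilinear_warp(feat, shift))
```

Because the offsets are continuous, the sampling is differentiable in them, which is what lets the network learn the interphase deformation end to end.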

Results: (1) FeaCo-DCN generated high-quality images with accurate and clear structures for a fast 4D-CBCT scan; (2) 4D-CBCT images reconstructed by FeaCo-DCN achieved 3D tumor localization accuracy within 2.5 mm; (3) image reconstruction is nearly real time; and (4) FeaCo-DCN achieved superior performance by all metrics compared to the top-ranked techniques in the AAPM SPARE Challenge.

Conclusion: The proposed FeaCo-DCN is effective and efficient in reconstructing 4D-CBCT while reducing the scanning time by about 90%, which can be highly valuable for moving-target localization in image-guided radiotherapy.

Cited: 3
Automatic detection of contouring errors using convolutional neural networks.
Pub Date : 2019-11-01 Epub Date: 2019-09-26 DOI: 10.1002/mp.13814
Dong Joo Rhee, Carlos E Cardenas, Hesham Elhalawani, Rachel McCarroll, Lifei Zhang, Jinzhong Yang, Adam S Garden, Christine B Peterson, Beth M Beadle, Laurence E Court

Purpose: To develop an autocontouring tool for normal head and neck structures that could be used to automatically detect errors in autocontours produced by a clinically validated autocontouring tool.

Methods: An autocontouring tool based on convolutional neural networks (CNN) was developed for 16 normal structures of the head and neck and tested on its ability to identify contour errors from a clinically validated multiatlas-based autocontouring system (MACS). The computed tomography (CT) scans and clinical contours from 3495 patients were semiautomatically curated and used to train and validate the CNN-based autocontouring tool. The final accuracy of the tool was evaluated by calculating the Sørensen-Dice similarity coefficients (DSC) and Hausdorff distances between the automatically generated contours and physician-drawn contours on 174 internal and 24 external CT scans. Lastly, the CNN-based tool was evaluated on 60 patients' CT scans to investigate the possibility of detecting contouring failures. The contouring failures in these patients were classified as either minor or major errors. The criteria for detecting contouring errors were determined by analyzing the DSC between the CNN- and MACS-based contours under two independent scenarios: (a) contours with minor errors are clinically acceptable and (b) contours with minor errors are clinically unacceptable.
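The DSC-based error check reduces to a simple computation. The sketch below uses a hypothetical single threshold of 0.8 purely for illustration; the paper instead derives structure-specific criteria from the two scenarios above.

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def flag_contour(cnn_mask, macs_mask, threshold=0.8):
    """Hypothetical check: flag a MACS contour for review when its agreement
    with the independent CNN contour falls below a DSC threshold. The 0.8
    default is illustrative, not the paper's per-structure criterion."""
    return dice(cnn_mask, macs_mask) < threshold

# Toy 2D masks: two 4x4 squares offset by one pixel (overlap = 12 pixels).
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[2:6, 3:7] = True
print(dice(a, b))         # 2*12 / (16+16) = 0.75
print(flag_contour(a, b))
```

In practice the masks would come from rasterizing the CNN and MACS contours on the same CT grid, with one threshold tuned per structure against the acceptable/unacceptable labels.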

Results: The average DSC and Hausdorff distance of our CNN-based tool were 98.4%/1.23 cm for brain, 89.1%/0.42 cm for eyes, 86.8%/1.28 cm for mandible, 86.4%/0.88 cm for brainstem, 83.4%/0.71 cm for spinal cord, 82.7%/1.37 cm for parotids, 80.7%/1.08 cm for esophagus, 71.7%/0.39 cm for lenses, 68.6%/0.72 cm for optic nerves, 66.4%/0.46 cm for cochleas, and 40.7%/0.96 cm for optic chiasm. With the error detection tool, the proportions of the clinically unacceptable MACS contours that were correctly detected were 0.99/0.80 on average except for the optic chiasm, when contours with minor errors are clinically acceptable/unacceptable, respectively. The proportions of the clinically acceptable MACS contours that were correctly detected were 0.81/0.60 on average except for the optic chiasm, when contours with minor errors are clinically acceptable/unacceptable, respectively.

Conclusion: Our CNN-based autocontouring tool performed well on both the publicly available and the internal datasets. Furthermore, our results show that CNN-based algorithms are able to identify ill-defined contours from a clinically validated, routinely used multiatlas-based autocontouring tool. Therefore, our CNN-based tool can effectively perform automatic verification of MACS contours.

Cited: 63
Intercomparison of MR-informed PET image reconstruction methods.
Pub Date : 2019-11-01 Epub Date: 2019-10-04 DOI: 10.1002/mp.13812
James Bland, Abolfazl Mehranian, Martin A Belzunce, Sam Ellis, Casper da Costa-Luis, Colm J McGinnity, Alexander Hammers, Andrew J Reader

Purpose: Numerous image reconstruction methodologies for positron emission tomography (PET) have been developed that incorporate magnetic resonance (MR) imaging structural information, producing reconstructed images with improved suppression of noise and reduced partial volume effects. However, the influence of MR structural information also increases the possibility of suppression or bias of structures present only in the PET data (PET-unique regions). To address this, further developments for MR-informed methods have been proposed, for example, through inclusion of the current reconstructed PET image, alongside the MR image, in the iterative reconstruction process. In the present work, a number of kernel and maximum a posteriori (MAP) methodologies are compared, with the aim of identifying methods that enable a favorable trade-off between the suppression of noise and the retention of unique features present in the PET data.

Methods: The reconstruction methods investigated were: the MR-informed conventional and spatially compact kernel methods, referred to as KEM and KEM largest value sparsification (LVS) respectively; the MR-informed Bowsher and Gaussian MR-guided MAP methods; and the PET-MR-informed hybrid kernel and anato-functional MAP methods. The trade-off between improving the reconstruction of the whole brain region and the PET-unique regions was investigated for all methods in comparison with postsmoothed maximum likelihood expectation maximization (MLEM), evaluated in terms of structural similarity index (SSIM), normalized root mean square error (NRMSE), bias, and standard deviation. Both simulated BrainWeb (10 noise realizations) and real [18 F] fluorodeoxyglucose (FDG) three-dimensional datasets were used. The real [18 F]FDG dataset was augmented with simulated tumors to allow comparison of the reconstruction methodologies for the case of known regions of PET-MR discrepancy and evaluated at full counts (100%) and at a reduced (10%) count level.
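As a minimal illustration of the kernel method (KEM) family compared here, the sketch below runs a kernelized MLEM update on a toy 1D problem: the image is parameterized as x = Kα, with the kernel matrix K built from a two-region "MR" label map, and an identity system matrix standing in for the PET forward projector. All sizes and data are illustrative, not from the paper.

```python
import numpy as np

def kem_mlem(y, A, K, n_iter=50):
    """Kernelized EM sketch: the image is x = K @ alpha and the multiplicative
    MLEM update is applied to the coefficients alpha, so MR-derived structure
    in K regularizes the reconstruction."""
    alpha = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(A.shape[0]))     # sensitivity in coefficient space
    for _ in range(n_iter):
        proj = A @ (K @ alpha)                   # forward-project current image
        ratio = y / np.maximum(proj, 1e-12)
        alpha *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
    return K @ alpha                             # reconstructed image

# Toy 1D problem: two "anatomical regions" define a row-normalized kernel.
n = 8
mr = np.array([0, 0, 0, 0, 1, 1, 1, 1])          # toy MR label per pixel
K = (mr[:, None] == mr[None, :]).astype(float)
K /= K.sum(axis=1, keepdims=True)                # each row averages its region
A = np.eye(n)                                    # identity "system matrix"
y = np.array([1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 4.0])
x = kem_mlem(y, A, K)
print(np.round(x, 3))  # converges to y in this noiseless, region-constant case
```

The PET-unique-region concern the abstract raises is visible even in this toy: a hot pixel present in y but absent from the MR labels would be averaged away by K, which is what the hybrid (PET-MR-informed) kernels attempt to mitigate.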

Results: For the high-count simulated and real data studies, the anato-functional MAP method performed better than the other methods under investigation (MR-informed, PET-MR-informed, and postsmoothed MLEM) in terms of achieving the best trade-off for the reconstruction of the whole brain and PET-unique regions, assessed in terms of the SSIM, NRMSE, and bias vs standard deviation. The inclusion of PET information in the anato-functional MAP method enables the reconstruction of PET-unique regions to attain similarly low levels of bias as unsmoothed MLEM, while moderately improving the whole brain image quality for low levels of regularization. However, for low-count simulated datasets the anato-functional MAP method performs poorly, due to the inclusion of noisy PET information in the regularization term. For the low-count simulated dataset, KEM LVS and, to a lesser extent, HKEM performed better than the other methods under investigation in terms of achieving the best trade-off for the reconstruction of the whole brain and PET-unique regions, assessed in terms of the SSIM, NRMSE, and bias vs standard deviation.

Conclusions: For the reconstruction of noisy data, multiple MR-informed methods produce favorable trade-offs between the whole brain and PET-unique regions in terms of the image quality metrics SSIM and NRMSE, substantially outperforming the whole-image denoising of postsmoothed MLEM.

{"title":"Intercomparison of MR-informed PET image reconstruction methods.","authors":"James Bland, Abolfazl Mehranian, Martin A Belzunce, Sam Ellis, Casper da Costa-Luis, Colm J McGinnity, Alexander Hammers, Andrew J Reader","doi":"10.1002/mp.13812","DOIUrl":"10.1002/mp.13812","url":null,"abstract":"<p><strong>Purpose: </strong>Numerous image reconstruction methodologies for positron emission tomography (PET) have been developed that incorporate magnetic resonance (MR) imaging structural information, producing reconstructed images with improved suppression of noise and reduced partial volume effects. However, the influence of MR structural information also increases the possibility of suppression or bias of structures present only in the PET data (PET-unique regions). To address this, further developments for MR-informed methods have been proposed, for example, through inclusion of the current reconstructed PET image, alongside the MR image, in the iterative reconstruction process. In this present work, a number of kernel and maximum a posteriori (MAP) methodologies are compared, with the aim of identifying methods that enable a favorable trade-off between the suppression of noise and the retention of unique features present in the PET data.</p><p><strong>Methods: </strong>The reconstruction methods investigated were: the MR-informed conventional and spatially compact kernel methods, referred to as KEM and KEM largest value sparsification (LVS) respectively; the MR-informed Bowsher and Gaussian MR-guided MAP methods; and the PET-MR-informed hybrid kernel and anato-functional MAP methods. The trade-off between improving the reconstruction of the whole brain region and the PET-unique regions was investigated for all methods in comparison with postsmoothed maximum likelihood expectation maximization (MLEM), evaluated in terms of structural similarity index (SSIM), normalized root mean square error (NRMSE), bias, and standard deviation. 
Both simulated BrainWeb (10 noise realizations) and real [<sup>18</sup> F] fluorodeoxyglucose (FDG) three-dimensional datasets were used. The real [<sup>18</sup> F]FDG dataset was augmented with simulated tumors to allow comparison of the reconstruction methodologies for the case of known regions of PET-MR discrepancy and evaluated at full counts (100%) and at a reduced (10%) count level.</p><p><strong>Results: </strong>For the high-count simulated and real data studies, the anato-functional MAP method performed better than the other methods under investigation (MR-informed, PET-MR-informed and postsmoothed MLEM), in terms of achieving the best trade-off for the reconstruction of the whole brain and PET-unique regions, assessed in terms of the SSIM, NRMSE, and bias vs standard deviation. The inclusion of PET information in the anato-functional MAP method enables the reconstruction of PET-unique regions to attain similarly low levels of bias as unsmoothed MLEM, while moderately improving the whole brain image quality for low levels of regularization. However, for low count simulated datasets the anato-functional MAP method performs poorly, due to the inclusion of noisy PET information in the regularization term. For the low counts simulated dataset, KEM LVS and to a lesser extent, HKEM performed better than the ot","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":"46 11","pages":"5055-5074"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6899618/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49686809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
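The kernel methods compared above (KEM and its variants) represent the PET image as x = Kα, where the kernel matrix K is built from MR-derived feature vectors, and run the MLEM update on the coefficients α. A minimal dense-matrix sketch of that update follows; the toy system matrix and iteration count are illustrative stand-ins, since real reconstructions use matrix-free projectors rather than explicit matrices:

```python
import numpy as np

def kem_mlem(y, A, K, n_iters=50):
    """Kernelized MLEM: reconstruct x = K @ alpha from sinogram counts y.

    y : measured counts, shape (m,)
    A : system (projection) matrix, shape (m, n) -- toy stand-in here
    K : kernel matrix derived from the MR image, shape (n, n)
    """
    AK = A @ K
    sens = AK.T @ np.ones(len(y))             # sensitivity term
    alpha = np.ones(K.shape[1])               # nonnegative initialization
    for _ in range(n_iters):
        proj = np.maximum(AK @ alpha, 1e-12)  # forward projection
        alpha *= (AK.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return K @ alpha                          # reconstructed PET image
```

Setting K to the identity recovers ordinary MLEM; the MR information enters only through K, which is why features absent from the MR image (the PET-unique regions above) can end up suppressed.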
Citations: 0
Segmentation of dental cone-beam CT scans affected by metal artifacts using a mixed-scale dense convolutional neural network.
Pub Date : 2019-11-01 Epub Date: 2019-09-13 DOI: 10.1002/mp.13793
Jordi Minnema, Maureen van Eijnatten, Allard A Hendriksen, Niels Liberton, Daniël M Pelt, Kees Joost Batenburg, Tymour Forouzanfar, Jan Wolff

Purpose: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts.

Method: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard.

Results: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 mm ± 0.13 mm, 0.43 mm ± 0.16 mm, 0.40 mm ± 0.12 mm and 0.57 mm ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae.

Conclusion: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.
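The Dice similarity coefficients reported above can be computed directly from two binary segmentation masks; a minimal numpy sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:   # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

Dice weights the intersection twice against the sum of the mask sizes, so it penalizes both over- and under-segmentation symmetrically; the ~0.87 values above mean the CNN masks overlap the gold standard closely.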

Medical physics. 2019;46(11):5027-5035. DOI: 10.1002/mp.13793.
Citations: 35
Technical Note: PYRO-NN: Python reconstruction operators in neural networks.
Pub Date : 2019-11-01 Epub Date: 2019-08-27 DOI: 10.1002/mp.13753
Christopher Syben, Markus Michen, Bernhard Stimpel, Stephan Seitz, Stefan Ploner, Andreas K Maier

Purpose: Recently, several attempts were conducted to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for mathematically unambiguously solvable problems.

Methods: PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors, and back-projectors accelerated with CUDA provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems.

Results: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows a simple use of the layers as known from Tensorflow. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 licence at https://github.com/csyben/PYRO-NN.

Conclusions: PYRO-NN builds on the prevalent deep learning framework Tensorflow and allows setting up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step toward reproducible research and give the medical physics community a toolkit to elevate medical image reconstruction with new deep learning techniques.
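PYRO-NN's central idea is that fixed, analytically known operations (projectors, back-projectors, filters) become non-trainable network layers. The ramp filtering at the heart of FBP is one such operation; the numpy sketch below shows the concept only and is not PYRO-NN's actual API, which wraps CUDA projectors as Tensorflow layers:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply a Ram-Lak (ramp) filter to each projection row of a sinogram.

    sinogram: array of shape (num_angles, num_detector_bins).
    This is the fixed, non-trainable filtering step of FBP -- exactly the
    kind of 'known operator' a framework like PYRO-NN embeds as a layer.
    """
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))              # |f| in cycles/sample
    spectrum = np.fft.fft(sinogram, axis=1) * ramp
    return np.real(np.fft.ifft(spectrum, axis=1))
```

Because the operation is linear and its gradient is known analytically, it can sit inside a trainable pipeline without contributing any parameters, which is what keeps "known operator" networks small and well-posed.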

Medical physics. 2019;46(11):5110-5115. DOI: 10.1002/mp.13753.
Citations: 41
Technical Note: Simultaneous segmentation and relaxometry for MRI through multitask learning.
Pub Date : 2019-10-01 Epub Date: 2019-08-31 DOI: 10.1002/mp.13756
Peng Cao, Jing Liu, Shuyu Tang, Andrew P Leynes, Janine M Lupo, Duan Xu, Peder E Z Larson

Purpose: This study demonstrated a magnetic resonance (MR) signal multitask learning method for three-dimensional (3D) simultaneous segmentation and relaxometry of human brain tissues.

Materials and methods: A 3D inversion-prepared balanced steady-state free precession sequence was used for acquiring in vivo multicontrast brain images. The deep neural network contained three residual blocks, and each block had 8 fully connected layers with sigmoid activation, layer norm, and 256 neurons in each layer. Online-synthesized MR signal evolutions and labels were used to train the neural network batch-by-batch. Empirically defined ranges of T1 and T2 values for the normal gray matter, white matter, and cerebrospinal fluid (CSF) were used as the prior knowledge. MRI brain experiments were performed on three healthy volunteers. The mean and standard deviation for the T1 and T2 values in vivo were reported and compared to literature values. Additional animal (N = 6) and prostate patient (N = 1) experiments were performed to compare the estimated T1 and T2 values with those from gold standard methods and to demonstrate clinical applications of the proposed method.

Results: In the animal validation experiment, the differences/errors (mean difference ± standard deviation of the difference) between the T1 and T2 values estimated by the proposed method and the ground truth were 113 ± 486 and 154 ± 512 ms for T1, and 5 ± 33 and 7 ± 41 ms for T2, respectively. In the healthy volunteer experiments (N = 3), whole-brain segmentation and relaxometry were finished within ~5 s. The estimated apparent T1 and T2 maps were in accordance with known brain anatomy, and not affected by coil sensitivity variation. Gray matter, white matter, and CSF were successfully segmented. The deep neural network can also generate synthetic T1- and T2-weighted images.

Conclusion: The proposed multitask learning method can directly generate brain apparent T1 and T2 maps, as well as synthetic T1- and T2-weighted images, in conjunction with segmentation of gray matter, white matter, and CSF.
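Training on online-synthesized signals with tissue-specific (T1, T2) priors can be sketched as sampling (T1, T2, class) triples for each batch. The ranges below are illustrative placeholders, not the empirical values used in the paper:

```python
import numpy as np

# Illustrative (T1, T2) ranges in ms per tissue class -- placeholder
# values, NOT the empirically defined ranges from the paper.
TISSUE_RANGES = {
    "gray_matter":  {"T1": (1000, 1600), "T2": (80, 120)},
    "white_matter": {"T1": (700, 1100),  "T2": (60, 90)},
    "csf":          {"T1": (3000, 5000), "T2": (1500, 2500)},
}

def sample_training_labels(n_per_class, rng=None):
    """Sample (T1, T2, class_index) arrays for one on-the-fly training batch."""
    rng = np.random.default_rng(rng)
    t1, t2, cls = [], [], []
    for idx, ranges in enumerate(TISSUE_RANGES.values()):
        t1.append(rng.uniform(*ranges["T1"], n_per_class))
        t2.append(rng.uniform(*ranges["T2"], n_per_class))
        cls.append(np.full(n_per_class, idx))
    return np.concatenate(t1), np.concatenate(t2), np.concatenate(cls)
```

Each sampled (T1, T2) pair would then be fed through a signal model of the inversion-prepared bSSFP sequence to synthesize the corresponding signal evolution, giving the network matched (signal, relaxometry, class) training targets without any manually labeled in vivo data.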

Medical physics. 2019;46(10):4610-4621. DOI: 10.1002/mp.13756.
Citations: 3
The current NRC definitions of therapy misadministration are vague, do not reflect the norms of clinical practice, and should be rewritten. For the proposition.
IF 3.2 Pub Date : 2004-04-01 DOI: 10.1118/1.1651486
Howard Amols
Medical physics. 2004;31(4):691-693. DOI: 10.1118/1.1651486.
Citations: 0