Journal of Medical Imaging — Latest Publications
Open-source graphical user interface for the creation of synthetic skeletons for medical image analysis.
IF 2.4 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-14 | DOI: 10.1117/1.JMI.11.3.036001
Christian Herz, Nicolas Vergnet, Sijie Tian, Abdullah H Aly, Matthew A Jolley, Nathanael Tran, Gabriel Arenas, Andras Lasso, Nadav Schwartz, Kathleen E O'Neill, Paul A Yushkevich, Alison M Pouch

Purpose: Deformable medial modeling is an inverse skeletonization approach to representing anatomy in medical images, which can be used for statistical shape analysis and assessment of patient-specific anatomical features such as locally varying thickness. It involves deforming a pre-defined synthetic skeleton, or template, to anatomical structures of the same class. The lack of software for creating such skeletons has been a limitation to more widespread use of deformable medial modeling. Therefore, the objective of this work is to present an open-source user interface (UI) for the creation of synthetic skeletons for a range of medial modeling applications in medical imaging.

Approach: A UI for interactive design of synthetic skeletons was implemented in 3D Slicer, an open-source medical image analysis application. The steps in synthetic skeleton design include importation and skeletonization of a 3D segmentation, followed by interactive 3D point placement and triangulation of the medial surface such that the desired branching configuration of the anatomical structure's medial axis is achieved. Synthetic skeleton design was evaluated in five clinical applications. Compatibility of the synthetic skeletons with open-source software for deformable medial modeling was tested, and representational accuracy of the deformed medial models was evaluated.
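The artifact produced by these design steps is ultimately a simple data structure: a set of 3D medial-surface points plus a triangle index list. As a rough sketch (plain NumPy, not the 3D Slicer UI itself; the grid size and flat-sheet geometry are illustrative assumptions), a minimal single-sheet synthetic skeleton could be built like this:

```python
import numpy as np

def make_sheet_template(m, n):
    """Build a minimal synthetic-skeleton template: an m x n grid of
    medial-surface points plus a triangulation (two triangles per grid
    cell). Real templates are placed interactively in 3D Slicer to match
    an anatomy's branching configuration; this is only the data layout."""
    # Grid of 3D points on the z=0 plane (a flat medial sheet).
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, m))
    points = np.stack([xs.ravel(), ys.ravel(), np.zeros(m * n)], axis=1)

    # Triangulate each quad cell into two triangles.
    tris = []
    for i in range(m - 1):
        for j in range(n - 1):
            a = i * n + j              # index of the quad's top-left corner
            b, c, d = a + 1, a + n, a + n + 1
            tris.append((a, b, d))
            tris.append((a, d, c))
    return points, np.array(tris)

pts, tris = make_sheet_template(4, 5)   # 20 points, 24 triangles
```

Anatomies with branching medial axes would need several such sheets joined along shared point rows, which is what the interactive point placement in the UI supports.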

Results: Three users designed synthetic skeletons of anatomies with various topologies: the placenta, aortic root wall, mitral valve, cardiac ventricles, and the uterus. The skeletons were compatible with skeleton-first and boundary-first software for deformable medial modeling. The fitted medial models achieved good representational accuracy with respect to the 3D segmentations from which the synthetic skeletons were generated.

Conclusions: Synthetic skeleton design has been a practical challenge in leveraging deformable medial modeling for new clinical applications. This work demonstrates an open-source UI for user-friendly design of synthetic skeletons for anatomies with a wide range of topologies.

Citations: 0
Fiberscopic pattern removal for optimal coverage in 3D bladder reconstructions of fiberscope cystoscopy videos.
IF 2.4 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-17 | DOI: 10.1117/1.JMI.11.3.034002
Rachel Eimen, Halina Krzyzanowska, Kristen R Scarpato, Audrey K Bowden

Purpose: In the current clinical standard of care, cystoscopic video is not routinely saved because it is cumbersome to review. Instead, clinicians rely on brief procedure notes and still frames to manage bladder pathology. Preserving discarded data via 3D reconstructions, which are convenient to review, has the potential to improve patient care. However, many clinical videos are collected by fiberscopes, which are lower cost but induce a pattern on frames that inhibits 3D reconstruction. The aim of our study is to remove the honeycomb-like pattern present in fiberscope-based cystoscopy videos to improve the quality of 3D bladder reconstructions.

Approach: Our study introduces an algorithm that applies a notch filtering mask in the Fourier domain to remove the honeycomb-like pattern from clinical cystoscopy videos collected by fiberscope as a preprocessing step to 3D reconstruction. We produce 3D reconstructions with the video before and after removing the pattern, which we compare with a metric termed the area of reconstruction coverage (ARC), defined as the surface area (in pixels) of the reconstructed bladder. All statistical analyses use paired t-tests.

Results: Preprocessing using our method for pattern removal enabled reconstruction for all (n=5) cystoscopy videos included in the study and produced a statistically significant increase in bladder coverage (p=0.018).
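The paired comparison of coverage before and after pattern removal can be reproduced in outline. The ARC values below are hypothetical placeholders (n=5, as in the study), not the paper's data; the t statistic is computed directly from the paired differences:

```python
import numpy as np

def paired_t(x, y):
    """Two-sided paired t-test statistic and degrees of freedom.
    x, y: paired measurements (e.g., ARC before/after pattern removal).
    A p-value lookup additionally needs a t-distribution CDF."""
    d = np.asarray(y, float) - np.asarray(x, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical ARC values (pixels) for n=5 videos -- not the paper's data.
before = np.array([4.1e5, 3.8e5, 5.2e5, 4.4e5, 3.9e5])
after  = np.array([4.9e5, 4.6e5, 5.9e5, 5.3e5, 4.4e5])
t, dof = paired_t(before, after)
```

With 4 degrees of freedom, |t| above the 2.776 critical value corresponds to p < 0.05 for a two-sided test.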

Conclusions: This algorithm for pattern removal increases bladder coverage in 3D reconstructions and automates mask generation and application, which could aid implementation in time-starved clinical environments. The creation and use of 3D reconstructions can improve documentation of cystoscopic findings for future surgical navigation, thus improving patient treatment and outcomes.

Citations: 0
Networking Science and Technology: Highlights from JMI Issue 3.
IF 1.9 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-06-26 | DOI: 10.1117/1.JMI.11.3.030101
Bennett Landman

The editorial introduces JMI Issue 3 Volume 11, looks ahead to SPIE Medical Imaging, and highlights the journal's policy on conference article submission.

Citations: 0
Can processed images be used to determine the modulation transfer function and detective quantum efficiency?
IF 2.4 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-31 | DOI: 10.1117/1.JMI.11.3.033502
Lisa M Garland, Haechan J Yang, Paul A Picot, Jesse Tanguay, Ian A Cunningham

Purpose: The modulation transfer function (MTF) and detective quantum efficiency (DQE) of x-ray detectors are key Fourier metrics of performance, valid only for linear and shift-invariant (LSI) systems and generally measured following IEC guidelines requiring the use of raw (unprocessed) image data. However, many detectors incorporate processing in the imaging chain that is difficult or impossible to disable, raising questions about the practical relevance of MTF and DQE testing. We investigate the impact of convolution-based embedded processing on MTF and DQE measurements.

Approach: We use an impulse-sampled notation, consistent with a cascaded-systems analysis in spatial and spatial-frequency domains to determine the impact of discrete convolution (DC) on measured MTF and DQE following IEC guidelines.

Results: We show that digital systems remain LSI if we acknowledge that both image pixel values and convolution kernels represent scaled Dirac δ-functions with an implied sinc convolution of image data. This enables use of the Fourier transform (FT) to determine the impact on presampling MTF and DQE measurements.

Conclusions: It is concluded that: (i) the MTF of DC is always an unbounded cosine series; (ii) the slanted-edge method yields the true presampling MTF, even when using processed images, with processing appearing as an analytic filter with cosine-series MTF applied to raw presampling image data; (iii) the DQE is unaffected by discrete-convolution-based processing with a possible exception near zero-points in the presampling MTF; and (iv) the FT of the impulse-sampled notation is equivalent to the Z transform of image data.
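Conclusion (i) can be checked numerically for a toy case: the transfer function of a symmetric discrete kernel is a cosine series in spatial frequency, periodic in 1/Δ and therefore not bounded by the sampling cutoff. The 3-tap kernel and 0.1 mm pixel pitch below are assumptions for illustration only:

```python
import numpy as np

# A symmetric 3-tap smoothing kernel (toy stand-in for embedded processing).
kernel = np.array([0.25, 0.5, 0.25])
dx = 0.1  # pixel pitch in mm (assumed)

def dc_mtf(f, kernel, dx):
    """Transfer function of discrete convolution, evaluated as a cosine
    series in frequency f (cycles/mm). For a symmetric, normalized kernel
    this is real and periodic in f with period 1/dx -- an 'unbounded'
    cosine series, exactly as conclusion (i) states."""
    n = np.arange(len(kernel)) - len(kernel) // 2   # tap offsets ...,-1,0,1,...
    return np.abs(np.sum(kernel[None, :] *
                         np.cos(2 * np.pi * np.outer(f, n) * dx), axis=1))

f = np.linspace(0, 2 / dx, 201)   # two full periods of the cosine series
mtf = dc_mtf(f, kernel, dx)
```

For [0.25, 0.5, 0.25] this evaluates to 0.5 + 0.5·cos(2πfΔ): unity at f = 0, zero at the sampling cutoff 1/(2Δ), and repeating every 1/Δ.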

Citations: 0
Automatic lesion detection for narrow-band imaging bronchoscopy.
IF 1.9 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-30 | DOI: 10.1117/1.JMI.11.3.036002
Vahid Daneshpajooh, Danish Ahmad, Jennifer Toth, Rebecca Bascom, William E Higgins

Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution.

Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions.
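The tracking half of the second stage can be sketched with a textbook constant-velocity Kalman filter over detected lesion centers. This is an illustrative stand-in only: the state model, noise covariances, and simulated detections below are assumptions, and the paper's method additionally relies on a Siamese network for appearance matching before final lesion decisions:

```python
import numpy as np

# Constant-velocity Kalman filter for a lesion's (x, y) center across frames.
F = np.array([[1, 0, 1, 0],      # state: [x, y, vx, vy]; dt = 1 frame
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],      # only position is observed (the detector box)
              [0, 1, 0, 0]], float)
Q = 1e-2 * np.eye(4)             # process noise (assumed)
R = 1.0 * np.eye(2)              # detection noise (assumed)

x = np.zeros(4)                  # initial state
P = 10.0 * np.eye(4)             # initial uncertainty

def kf_step(x, P, z):
    # Predict one frame ahead.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the frame's detection z = (cx, cy).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a straight-line track of noisy simulated detections.
rng = np.random.default_rng(0)
for t in range(20):
    z = np.array([5.0 + 2.0 * t, 10.0 + 1.0 * t]) + rng.normal(0, 0.5, 2)
    x, P = kf_step(x, P, z)
```

Tracking across frames lets transient false positives from the per-frame detector be discarded, which is the rationale for the two-stage design.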

Results: Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach.

Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.

Citations: 0
Computerized assessment of background parenchymal enhancement on breast dynamic contrast-enhanced-MRI including electronic lesion removal.
IF 2.4 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-02 | DOI: 10.1117/1.JMI.11.3.034501
Lindsay Douglas, Jordan Fuhrman, Qiyuan Hu, Alexandra Edwards, Deepa Sheth, Hiroyuki Abe, Maryellen Giger

Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating BPE estimation due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels.

Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both affected and unaffected breasts before and after lesion removal. BPE scores were calculated from various projection images, including MIPs or average intensity projections of first- or second-postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of computed scores in BPE level classification tasks relative to radiologist ratings.
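The projection-and-scoring idea can be sketched on a toy subtraction volume. Here a simple intensity threshold stands in for the fuzzy c-means lesion segmentation, and the BPE-like score is just the mean of the projected enhancement; the volume shapes and intensities are invented for illustration:

```python
import numpy as np

# Toy DCE-MRI volumes (z, y, x): pre- and post-contrast.
rng = np.random.default_rng(1)
pre  = rng.uniform(0.0, 0.2, size=(8, 32, 32))
post = pre + 0.3                           # uniform parenchymal enhancement
post[2:4, 10:14, 10:14] += 2.0             # a bright "lesion"

sub = post - pre                           # first postcontrast subtraction
lesion = sub > 1.0                         # stand-in for fuzzy c-means output
sub_nolesion = np.where(lesion, 0.0, sub)  # electronically remove the lesion

# Maximum intensity projection along the through-slice axis.
mip_with    = sub.max(axis=0)
mip_without = sub_nolesion.max(axis=0)

# A simple BPE-like score: mean enhancement over the projected breast.
score_with = mip_with.mean()
score_without = mip_without.mean()
```

Removing the lesion before projecting keeps the bright tumor voxels from inflating the score, which is the motivation for electronic lesion removal in BPE assessment.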

Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p<0.001). Scores from all breast regions performed significantly better than guessing (p<0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second-postcontrast subtraction MIP after lesion removal performed significantly better than random guessing across various viewing projections and DCE time points.

Conclusions: Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MR without the influence of lesion enhancement.

Citations: 0
Multiresolution semantic segmentation of biological structures in digital histopathology.
IF 2.4 | Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-05-01 | Epub Date: 2024-05-09 | DOI: 10.1117/1.JMI.11.3.037501
Sina Salsabili, Adrian D C Chan, Eranga Ukwatta

Purpose: Semantic segmentation in high-resolution histopathology whole slide images (WSIs) is an important fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach, which is capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs.

Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract the contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the extracted information in the first stage and generate the final segmentation masks.
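The field-of-view/resolution trade-off handled by the first stage can be sketched in a few lines of numpy: patches with progressively wider context around the same location are block-averaged down to one fixed input size, so every resolution level delivers the same pixel budget. The patch sizes and the block-averaging resize are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def multires_patches(image, center, base=32, levels=4):
    """Extract `levels` patches around `center`, each doubling the
    field-of-view, all resized (by block-averaging) to base x base."""
    cy, cx = center
    patches = []
    for k in range(levels):
        half = base * (2 ** k) // 2      # growing field-of-view
        win = image[cy - half:cy + half, cx - half:cx + half]
        f = 2 ** k                       # downsampling factor
        # block-average to base x base: coarser resolution, wider context
        win = win.reshape(base, f, base, f).mean(axis=(1, 3))
        patches.append(win)
    return patches

rng = np.random.default_rng(0)
wsi = rng.random((512, 512))             # stand-in for a WSI tile
pyramid = multires_patches(wsi, center=(256, 256))
print([p.shape for p in pyramid])        # four patches of identical shape
```

In the actual method, each of these patches would feed one of the four stage-one CNNs, and the stage-two CNN would fuse their outputs into the final mask.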

Results: The proposed method reported 95.6%, 92.5%, and 97.1% in our single-class placenta dataset and 97.1%, 87.3%, and 83.3% in our multiclass lung dataset for pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value, respectively.
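The three reported metrics can be computed from a predicted and a reference binary mask as follows (a minimal sketch with toy masks; the percentages above come from the paper's datasets, not from this code):

```python
import numpy as np

def pixel_metrics(pred, ref):
    """Pixel-wise accuracy, Dice similarity coefficient, and positive
    predictive value for a pair of binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)          # true positives
    fp = np.sum(pred & ~ref)         # false positives
    fn = np.sum(~pred & ref)         # false negatives
    tn = np.sum(~pred & ~ref)        # true negatives
    accuracy = (tp + tn) / pred.size
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    ppv = tp / (tp + fp) if tp + fp else 1.0
    return accuracy, dice, ppv

pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0]])
ref = np.array([[1, 1, 0, 0], [0, 1, 0, 0]])
print(pixel_metrics(pred, ref))
```

For the multiclass lung dataset, these quantities would be averaged over classes, giving the mean Dice and mean PPV quoted above.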

Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our study can potentially be used in automated analysis of biological structures, facilitating the clinical research in histopathology applications.

Graph neural networks for automatic extraction and labeling of the coronary artery tree in CT angiography.
IF 2.4 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-05-01 Epub Date: 2024-05-15 DOI: 10.1117/1.JMI.11.3.034001
Nils Hampe, Sanne G M van Velzen, Jelmer M Wolterink, Carlos Collet, José P S Henriques, Nils Planken, Ivana Išgum

Purpose: Automatic comprehensive reporting of coronary artery disease (CAD) requires anatomical localization of the coronary artery pathologies. To address this, we propose a fully automatic method for extraction and anatomical labeling of the coronary artery tree using deep learning.

Approach: We include coronary CT angiography (CCTA) scans of 104 patients from two hospitals. Reference annotations of coronary artery tree centerlines and labels of coronary artery segments were assigned to 10 segment classes following the American Heart Association guidelines. Our automatic method first extracts the coronary artery tree from CCTA, automatically placing a large number of seed points and simultaneously tracking vessel-like structures from these points. Thereafter, the extracted tree is refined to retain coronary arteries only, which are subsequently labeled with a multi-resolution ensemble of graph convolutional neural networks that combine geometrical and image intensity information from adjacent segments.
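The core operation of such a graph convolutional network, mixing each segment's features with those of its neighbors over the tree's adjacency structure, can be sketched as a single Kipf-and-Welling-style layer. This is a generic numpy sketch with a toy four-segment tree, not the authors' multi-resolution ensemble:

```python
import numpy as np

def gcn_layer(features, adjacency, weights):
    """One graph-convolution step: symmetrically normalized neighborhood
    averaging, a linear map, then a ReLU nonlinearity."""
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy coronary tree: chain of segments 0-1-2 with a branch 1-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)

rng = np.random.default_rng(0)
x = rng.random((4, 8))          # per-segment geometry/intensity features
w = rng.random((8, 10)) - 0.5   # layer weights (random here, learned in practice)
out = gcn_layer(x, adj, w)
print(out.shape)
```

Stacking such layers lets label evidence propagate along the extracted tree, which is what allows adjacent segments to inform each segment's class.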

Results: The method is evaluated on its ability to extract the coronary tree and to label its segments, by comparing the automatically derived and the reference labels. A separate assessment of tree extraction yielded an F1 score of 0.85. Evaluation of our combined method leads to an average F1 score of 0.74.

Conclusions: The results demonstrate that our method enables fully automatic extraction and anatomical labeling of coronary artery trees from CCTA scans. Therefore, it has the potential to facilitate detailed automatic reporting of CAD.

Adaptive continuation based smooth l0-norm approximation for compressed sensing MR image reconstruction.
IF 2.4 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-05-01 Epub Date: 2024-05-31 DOI: 10.1117/1.JMI.11.3.035003
Sumit Datta, Joseph Suresh Paul

Purpose: There are a number of algorithms for smooth l0-norm (SL0) approximation. In most of the cases, sparsity level of the reconstructed signal is controlled by using a decreasing sequence of the modulation parameter values. However, predefined decreasing sequences of the modulation parameter values cannot produce optimal sparsity or best reconstruction performance, because the best choice of the parameter values is often data-dependent and dynamically changes in each iteration.

Approach: We propose an adaptive compressed sensing magnetic resonance image reconstruction using the SL0 approximation method. The SL0 approach typically involves one-step gradient descent of the SL0 approximating function parameterized with a modulation parameter, followed by a projection step onto the feasible solution set. Since the best choice of the parameter values is often data-dependent and dynamically changes in each iteration, it is preferable to adaptively control the rate of decrease of the parameter values. In order to achieve this, we solve two subproblems in an alternating manner. One is a sparse regularization-based subproblem, which is solved with a precomputed value of the parameter, and the second subproblem is the estimation of the parameter itself using a root finding technique.
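For context, the basic (non-adaptive) SL0 iteration alternates a gradient step on the Gaussian l0 surrogate with a projection back onto the feasible set {s : As = x}, while the modulation parameter σ decreases on a fixed geometric schedule. The paper's contribution is to replace that fixed schedule with a root-finding-based adaptive estimate, which this sketch deliberately does not implement:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, inner=3):
    """Basic smoothed-l0 recovery of a sparse s satisfying A s = x.

    Maximizes sum_i exp(-s_i^2 / (2 sigma^2)) over {s : A s = x}
    for a fixed decreasing sequence of sigma values."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step on the smooth surrogate ...
            delta = s * np.exp(-s ** 2 / (2 * sigma ** 2))
            s = s - mu * delta
            # ... followed by projection onto {s : A s = x}
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decrease           # fixed schedule (non-adaptive)
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 64))         # underdetermined sensing matrix
s_true = np.zeros(64)
s_true[[3, 17, 42]] = [1.5, -2.0, 0.8]    # 3-sparse ground truth
x = A @ s_true
s_hat = sl0(A, x)
print(np.linalg.norm(s_hat - s_true))
```

In the adaptive variant described above, the precomputed σ used by the sparse-regularization subproblem would itself be re-estimated each iteration by a root-finding step instead of the `sigma *= sigma_decrease` line.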

Results: The advantage of this approach in terms of speed and accuracy is illustrated using a compressed sensing magnetic resonance image reconstruction problem and compared with constant scale factor continuation based SL0-norm and adaptive continuation based l1-norm minimization approaches. The proposed adaptive estimation is found to be at least twofold faster than the automated parameter estimation based iterative shrinkage-thresholding algorithm in terms of CPU time, with an average improvement in reconstruction performance of 15% in terms of normalized mean squared error.

Conclusions: An adaptive continuation-based SL0 algorithm is presented, with a potential application to compressed sensing (CS)-based MR image reconstruction. It is a data-dependent adaptive continuation method and eliminates the problem of searching for appropriate constant scale factor values to be used in the CS reconstruction of different types of MRI data.

Perceptual thresholds for differences in CT noise texture.
IF 2.4 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-05-01 Epub Date: 2024-05-09 DOI: 10.1117/1.JMI.11.3.035501
Luuk J Oostveen, Kirsten Boedeker, Daniel Shin, Craig K Abbey, Ioannis Sechopoulos
Purpose: The average (f_av) or peak (f_peak) noise power spectrum (NPS) frequency is often used as a one-parameter descriptor of the CT noise texture. Our study develops a more complete two-parameter model of the CT NPS and investigates the sensitivity of human observers to changes in it.

Approach: A model of CT NPS was created based on its f_peak and a half-Gaussian fit (σ) to the downslope. Two-alternative forced-choice staircase studies were used to determine perceptual thresholds for noise texture, defined as parameter differences with a predetermined level of discrimination performance (80% correct). Five imaging scientist observers performed the forced-choice studies for eight directions in the f_peak/σ-space, for two reference NPSs (corresponding to body and lung kernels). The experiment was repeated with 32 radiologists, each evaluating a single direction in the f_peak/σ-space. NPS differences were quantified by the noise texture contrast (C_texture), the integral of the absolute NPS difference.

Results: The two-parameter NPS model was found to be a good representation of various clinical CT reconstructions. Perception thresholds for f_peak alone are 0.2 lp/cm for body and 0.4 lp/cm for lung NPSs. For σ, these values are 0.15 and 2 lp/cm, respectively. Thresholds change if the other parameter also changes. Different NPSs with the same f_peak or f_av can be discriminated. Nonradiologist observers did not need more C_texture than radiologists.

Conclusions: f_peak or f_av is insufficient to describe noise texture completely. The discrimination of noise texture changes depending on its frequency content. Radiologists do not discriminate noise texture changes better than nonradiologists.
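A two-parameter NPS of this kind can be sketched numerically: the curve rises to its peak frequency and then decays as a half-Gaussian of width σ. The shape of the rise below is an assumption for illustration, since the abstract specifies only the two parameters:

```python
import numpy as np

def nps_model(f, f_peak, sigma):
    """Two-parameter NPS: linear rise to f_peak, half-Gaussian downslope."""
    rising = f / f_peak                                     # f <= f_peak
    falling = np.exp(-(f - f_peak) ** 2 / (2 * sigma ** 2)) # f > f_peak
    return np.where(f <= f_peak, rising, falling)

f = np.linspace(0, 10, 1001)          # spatial frequency axis (lp/cm)
nps = nps_model(f, f_peak=3.0, sigma=1.5)

# NPS-weighted mean frequency, the one-parameter descriptor f_av
f_av = np.sum(f * nps) / np.sum(nps)
print(f[np.argmax(nps)], round(f_av, 2))
```

Because f_av depends on both parameters, distinct (f_peak, σ) pairs can share the same f_av yet produce visibly different textures, which is the discriminability the study measures.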