Pub Date: 2024-05-01 | Epub Date: 2024-05-14 | DOI: 10.1117/1.JMI.11.3.036001
Christian Herz, Nicolas Vergnet, Sijie Tian, Abdullah H Aly, Matthew A Jolley, Nathanael Tran, Gabriel Arenas, Andras Lasso, Nadav Schwartz, Kathleen E O'Neill, Paul A Yushkevich, Alison M Pouch
Purpose: Deformable medial modeling is an inverse skeletonization approach to representing anatomy in medical images, which can be used for statistical shape analysis and assessment of patient-specific anatomical features such as locally varying thickness. It involves deforming a pre-defined synthetic skeleton, or template, to anatomical structures of the same class. The lack of software for creating such skeletons has been a limitation to more widespread use of deformable medial modeling. Therefore, the objective of this work is to present an open-source user interface (UI) for the creation of synthetic skeletons for a range of medial modeling applications in medical imaging.
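As a toy illustration of the medial (skeleton-plus-radius) representation that deformable medial modeling inverts, the sketch below reconstructs the two boundary sheets and locally varying thickness from a synthetic 2D skeleton. All names and values are illustrative; this is not the paper's software.

```python
import numpy as np

# Toy 2D medial model: a skeleton curve m(t) with radius r(t).
# "Inverse skeletonization" recovers the two boundary sheets
# b±(t) = m(t) ± r(t) * n(t), with n the unit normal to the skeleton.
t = np.linspace(0.0, 1.0, 50)
medial = np.stack([t, 0.2 * np.sin(2 * np.pi * t)], axis=1)  # skeleton points
radius = 0.05 + 0.03 * t                                     # local half-thickness

# Unit tangents by finite differences, normals by 90-degree rotation.
tangent = np.gradient(medial, axis=0)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)

boundary_plus = medial + radius[:, None] * normal
boundary_minus = medial - radius[:, None] * normal

# Locally varying thickness is the distance between the two sheets.
thickness = np.linalg.norm(boundary_plus - boundary_minus, axis=1)
```

Fitting such a template to a segmentation is what lets thickness and shape statistics be read directly off the deformed skeleton.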
Approach: A UI for interactive design of synthetic skeletons was implemented in 3D Slicer, an open-source medical image analysis application. The steps in synthetic skeleton design include importation and skeletonization of a 3D segmentation, followed by interactive 3D point placement and triangulation of the medial surface such that the desired branching configuration of the anatomical structure's medial axis is achieved. Synthetic skeleton design was evaluated in five clinical applications. Compatibility of the synthetic skeletons with open-source software for deformable medial modeling was tested, and representational accuracy of the deformed medial models was evaluated.
Results: Three users designed synthetic skeletons of anatomies with various topologies: the placenta, aortic root wall, mitral valve, cardiac ventricles, and the uterus. The skeletons were compatible with skeleton-first and boundary-first software for deformable medial modeling. The fitted medial models achieved good representational accuracy with respect to the 3D segmentations from which the synthetic skeletons were generated.
Conclusions: Synthetic skeleton design has been a practical challenge in leveraging deformable medial modeling for new clinical applications. This work demonstrates an open-source UI for user-friendly design of synthetic skeletons for anatomies with a wide range of topologies.
Open-source graphical user interface for the creation of synthetic skeletons for medical image analysis. Journal of Medical Imaging, 11(3), 036001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092146/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-05-17 | DOI: 10.1117/1.JMI.11.3.034002
Rachel Eimen, Halina Krzyzanowska, Kristen R Scarpato, Audrey K Bowden
Purpose: In the current clinical standard of care, cystoscopic video is not routinely saved because it is cumbersome to review. Instead, clinicians rely on brief procedure notes and still frames to manage bladder pathology. Preserving discarded data via 3D reconstructions, which are convenient to review, has the potential to improve patient care. However, many clinical videos are collected by fiberscopes, which are lower cost but induce a pattern on frames that inhibits 3D reconstruction. The aim of our study is to remove the honeycomb-like pattern present in fiberscope-based cystoscopy videos to improve the quality of 3D bladder reconstructions.
Approach: Our study introduces an algorithm that applies a notch filtering mask in the Fourier domain to remove the honeycomb-like pattern from clinical cystoscopy videos collected by fiberscope as a preprocessing step to 3D reconstruction. We produce 3D reconstructions from the video before and after removing the pattern, which we compare using a metric termed the area of reconstruction coverage (A_RC), defined as the surface area (in pixels) of the reconstructed bladder. All statistical analyses use paired t-tests.
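A minimal sketch of this kind of Fourier-domain notch filtering, on a synthetic frame with a periodic artifact at a known frequency (the paper's mask generation is automated; this example hard-codes the notch location):

```python
import numpy as np

# Sketch: remove a periodic (honeycomb-like) artifact by notch filtering
# in the Fourier domain. The artifact frequency is hard-coded here;
# the paper's method generates the mask automatically.
rng = np.random.default_rng(0)
h, w = 128, 128
scene = 1.0 + rng.normal(0.0, 0.1, (h, w))          # underlying frame content
pattern = 0.5 * np.cos(2 * np.pi * 20 * np.arange(w) / w)
frame = scene + pattern[None, :]                     # corrupted video frame

F = np.fft.fftshift(np.fft.fft2(frame))
mask = np.ones((h, w))
cy, cx = h // 2, w // 2
for dx in (20, -20):                                 # conjugate artifact peaks
    mask[cy - 2:cy + 3, cx + dx - 2:cx + dx + 3] = 0.0
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Zeroing only narrow notches around the artifact peaks suppresses the pattern while leaving the rest of the spectrum, and hence the anatomy, essentially untouched.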
Results: Preprocessing using our method for pattern removal enabled reconstruction for all (n=5) cystoscopy videos included in the study and produced a statistically significant increase in bladder coverage (p=0.018).
Conclusions: This algorithm for pattern removal increases bladder coverage in 3D reconstructions and automates mask generation and application, which could aid implementation in time-starved clinical environments. The creation and use of 3D reconstructions can improve documentation of cystoscopic findings for future surgical navigation, thus improving patient treatment and outcomes.
Fiberscopic pattern removal for optimal coverage in 3D bladder reconstructions of fiberscope cystoscopy videos. Journal of Medical Imaging, 11(3), 034002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11099938/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-06-26 | DOI: 10.1117/1.JMI.11.3.030101
Bennett Landman
The editorial introduces JMI Issue 3 Volume 11, looks ahead to SPIE Medical Imaging, and highlights the journal's policy on conference article submission.
Networking Science and Technology: Highlights from JMI Issue 3. Journal of Medical Imaging, 11(3), 030101. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11200196/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-05-31 | DOI: 10.1117/1.JMI.11.3.033502
Lisa M Garland, Haechan J Yang, Paul A Picot, Jesse Tanguay, Ian A Cunningham
Purpose: The modulation transfer function (MTF) and detective quantum efficiency (DQE) of x-ray detectors are key Fourier metrics of performance, valid only for linear and shift-invariant (LSI) systems and generally measured following IEC guidelines requiring the use of raw (unprocessed) image data. However, many detectors incorporate processing in the imaging chain that is difficult or impossible to disable, raising questions about the practical relevance of MTF and DQE testing. We investigate the impact of convolution-based embedded processing on MTF and DQE measurements.
Approach: We use an impulse-sampled notation, consistent with a cascaded-systems analysis in the spatial and spatial-frequency domains, to determine the impact of discrete convolution (DC) on measured MTF and DQE following IEC guidelines.
Results: We show that digital systems remain LSI if we acknowledge that both image pixel values and convolution kernels represent scaled Dirac δ-functions with an implied sinc convolution of image data. This enables use of the Fourier transform (FT) to determine the impact on presampling MTF and DQE measurements.
Conclusions: It is concluded that: (i) the MTF of DC is always an unbounded cosine series; (ii) the slanted-edge method yields the true presampling MTF, even when using processed images, with processing appearing as an analytic filter with cosine-series MTF applied to raw presampling image data; (iii) the DQE is unaffected by discrete-convolution-based processing with a possible exception near zero-points in the presampling MTF; and (iv) the FT of the impulse-sampled notation is equivalent to the Z transform of image data.
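Claim (i), that the MTF of a discrete convolution is a cosine series, can be checked numerically for a small symmetric kernel; the kernel and pixel pitch below are illustrative values, not the paper's:

```python
import numpy as np

# Numerical check of claim (i): the MTF of a discrete convolution is a
# cosine series. Kernel and pixel pitch are illustrative assumptions.
a = 0.1                                   # pixel pitch (mm), assumed
h = np.array([0.25, 0.5, 0.25])           # symmetric 3-tap kernel [h1, h0, h1]
u = np.linspace(0.0, 1.0 / (2 * a), 100)  # frequencies up to Nyquist

# Cosine-series form for this kernel: |h0 + 2*h1*cos(2*pi*u*a)|.
mtf_cos = np.abs(h[1] + 2 * h[0] * np.cos(2 * np.pi * u * a))

# Direct discrete-time Fourier transform of the kernel for comparison.
k = np.array([-1, 0, 1])
mtf_dtft = np.abs(np.exp(-2j * np.pi * u[:, None] * k * a) @ h)
```

The two forms agree term by term because pairing the conjugate exponentials of a symmetric kernel yields exactly the cosine terms; the series is periodic and hence unbounded in frequency, as stated.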
Can processed images be used to determine the modulation transfer function and detective quantum efficiency? Journal of Medical Imaging, 11(3), 033502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140480/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-05-30 | DOI: 10.1117/1.JMI.11.3.036002
Vahid Daneshpajooh, Danish Ahmad, Jennifer Toth, Rebecca Bascom, William E Higgins
Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution.
Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions.
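The second-stage temporal filtering can be illustrated with a bare constant-velocity Kalman filter tracking a lesion center across frames (1D for brevity; the Siamese appearance matching is omitted, and all matrices and measurements are synthetic stand-ins):

```python
import numpy as np

# Bare constant-velocity Kalman filter tracking a candidate lesion's
# center across frames. State is [position, velocity]; only position
# is observed. Values are synthetic, not from the paper.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (one frame step)
H = np.array([[1.0, 0.0]])              # observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([0.0, 0.0])                # initial state estimate
P = np.eye(2)                           # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2, 5.0]:     # noisy detections, ~1 px/frame
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                 # update with measurement z
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

Smoothing candidate trajectories this way suppresses single-frame false positives before the final lesion decision.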
Results: Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach.
Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.
Automatic lesion detection for narrow-band imaging bronchoscopy. Journal of Medical Imaging, 11(3), 036002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11138083/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-05-02 | DOI: 10.1117/1.JMI.11.3.034501
Lindsay Douglas, Jordan Fuhrman, Qiyuan Hu, Alexandra Edwards, Deepa Sheth, Hiroyuki Abe, Maryellen Giger
Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating BPE estimation due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels.
Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts, the affected breast, and the unaffected breast before and after lesion removal. BPE scores were calculated from various projection images, including MIPs or average intensity projections of first- or second postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of computed scores in BPE level classification tasks relative to radiologist ratings.
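The projection and electronic-lesion-removal steps can be sketched on synthetic data; the toy volume, lesion mask, and mean-enhancement score below are stand-ins, not the paper's pipeline:

```python
import numpy as np

# Sketch of MIP creation with electronic lesion removal on a synthetic
# postcontrast subtraction volume (slices, rows, cols).
rng = np.random.default_rng(1)
volume = rng.random((16, 64, 64))            # normal-tissue enhancement
lesion = np.zeros(volume.shape, dtype=bool)
lesion[6:10, 20:30, 20:30] = True            # toy lesion segmentation
volume[lesion] += 5.0                        # tumor enhances strongly

mip_with_lesion = volume.max(axis=0)         # MIP, lesion included
mip_removed = np.where(lesion, 0.0, volume).max(axis=0)

# Simple stand-in score: mean enhancement over the projection. Removing
# the lesion lowers it, i.e. the lesion was inflating the estimate.
score_with = mip_with_lesion.mean()
score_removed = mip_removed.mean()
```

This is the core intuition of the paper's design: scoring after lesion removal prevents tumor angiogenesis from inflating the BPE estimate.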
Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p<0.001). Scores from all breast regions performed significantly better than guessing (p<0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed statistically greater than random guessing across various viewing projections and DCE time points.
Conclusions: Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MR without the influence of lesion enhancement.
Computerized assessment of background parenchymal enhancement on breast dynamic contrast-enhanced-MRI including electronic lesion removal. Journal of Medical Imaging, 11(3), 034501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11086664/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-05-09 | DOI: 10.1117/1.JMI.11.3.037501
Sina Salsabili, Adrian D C Chan, Eranga Ukwatta
Purpose: Semantic segmentation in high-resolution histopathology whole slide images (WSIs) is a fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach, which is capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs.
Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract the contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the extracted information in the first stage and generate the final segmentation masks.
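The first-stage multiresolution input can be sketched as concentric fields of view around one center, each block-averaged down to a common patch size; the patch size, number of levels, and helper name are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

# Sketch of multiresolution patch extraction: level k covers a field of
# view of patch * 2**k pixels, downsampled to (patch, patch), giving one
# input per per-resolution network.
def extract_pyramid(image, center, patch=32, levels=4):
    cy, cx = center
    patches = []
    for k in range(levels):
        s = 2 ** k                        # downsampling factor at level k
        half = patch * s // 2             # field of view grows with level
        crop = image[cy - half:cy + half, cx - half:cx + half]
        down = crop.reshape(patch, s, patch, s).mean(axis=(1, 3))
        patches.append(down)
    return patches

rng = np.random.default_rng(2)
wsi_region = rng.random((512, 512))       # stand-in for a WSI region
pyramid = extract_pyramid(wsi_region, center=(256, 256))
```

Each level trades spatial resolution for context at fixed computational cost, which is the trade-off the two-stage design exploits.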
Results: The proposed method achieved a pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value of 95.6%, 92.5%, and 97.1%, respectively, on our single-class placenta dataset, and 97.1%, 87.3%, and 83.3%, respectively, on our multiclass lung dataset.
Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our study can potentially be used in automated analysis of biological structures, facilitating the clinical research in histopathology applications.
"Multiresolution semantic segmentation of biological structures in digital histopathology." Sina Salsabili, Adrian D C Chan, Eranga Ukwatta. Journal of Medical Imaging 11(3):037501. DOI: 10.1117/1.JMI.11.3.037501. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11086667/pdf/
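The core of the multiresolution idea above is extracting co-centered patches whose field of view doubles at each level while the pixel grid stays fixed, trading spatial resolution for context before a second-stage network aggregates them. Below is a minimal numpy sketch of that patch-extraction step under assumed parameters (`patch_px`, four levels, stride-based downsampling); the actual CNNs and aggregation stage are not reproduced here.

```python
import numpy as np

def multires_patches(wsi, center, patch_px=32, levels=4):
    """Extract co-centered patches with a doubling field of view.

    Each level covers twice the side length of the previous one but is
    downsampled back to the same patch_px x patch_px grid, which is the
    field-of-view / resolution trade-off the two-stage method exploits.
    """
    cy, cx = center
    patches = []
    for k in range(levels):
        half = (patch_px << k) // 2              # field of view grows as 2^k
        crop = wsi[cy - half:cy + half, cx - half:cx + half]
        # naive stride-based downsampling back to patch_px x patch_px
        patches.append(crop[::1 << k, ::1 << k])
    return patches
```

In the paper's pipeline each of the four patches would feed its own CNN, with a fifth CNN fusing the resulting context into the final segmentation mask; the stride-based downsampling here is only a stand-in for proper pyramid resampling.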
Pub Date: 2024-05-01. Epub Date: 2024-05-15. DOI: 10.1117/1.JMI.11.3.034001
Nils Hampe, Sanne G M van Velzen, Jelmer M Wolterink, Carlos Collet, José P S Henriques, Nils Planken, Ivana Išgum
Purpose: Automatic comprehensive reporting of coronary artery disease (CAD) requires anatomical localization of the coronary artery pathologies. To address this, we propose a fully automatic method for extraction and anatomical labeling of the coronary artery tree using deep learning.
Approach: We include coronary CT angiography (CCTA) scans of 104 patients from two hospitals. Reference annotations of coronary artery tree centerlines and labels of coronary artery segments were assigned to 10 segment classes following the American Heart Association guidelines. Our automatic method first extracts the coronary artery tree from CCTA by automatically placing a large number of seed points and simultaneously tracking vessel-like structures from these points. Thereafter, the extracted tree is refined to retain coronary arteries only, which are subsequently labeled with a multi-resolution ensemble of graph convolutional neural networks that combine geometrical and image intensity information from adjacent segments.
Results: The method is evaluated on its ability to extract the coronary tree and to label its segments, by comparing the automatically derived and the reference labels. A separate assessment of tree extraction yielded an F1 score of 0.85. Evaluation of our combined method leads to an average F1 score of 0.74.
Conclusions: The results demonstrate that our method enables fully automatic extraction and anatomical labeling of coronary artery trees from CCTA scans. Therefore, it has the potential to facilitate detailed automatic reporting of CAD.
"Graph neural networks for automatic extraction and labeling of the coronary artery tree in CT angiography." Nils Hampe, Sanne G M van Velzen, Jelmer M Wolterink, Carlos Collet, José P S Henriques, Nils Planken, Ivana Išgum. Journal of Medical Imaging 11(3):034001. DOI: 10.1117/1.JMI.11.3.034001. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095121/pdf/
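The labeling step above relies on graph convolutions over the extracted tree: each segment is a node carrying geometric and intensity descriptors, and features are exchanged along tree edges before classification into the 10 AHA segment classes. The following is a simplified single-layer sketch in numpy (mean aggregation with a self-loop, then a linear map and ReLU); the paper's actual multi-resolution ensemble architecture is not specified here and this is not a reproduction of it.

```python
import numpy as np

def gcn_layer(feats, adj, weight):
    """One graph-convolution step: each segment's feature vector is
    averaged with its tree neighbors' (self-loop included), then
    linearly transformed and passed through a ReLU -- a minimal form
    of the relational aggregation used to label coronary segments."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0   # +1 for the self loop
    agg = (feats + adj @ feats) / deg            # mean over the neighborhood
    return np.maximum(agg @ weight, 0.0)         # ReLU nonlinearity
```

Stacking a few such layers lets label evidence propagate along the centerline tree, which is how adjacent-segment context (e.g., a segment's parent branch) can disambiguate its AHA class.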
Pub Date: 2024-05-01. Epub Date: 2024-05-31. DOI: 10.1117/1.JMI.11.3.035003
Sumit Datta, Joseph Suresh Paul
Purpose: There are a number of algorithms for smooth l0-norm (SL0) approximation. In most cases, the sparsity level of the reconstructed signal is controlled by using a decreasing sequence of the modulation parameter values. However, predefined decreasing sequences of the modulation parameter values cannot produce optimal sparsity or best reconstruction performance, because the best choice of the parameter values is often data-dependent and dynamically changes in each iteration.
Approach: We propose an adaptive compressed sensing magnetic resonance image reconstruction using the SL0 approximation method. The SL0 approach typically involves one-step gradient descent of the SL0 approximating function parameterized with a modulation parameter, followed by a projection step onto the feasible solution set. Since the best choice of the parameter values is often data-dependent and dynamically changes in each iteration, it is preferable to adaptively control the rate of decrease of the parameter values. In order to achieve this, we solve two subproblems in an alternating manner. One is a sparse regularization-based subproblem, which is solved with a precomputed value of the parameter, and the second subproblem is the estimation of the parameter itself using a root finding technique.
Results: The advantage of this approach in terms of speed and accuracy is illustrated using a compressed sensing magnetic resonance image reconstruction problem and compared with constant-scale-factor continuation-based SL0 and adaptive continuation-based l1-norm minimization approaches. The proposed adaptive estimation is found to be at least twofold faster than the automated-parameter-estimation-based iterative shrinkage-thresholding algorithm in terms of CPU time, with an average improvement of 15% in reconstruction performance in terms of normalized mean squared error.
Conclusions: An adaptive continuation-based SL0 algorithm is presented, with a potential application to compressed sensing (CS)-based MR image reconstruction. It is a data-dependent adaptive continuation method and eliminates the problem of searching for appropriate constant scale factor values to be used in the CS reconstruction of different types of MRI data.
"Adaptive continuation based smooth l0-norm approximation for compressed sensing MR image reconstruction." Sumit Datta, Joseph Suresh Paul. Journal of Medical Imaging 11(3):035003. DOI: 10.1117/1.JMI.11.3.035003. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141015/pdf/
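To make the SL0 machinery above concrete: the smooth l0 surrogate replaces the count of nonzeros with a sum of Gaussian bumps controlled by the modulation parameter sigma, and each iteration takes one gradient step on that surrogate before projecting onto the data-consistency set. The sketch below shows the classic SL0 surrogate and one-step update (the standard formulation this paper builds on), not the authors' adaptive root-finding variant; the projection step is omitted.

```python
import numpy as np

def sl0_approx(x, sigma):
    """Smooth l0 surrogate: each term is ~0 for |x_i| << sigma and ~1 for
    |x_i| >> sigma, so the sum approximates the number of nonzeros as
    sigma -> 0 (the role of the decreasing modulation parameter)."""
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

def sl0_step(x, sigma, mu=1.0):
    """One gradient step on the SL0 surrogate with the classic sigma^2
    step scaling: entries much smaller than sigma are driven toward
    zero while large entries are left essentially unchanged."""
    grad = (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
    return x - mu * sigma**2 * grad
```

In a full CS-MRI loop this update alternates with a projection enforcing k-space data consistency while sigma is decreased; the paper's contribution is estimating that sigma schedule adaptively from the data instead of fixing it in advance.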
Pub Date: 2024-05-01. Epub Date: 2024-05-09. DOI: 10.1117/1.JMI.11.3.035501
Luuk J Oostveen, Kirsten Boedeker, Daniel Shin, Craig K Abbey, Ioannis Sechopoulos
Purpose: The average (f_av) or peak (f_peak) noise power spectrum (NPS) frequency is often used as a one-parameter descriptor of the CT noise texture. Our study develops a more complete two-parameter model of the CT NPS and investigates the sensitivity of human observers to changes in it.
Approach: A model of CT NPS was created based on its f_peak and a half-Gaussian fit (σ) to the downslope. Two-alternative forced-choice staircase studies were used to determine perceptual thresholds for noise texture, defined as parameter differences with a predetermined level of discrimination performance (80% correct). Five imaging scientist observers performed the forced-choice studies for eight directions in the f_peak/σ-space, for two reference NPSs (corresponding to body and lung kernels). The experiment was repeated with 32 radiologists, each evaluating a single direction in the f_peak/σ-space. NPS differences were quantified by the noise texture contrast (C_texture), the integral of the absolute NPS difference.
Results: The two-parameter NPS model was found to be a good representation of various clinical CT reconstructions. Perception thresholds for f_peak alone are 0.2 lp/cm for body and 0.4 lp/cm for lung NPSs. For σ, these values are 0.15 and 2 lp/cm, respectively. Thresholds change if the other parameter also changes. Different NPSs with the same f_peak or f_av can be discriminated. Nonradiologist observers did not need more C_texture than radiologists.
Conclusions: f_peak or f_av is insufficient to describe noise texture completely. The discrimination of noise texture changes depending on its frequency content. Radiologists do not discriminate noise texture changes better than nonradiologists.
"Perceptual thresholds for differences in CT noise texture." Luuk J Oostveen, Kirsten Boedeker, Daniel Shin, Craig K Abbey, Ioannis Sechopoulos. Journal of Medical Imaging 11(3):035501. DOI: 10.1117/1.JMI.11.3.035501. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11086665/pdf/