Optimizing mammography interpretation education: leveraging deep learning for cohort-specific error detection to enhance radiologist training
Pub Date: 2024-09-01 | Epub Date: 2024-10-03 | DOI: 10.1117/1.JMI.11.5.055502
Xuetong Tao, Warren M Reed, Tong Li, Patrick C Brennan, Ziba Gandomkar
Purpose: Accurate interpretation of mammograms presents challenges, and tailoring mammography training to reader profiles is a promising strategy for reducing interpretation errors. This proof-of-concept study investigated the feasibility of employing convolutional neural networks (CNNs) with transfer learning to categorize regions associated with false-positive (FP) errors within screening mammograms into categories of "low" or "high" likelihood of being a false-positive detection for radiologists sharing similar geographic characteristics.
Approach: Mammography test sets assessed by two geographically distant cohorts of radiologists (cohorts A and B) were collected. FP patches within these mammograms were segmented and categorized as "difficult" or "easy" based on the number of readers committing FP errors. Patches outside 1.5 times the interquartile range above the upper quartile were labeled as difficult, whereas the remaining patches were labeled as easy. Using transfer learning, a patch-wise CNN model for binary patch classification was developed utilizing ResNet as the feature extractor, with modified fully connected layers for the target task. Model performance was assessed using 10-fold cross-validation.
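The difficulty labeling described above is standard Tukey outlier thresholding on the per-patch reader FP counts. A minimal sketch of that rule, using hypothetical counts, might look like this:

```python
import numpy as np

def label_patches(fp_counts):
    """Label FP patches as difficult (1) or easy (0) via the 1.5 x IQR rule.

    A patch is 'difficult' when the number of readers committing an FP
    error on it exceeds Q3 + 1.5 * IQR, as described in the abstract.
    """
    q1, q3 = np.percentile(fp_counts, [25, 75])
    threshold = q3 + 1.5 * (q3 - q1)
    return (np.asarray(fp_counts) > threshold).astype(int)

# Hypothetical reader FP counts for eight patches
counts = [0, 1, 1, 2, 2, 3, 2, 15]
print(label_patches(counts))  # [0 0 0 0 0 0 0 1]; only the last patch is 'difficult'
```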
Results: Compared with other architectures, the transferred ResNet-50 achieved the highest performance, obtaining receiver operating characteristic area under the curve values of 0.933 (±0.012) and 0.975 (±0.011) on the validation sets for cohorts A and B, respectively.
Conclusions: The findings highlight the feasibility of employing CNN-based transfer learning to predict the difficulty levels of local FP patches in screening mammograms for a specific radiologist cohort with similar geographic characteristics.
{"title":"Optimizing mammography interpretation education: leveraging deep learning for cohort-specific error detection to enhance radiologist training.","authors":"Xuetong Tao, Warren M Reed, Tong Li, Patrick C Brennan, Ziba Gandomkar","doi":"10.1117/1.JMI.11.5.055502","DOIUrl":"10.1117/1.JMI.11.5.055502","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate interpretation of mammograms presents challenges. Tailoring mammography training to reader profiles holds the promise of an effective strategy to reduce these errors. This proof-of-concept study investigated the feasibility of employing convolutional neural networks (CNNs) with transfer learning to categorize regions associated with false-positive (FP) errors within screening mammograms into categories of \"low\" or \"high\" likelihood of being a false-positive detection for radiologists sharing similar geographic characteristics.</p><p><strong>Approach: </strong>Mammography test sets assessed by two geographically distant cohorts of radiologists (cohorts A and B) were collected. FP patches within these mammograms were segmented and categorized as \"difficult\" or \"easy\" based on the number of readers committing FP errors. Patches outside 1.5 times the interquartile range above the upper quartile were labeled as difficult, whereas the remaining patches were labeled as easy. Using transfer learning, a patch-wise CNN model for binary patch classification was developed utilizing ResNet as the feature extractor, with modified fully connected layers for the target task. Model performance was assessed using 10-fold cross-validation.</p><p><strong>Results: </strong>Compared with other architectures, the transferred ResNet-50 achieved the highest performance, obtaining receiver operating characteristics area under the curve values of 0.933 ( <math><mrow><mo>±</mo> <mn>0.012</mn></mrow> </math> ) and 0.975 ( <math><mrow><mo>±</mo> <mn>0.011</mn></mrow> </math> ) on the validation sets for cohorts A and B, respectively.</p><p><strong>Conclusions: </strong>The findings highlight the feasibility of employing CNN-based transfer learning to predict the difficulty levels of local FP patches in screening mammograms for specific radiologist cohort with similar geographic characteristics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"055502"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11447382/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning-based analysis of multi-parametric magnetic resonance imaging
Pub Date: 2024-09-01 | Epub Date: 2024-08-30 | DOI: 10.1117/1.JMI.11.5.054001
Sunwoo Kwak, Hamed Akbari, Jose A Garcia, Suyash Mohan, Yehuda Dicker, Chiharu Sako, Yuji Matsumoto, MacLean P Nasrallah, Mahmoud Shalaby, Donald M O'Rourke, Russel T Shinohara, Fang Liu, Chaitra Badve, Jill S Barnholtz-Sloan, Andrew E Sloan, Matthew Lee, Rajan Jain, Santiago Cepeda, Arnab Chakravarti, Joshua D Palmer, Adam P Dicker, Gaurav Shukla, Adam E Flanders, Wenyin Shi, Graeme F Woodworth, Christos Davatzikos
Purpose: Glioblastoma (GBM) is the most common and aggressive primary adult brain tumor. The standard treatment approach is surgical resection to target the enhancing tumor mass, followed by adjuvant chemoradiotherapy. However, malignant cells often extend beyond the enhancing tumor boundaries and infiltrate the peritumoral edema. Traditional supervised machine learning techniques hold potential in predicting tumor infiltration extent but are hindered by the extensive resources needed to generate expertly delineated regions of interest (ROIs) for training models on tissue most and least likely to be infiltrated.
Approach: We developed a method combining expert knowledge and training-based data augmentation to automatically generate numerous training examples, enhancing the accuracy of our model for predicting tumor infiltration through predictive maps. Such maps can be used for targeted supra-total surgical resection and other therapies that might benefit from intensive yet well-targeted treatment of infiltrated tissue. We applied our method to preoperative multi-parametric magnetic resonance imaging (mpMRI) scans from a subset of 229 patients of a multi-institutional consortium (Radiomics Signatures for Precision Diagnostics) and tested the model on subsequent scans with pathology-proven recurrence.
Results: Leave-one-site-out cross-validation was used to train and evaluate the tumor infiltration prediction model using initial pre-surgical scans, comparing the generated prediction maps with follow-up mpMRI scans confirming recurrence through post-resection tissue analysis. Performance was measured by voxel-wise odds ratios (ORs) across six institutions: University of Pennsylvania (OR: 9.97), Ohio State University (OR: 14.03), Case Western Reserve University (OR: 8.13), New York University (OR: 16.43), Thomas Jefferson University (OR: 8.22), and Rio Hortega (OR: 19.48).
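Leave-one-site-out cross-validation holds out each institution in turn, so every evaluation is on a center the model never saw during training. A schematic sketch using scikit-learn's LeaveOneGroupOut, with placeholder arrays standing in for the consortium data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Placeholder stand-ins: X holds per-voxel mpMRI feature vectors, y the
# recurrence labels, and `site` the institution each sample came from.
rng = np.random.default_rng(0)
X = rng.random((600, 5))
y = rng.integers(0, 2, 600)
site = rng.choice(["Penn", "OSU", "CWRU", "NYU", "TJU", "RioHortega"], 600)

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site):
    # Fit the infiltration model on five sites, evaluate on the held-out one;
    # per-site odds ratios like those in the abstract come from such splits.
    print(f"held out {site[test_idx][0]}: train n={len(train_idx)}, test n={len(test_idx)}")
```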
Conclusions: The proposed model demonstrates that mpMRI analysis using deep learning can predict infiltration in the peritumoral brain region for GBM patients without needing to train a model using expert ROI drawings. Results for each institution demonstrate the model's generalizability and reproducibility.
{"title":"Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning-based analysis of multi-parametric magnetic resonance imaging.","authors":"Sunwoo Kwak, Hamed Akbari, Jose A Garcia, Suyash Mohan, Yehuda Dicker, Chiharu Sako, Yuji Matsumoto, MacLean P Nasrallah, Mahmoud Shalaby, Donald M O'Rourke, Russel T Shinohara, Fang Liu, Chaitra Badve, Jill S Barnholtz-Sloan, Andrew E Sloan, Matthew Lee, Rajan Jain, Santiago Cepeda, Arnab Chakravarti, Joshua D Palmer, Adam P Dicker, Gaurav Shukla, Adam E Flanders, Wenyin Shi, Graeme F Woodworth, Christos Davatzikos","doi":"10.1117/1.JMI.11.5.054001","DOIUrl":"10.1117/1.JMI.11.5.054001","url":null,"abstract":"<p><strong>Purpose: </strong>Glioblastoma (GBM) is the most common and aggressive primary adult brain tumor. The standard treatment approach is surgical resection to target the enhancing tumor mass, followed by adjuvant chemoradiotherapy. However, malignant cells often extend beyond the enhancing tumor boundaries and infiltrate the peritumoral edema. Traditional supervised machine learning techniques hold potential in predicting tumor infiltration extent but are hindered by the extensive resources needed to generate expertly delineated regions of interest (ROIs) for training models on tissue most and least likely to be infiltrated.</p><p><strong>Approach: </strong>We developed a method combining expert knowledge and training-based data augmentation to automatically generate numerous training examples, enhancing the accuracy of our model for predicting tumor infiltration through predictive maps. Such maps can be used for targeted supra-total surgical resection and other therapies that might benefit from intensive yet well-targeted treatment of infiltrated tissue. We apply our method to preoperative multi-parametric magnetic resonance imaging (mpMRI) scans from a subset of 229 patients of a multi-institutional consortium (Radiomics Signatures for Precision Diagnostics) and test the model on subsequent scans with pathology-proven recurrence.</p><p><strong>Results: </strong>Leave-one-site-out cross-validation was used to train and evaluate the tumor infiltration prediction model using initial pre-surgical scans, comparing the generated prediction maps with follow-up mpMRI scans confirming recurrence through post-resection tissue analysis. Performance was measured by voxel-wised odds ratios (ORs) across six institutions: University of Pennsylvania (OR: 9.97), Ohio State University (OR: 14.03), Case Western Reserve University (OR: 8.13), New York University (OR: 16.43), Thomas Jefferson University (OR: 8.22), and Rio Hortega (OR: 19.48).</p><p><strong>Conclusions: </strong>The proposed model demonstrates that mpMRI analysis using deep learning can predict infiltration in the peri-tumoral brain region for GBM patients without needing to train a model using expert ROI drawings. 
Results for each institution demonstrate the model's generalizability and reproducibility.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054001"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning architecture for scatter estimation in cone-beam computed tomography head imaging with varying field-of-measurement settings
Pub Date: 2024-09-01 | Epub Date: 2024-10-15 | DOI: 10.1117/1.JMI.11.5.053501
Harshit Agrawal, Ari Hietanen, Simo Särkkä
Purpose: X-ray scatter causes considerable degradation of cone-beam computed tomography (CBCT) image quality. Deep learning-based methods have been shown to be effective for scatter estimation. Modern CBCT systems can scan a wide range of field-of-measurement (FOM) sizes, and variations in FOM size can cause a major shift in the scatter-to-primary ratio. However, the scatter estimation performance of deep learning networks has not been extensively evaluated under varying FOMs. Therefore, we train state-of-the-art scatter estimation neural networks for varying FOMs and develop a method that utilizes FOM size information to improve performance.
Approach: We used FOM size information as additional features by converting it into two channels and concatenating them to the encoder input of the networks. We compared our approach on a U-Net, Spline-Net, and DSE-Net by training each with and without the FOM information. We used a Monte Carlo-simulated dataset to train the networks on 18 FOM sizes and tested on 30 unseen FOM sizes. In addition, we evaluated the models on water phantoms and real clinical CBCT scans.
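The two-channel encoding can be realized by broadcasting the FOM dimensions into constant-valued image planes stacked onto each projection; the normalization below is an assumption, since the abstract only states that the size is converted into two channels:

```python
import torch

def add_fom_channels(projections, fom_width_mm, fom_height_mm, scale=500.0):
    """Append two constant channels encoding FOM width and height.

    `projections` is an (N, 1, H, W) batch of CBCT projections. Dividing by
    `scale` is an assumed normalization, not taken from the paper.
    """
    n, _, h, w = projections.shape
    wc = torch.full((n, 1, h, w), fom_width_mm / scale)
    hc = torch.full((n, 1, h, w), fom_height_mm / scale)
    return torch.cat([projections, wc, hc], dim=1)

x = torch.rand(2, 1, 256, 256)
print(add_fom_channels(x, 250.0, 200.0).shape)  # torch.Size([2, 3, 256, 256])
```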
Results: The simulation study demonstrates that our method reduced the average mean-absolute-percentage-error of scatter estimates in the 2D projection domain by 38% for U-Net, 40% for Spline-Net, and 33% for DSE-Net. Furthermore, the root-mean-square error on the 3D reconstructed volumes improved by 43% for U-Net, 30% for Spline-Net, and 23% for DSE-Net. Our method also improved contrast and image quality on real data, including the water phantom and clinical scans.
Conclusion: Providing additional information about FOM size improves the robustness of the neural networks for scatter estimation. Our approach is not limited to utilizing only FOM size information; more variables such as tube voltage, scanning geometry, and patient size can be added to improve the robustness of a single network.
{"title":"Deep learning architecture for scatter estimation in cone-beam computed tomography head imaging with varying field-of-measurement settings.","authors":"Harshit Agrawal, Ari Hietanen, Simo Särkkä","doi":"10.1117/1.JMI.11.5.053501","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.053501","url":null,"abstract":"<p><strong>Purpose: </strong>X-ray scatter causes considerable degradation in the cone-beam computed tomography (CBCT) image quality. To estimate the scatter, deep learning-based methods have been demonstrated to be effective. Modern CBCT systems can scan a wide range of field-of-measurement (FOM) sizes. Variations in the size of FOM can cause a major shift in the scatter-to-primary ratio in CBCT. However, the scatter estimation performance of deep learning networks has not been extensively evaluated under varying FOMs. Therefore, we train the state-of-the-art scatter estimation neural networks for varying FOMs and develop a method to utilize FOM size information to improve performance.</p><p><strong>Approach: </strong>We used FOM size information as additional features by converting it into two channels and then concatenating it to the encoder of the networks. We compared our approach for a U-Net, Spline-Net, and DSE-Net, by training them with and without the FOM information. We utilized a Monte Carlo-simulated dataset to train the networks on 18 FOM sizes and test on 30 unseen FOM sizes. In addition, we evaluated the models on the water phantoms and real clinical CBCT scans.</p><p><strong>Results: </strong>The simulation study demonstrates that our method reduced average mean-absolute-percentage-error for U-Net by 38%, Spline-Net by 40%, and DSE-net by 33% for the scatter estimation in the 2D projection domain. Furthermore, the root-mean-square error on the 3D reconstructed volumes was improved for U-Net by 43%, Spline-Net by 30%, and DSE-Net by 23%. Furthermore, our method improved contrast and image quality on real datasets such as water phantom and clinical data.</p><p><strong>Conclusion: </strong>Providing additional information about FOM size improves the robustness of the neural networks for scatter estimation. Our approach is not limited to utilizing only FOM size information; more variables such as tube voltage, scanning geometry, and patient size can be added to improve the robustness of a single network.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"053501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11477364/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demystifying the effect of receptive field size in U-Net models for medical image segmentation
Pub Date: 2024-09-01 | Epub Date: 2024-10-29 | DOI: 10.1117/1.JMI.11.5.054004
Vincent Loos, Rohit Pardasani, Navchetan Awasthi
Purpose: Medical image segmentation is a critical task in healthcare applications, and U-Nets have demonstrated promising results in this domain. We delve into the understudied aspect of receptive field (RF) size and its impact on the U-Net and attention U-Net architectures used for medical imaging segmentation.
Approach: We explore several critical elements, including the relationship among RF size, characteristics of the region of interest, and model performance, as well as the balance between RF size and computational costs for U-Net and attention U-Net methods on different datasets. We also propose a mathematical notation for representing the theoretical receptive field (TRF) of a given layer in a network and introduce two new metrics, the effective receptive field (ERF) rate and the object rate, which quantify the fraction of significantly contributing pixels within the ERF relative to the TRF area and the relative size of the segmentation object compared with the TRF size, respectively.
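For reference, the TRF of a stack of convolution and pooling layers follows the standard receptive-field recurrence; the paper introduces its own notation, but a minimal calculator using the textbook arithmetic looks like this:

```python
def theoretical_rf(layers):
    """Compute the theoretical receptive field of a conv/pool stack.

    `layers` is a list of (kernel_size, stride) pairs; uses the standard
    recurrence r <- r + (k - 1) * j, j <- j * s, where j is the cumulative
    stride ("jump"). This is the textbook RF arithmetic, not the paper's
    specific notation.
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Two U-Net-style encoder stages: conv3x3, conv3x3, maxpool2, conv3x3, conv3x3
print(theoretical_rf([(3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]))  # -> 14
```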
Results: The results demonstrate that there exists an optimal TRF size that successfully strikes a balance between capturing a wider global context and maintaining computational efficiency, thereby optimizing model performance. Interestingly, a distinct correlation is observed between the data complexity and the required TRF size; segmentation based solely on contrast achieved peak performance even with smaller TRF sizes, whereas more complex segmentation tasks necessitated larger TRFs. Attention U-Net models consistently outperformed their U-Net counterparts, highlighting the value of attention mechanisms regardless of TRF size.
Conclusions: These insights present an invaluable resource for developing more efficient U-Net-based architectures for medical imaging and pave the way for future exploration of other segmentation architectures. We also developed a tool that calculates the TRF for a U-Net (or attention U-Net) model and suggests an appropriate TRF size for a given model and dataset.
{"title":"Demystifying the effect of receptive field size in U-Net models for medical image segmentation.","authors":"Vincent Loos, Rohit Pardasani, Navchetan Awasthi","doi":"10.1117/1.JMI.11.5.054004","DOIUrl":"10.1117/1.JMI.11.5.054004","url":null,"abstract":"<p><strong>Purpose: </strong>Medical image segmentation is a critical task in healthcare applications, and U-Nets have demonstrated promising results in this domain. We delve into the understudied aspect of receptive field (RF) size and its impact on the U-Net and attention U-Net architectures used for medical imaging segmentation.</p><p><strong>Approach: </strong>We explore several critical elements including the relationship among RF size, characteristics of the region of interest, and model performance, as well as the balance between RF size and computational costs for U-Net and attention U-Net methods for different datasets. We also propose a mathematical notation for representing the theoretical receptive field (TRF) of a given layer in a network and propose two new metrics, namely, the effective receptive field (ERF) rate and the object rate, to quantify the fraction of significantly contributing pixels within the ERF against the TRF area and assessing the relative size of the segmentation object compared with the TRF size, respectively.</p><p><strong>Results: </strong>The results demonstrate that there exists an optimal TRF size that successfully strikes a balance between capturing a wider global context and maintaining computational efficiency, thereby optimizing model performance. Interestingly, a distinct correlation is observed between the data complexity and the required TRF size; segmentation based solely on contrast achieved peak performance even with smaller TRF sizes, whereas more complex segmentation tasks necessitated larger TRFs. Attention U-Net models consistently outperformed their U-Net counterparts, highlighting the value of attention mechanisms regardless of TRF size.</p><p><strong>Conclusions: </strong>These insights present an invaluable resource for developing more efficient U-Net-based architectures for medical imaging and pave the way for future exploration of other segmentation architectures. A tool is also developed, which calculates the TRF for a U-Net (and attention U-Net) model and also suggests an appropriate TRF size for a given model and dataset.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054004"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11520766/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HarmonyTM: multi-center data harmonization applied to distributed learning for Parkinson's disease classification
Pub Date: 2024-09-01 | Epub Date: 2024-09-20 | DOI: 10.1117/1.JMI.11.5.054502
Raissa Souza, Emma A M Stanley, Vedant Gulve, Jasmine Moore, Chris Kang, Richard Camicioli, Oury Monchi, Zahinoor Ismail, Matthias Wilms, Nils D Forkert
Purpose: Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups.
Approach: We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model's feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to "unlearn" bias from the features used in the model for classifying Parkinson's disease (PD). We evaluated HarmonyTM using multi-center three-dimensional (3D) neuroimaging datasets from 83 centers using 23 different scanners.
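The abstract does not name the exact adversarial mechanism; one common way to "unlearn" scanner information from features is a gradient reversal layer, sketched below as an assumption:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; negates (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = torch.rand(8, 128, requires_grad=True)
rev = GradReverse.apply(features, 1.0)
# A scanner classifier trained on `rev` receives normal gradients, but the
# feature extractor receives reversed ones, pushing it to discard
# scanner-specific cues while the PD classification head trains as usual.
print(rev.shape)  # torch.Size([8, 128])
```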
Results: Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup.
Conclusion: HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD, a key aspect for deploying ML models for clinical applications.
{"title":"HarmonyTM: multi-center data harmonization applied to distributed learning for Parkinson's disease classification.","authors":"Raissa Souza, Emma A M Stanley, Vedant Gulve, Jasmine Moore, Chris Kang, Richard Camicioli, Oury Monchi, Zahinoor Ismail, Matthias Wilms, Nils D Forkert","doi":"10.1117/1.JMI.11.5.054502","DOIUrl":"10.1117/1.JMI.11.5.054502","url":null,"abstract":"<p><strong>Purpose: </strong>Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups.</p><p><strong>Approach: </strong>We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model's feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to \"unlearn\" bias from the features used in the model for classifying Parkinson's disease (PD). We evaluated HarmonyTM using multi-center three-dimensional (3D) neuroimaging datasets from 83 centers using 23 different scanners.</p><p><strong>Results: </strong>Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup.</p><p><strong>Conclusion: </strong>HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD-a key aspect for deploying ML models for clinical applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054502"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11413651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expanding generalized contrast-to-noise ratio into a clinically relevant measure of lesion detectability by considering size and spatial resolution
Pub Date: 2024-09-01 | Epub Date: 2024-10-23 | DOI: 10.1117/1.JMI.11.5.057001
Siegfried Schlunk, Brett Byram
Purpose: Early image quality metrics were often designed with clinicians in mind, and ideal metrics would correlate with the subjective opinion of practitioners. Over time, adaptive beamformers and other post-processing methods have become more common, and these newer methods often violate assumptions of earlier image quality metrics, invalidating the meaning of those metrics. The result is that beamformers may "manipulate" metrics without producing more clinical information.
Approach: In this work, Smith et al.'s signal-to-noise ratio (SNR) metric for lesion detectability is considered, and a more robust version, here called generalized SNR (gSNR), is proposed that uses the generalized contrast-to-noise ratio (gCNR) as its core. It is analytically shown that for Rayleigh-distributed data, gCNR is a function of Smith et al.'s C_ψ (and therefore can be used as a substitution). More robust methods for estimating the resolution cell size are considered. Simulated lesions are included to verify the equations and demonstrate behavior, and the approach is shown to apply equally well to in vivo data.
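The gCNR used as the core here has a standard definition, one minus the overlap of the lesion and background amplitude distributions (Rodriguez-Molares et al.); a histogram-based estimate on Rayleigh-distributed samples:

```python
import numpy as np

def gcnr(inside, outside, bins=256):
    """Generalized CNR: 1 minus the overlap of the two pixel distributions.

    Follows the published gCNR definition; `inside`/`outside` are pixel
    amplitude samples from the lesion and the background.
    """
    lo = min(inside.min(), outside.min())
    hi = max(inside.max(), outside.max())
    h_in, _ = np.histogram(inside, bins=bins, range=(lo, hi))
    h_out, _ = np.histogram(outside, bins=bins, range=(lo, hi))
    h_in = h_in / h_in.sum()
    h_out = h_out / h_out.sum()
    return 1.0 - np.minimum(h_in, h_out).sum()

# Rayleigh-distributed envelopes with different scales, as in speckle
rng = np.random.default_rng(0)
print(gcnr(rng.rayleigh(1.0, 10000), rng.rayleigh(2.0, 10000)))
```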
Results: gSNR is shown to be equivalent to SNR for delay-and-sum (DAS) beamformed data, as intended. However, it is shown to be more robust against transformations and to report lesion detectability more accurately for non-Rayleigh-distributed data. In the simulation included, the SNR of DAS was 4.4 ± 0.8, and minimum variance (MV) was 6.4 ± 1.9, but the gSNR of DAS was 4.5 ± 0.9, and MV was 3.0 ± 0.9, which agrees with the subjective assessment of the image. Likewise, the DAS² transformation (which is clinically identical to DAS) had an incorrect SNR of 9.4 ± 1.0 and a correct gSNR of 4.4 ± 0.9. Similar results are shown in vivo.
Conclusions: Using gCNR as a component to estimate gSNR creates a robust measure of lesion detectability. Like SNR, gSNR can be compared with the Rose criterion and may better correlate with clinical assessments of image quality for modern beamformers.
{"title":"Expanding generalized contrast-to-noise ratio into a clinically relevant measure of lesion detectability by considering size and spatial resolution.","authors":"Siegfried Schlunk, Brett Byram","doi":"10.1117/1.JMI.11.5.057001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.057001","url":null,"abstract":"<p><strong>Purpose: </strong>Early image quality metrics were often designed with clinicians in mind, and ideal metrics would correlate with the subjective opinion of practitioners. Over time, adaptive beamformers and other post-processing methods have become more common, and these newer methods often violate assumptions of earlier image quality metrics, invalidating the meaning of those metrics. The result is that beamformers may \"manipulate\" metrics without producing more clinical information.</p><p><strong>Approach: </strong>In this work, Smith et al.'s signal-to-noise ratio (SNR) metric for lesion detectability is considered, and a more robust version, here called generalized SNR (gSNR), is proposed that uses generalized contrast-to-noise ratio (gCNR) as a core. It is analytically shown that for Rayleigh distributed data, gCNR is a function of Smith et al.'s <math> <mrow><msub><mi>C</mi> <mi>ψ</mi></msub> </mrow> </math> (and therefore can be used as a substitution). More robust methods for estimating the resolution cell size are considered. Simulated lesions are included to verify the equations and demonstrate behavior, and it is shown to apply equally well to <i>in vivo</i> data.</p><p><strong>Results: </strong>gSNR is shown to be equivalent to SNR for delay-and-sum (DAS) beamformed data, as intended. However, it is shown to be more robust against transformations and report lesion detectability more accurately for non-Rayleigh distributed data. In the simulation included, the SNR of DAS was <math><mrow><mn>4.4</mn> <mo>±</mo> <mn>0.8</mn></mrow> </math> , and minimum variance (MV) was <math><mrow><mn>6.4</mn> <mo>±</mo> <mn>1.9</mn></mrow> </math> , but the gSNR of DAS was <math><mrow><mn>4.5</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> , and MV was <math><mrow><mn>3.0</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> , which agrees with the subjective assessment of the image. Likewise, the <math> <mrow><msup><mi>DAS</mi> <mn>2</mn></msup> </mrow> </math> transformation (which is clinically identical to DAS) had an incorrect SNR of <math><mrow><mn>9.4</mn> <mo>±</mo> <mn>1.0</mn></mrow> </math> and a correct gSNR of <math><mrow><mn>4.4</mn> <mo>±</mo> <mn>0.9</mn></mrow> </math> . Similar results are shown <i>in vivo</i>.</p><p><strong>Conclusions: </strong>Using gCNR as a component to estimate gSNR creates a robust measure of lesion detectability. Like SNR, gSNR can be compared with the Rose criterion and may better correlate with clinical assessments of image quality for modern beamformers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"057001"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11498315/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated echocardiography view classification and quality assessment with recognition of unknown views
Pub Date: 2024-09-01 | Epub Date: 2024-08-30 | DOI: 10.1117/1.JMI.11.5.054002
Gino E Jansen, Bob D de Vos, Mitchel A Molenaar, Mark J Schuuring, Berto J Bouma, Ivana Išgum
Purpose: Interpreting echocardiographic exams requires substantial manual interaction as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.
Approach: We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.
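A common way to use logit activations for unknown-view recognition is to threshold the maximum logit, routing low-confidence clips to an "unknown" class; the threshold value below is hypothetical:

```python
import torch

def classify_with_unknown(logits, threshold):
    """Route low-confidence views to an 'unknown' class via max-logit.

    The abstract only states that logit activations are used for unknown-view
    recognition; thresholding the maximum logit is one standard realization.
    """
    max_logit, pred = logits.max(dim=1)
    pred[max_logit < threshold] = -1  # -1 denotes 'unknown view'
    return pred

logits = torch.tensor([[8.2, 1.1, 0.3], [2.0, 1.9, 1.8]])
print(classify_with_unknown(logits, threshold=5.0))  # tensor([ 0, -1])
```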
Results: The proposed method achieved an accuracy of 84.9% ± 0.67 for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.
Conclusion: The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.
{"title":"Automated echocardiography view classification and quality assessment with recognition of unknown views.","authors":"Gino E Jansen, Bob D de Vos, Mitchel A Molenaar, Mark J Schuuring, Berto J Bouma, Ivana Išgum","doi":"10.1117/1.JMI.11.5.054002","DOIUrl":"10.1117/1.JMI.11.5.054002","url":null,"abstract":"<p><strong>Purpose: </strong>Interpreting echocardiographic exams requires substantial manual interaction as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.</p><p><strong>Approach: </strong>We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.</p><p><strong>Results: </strong>The proposed method achieved an accuracy of <math><mrow><mn>84.9</mn> <mo>%</mo> <mo>±</mo> <mn>0.67</mn></mrow> </math> for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman's rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.</p><p><strong>Conclusion: </strong>The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054002"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364256/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI
Pub Date: 2024-09-01 | Epub Date: 2024-09-03 | DOI: 10.1117/1.JMI.11.5.054003
Nagasoujanya V Annasamudram, Azubuike M Okorie, Richard G Spencer, Rita R Kalyani, Qi Yang, Bennett A Landman, Luigi Ferrucci, Sokratis Makrogiannis
Purpose: Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.
Approach: We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh (gracilis, hamstring, quadriceps femoris, and sartorius) using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and the accuracy of deep networks, enabling accurate assessment of the volume and fat content of muscle groups.
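One simple form of such a fusion averages the propagated atlas labels into per-class consensus maps and blends them with the network's softmax output; this is a schematic stand-in for the paper's fusion strategy, and the 50/50 weighting is an assumption:

```python
import numpy as np

def fuse_segmentations(atlas_label_maps, dl_probs, dl_weight=0.5):
    """Fuse propagated multi-atlas labels with deep-network probabilities.

    `atlas_label_maps`: list of (H, W, D) integer label volumes warped to the
    subject; `dl_probs`: (C, H, W, D) softmax output. Weighted label voting.
    """
    n_classes = dl_probs.shape[0]
    votes = np.zeros_like(dl_probs)
    for lm in atlas_label_maps:
        for c in range(n_classes):
            votes[c] += (lm == c)
    votes /= len(atlas_label_maps)          # atlas consensus in [0, 1]
    fused = (1 - dl_weight) * votes + dl_weight * dl_probs
    return fused.argmax(axis=0)

atlases = [np.random.randint(0, 5, (4, 4, 4)) for _ in range(3)]
probs = np.random.dirichlet(np.ones(5), size=(4, 4, 4)).transpose(3, 0, 1, 2)
print(fuse_segmentations(atlases, probs).shape)  # (4, 4, 4)
```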
Results: For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and Hausdorff distance 95th percentile (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.
Conclusions: Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves the segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.
{"title":"Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI.","authors":"Nagasoujanya V Annasamudram, Azubuike M Okorie, Richard G Spencer, Rita R Kalyani, Qi Yang, Bennett A Landman, Luigi Ferrucci, Sokratis Makrogiannis","doi":"10.1117/1.JMI.11.5.054003","DOIUrl":"10.1117/1.JMI.11.5.054003","url":null,"abstract":"<p><strong>Purpose: </strong>Segmentation is essential for tissue quantification and characterization in studies of aging and age-related and metabolic diseases and the development of imaging biomarkers. We propose a multi-method and multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to each other, rendering their manual delineation a challenging and time-consuming task.</p><p><strong>Approach: </strong>We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh, gracilis, hamstring, quadriceps femoris, and sartorius, using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep learning model-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and accuracy of deep networks, hence enabling accurate assessment of volume and fat content of muscle groups.</p><p><strong>Results: </strong>For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and Hausdorff distance 95th percentile (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.</p><p><strong>Conclusions: </strong>Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves the segmentation accuracy. Additional fusion with deep network decisions applied to the subject space offers complementary information. The proposed approach can produce accurate segmentation of individual muscle groups in 3D thigh MRI scans.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054003"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11369361/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radiomics and quantitative multi-parametric MRI for predicting uterine fibroid growth
Pub Date: 2024-09-01 | Epub Date: 2024-09-12 | DOI: 10.1117/1.JMI.11.5.054501
Karen Drukker, Milica Medved, Carla B Harmath, Maryellen L Giger, Obianuju S Madueke-Laveaux
Significance: Uterine fibroids (UFs) can pose a serious health risk to women. UFs are benign tumors that vary in clinical presentation from asymptomatic to causing debilitating symptoms. UF management is limited by our inability to predict UF growth rate and future morbidity.
Aim: We aim to develop a predictive model to identify UFs with increased growth rates and possible resultant morbidity.
Approach: We retrospectively analyzed 44 expertly outlined UFs from 20 patients who underwent two multi-parametric MR imaging exams as part of a prospective study over an average of 16 months. We identified 44 initial features by extracting quantitative magnetic resonance imaging (MRI) features plus morphological and textural radiomics features from dynamic contrast-enhanced (DCE), T2, and apparent diffusion coefficient sequences. Principal component analysis reduced dimensionality, with the smallest number of components explaining over 97.5% of the variance selected. Employing a leave-one-fibroid-out scheme, a linear discriminant analysis classifier utilized these components to output a growth risk score.
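This pipeline maps directly onto scikit-learn primitives; a sketch with placeholder arrays standing in for the 44-fibroid feature matrix:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

# Placeholder data: 44 fibroids x 44 features; labels mark growth above (1)
# or below (0) the cohort median growth rate.
X = np.random.rand(44, 44)
y = np.array([0, 1] * 22)

scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):
    pca = PCA(n_components=0.975)  # fewest components explaining >= 97.5% variance
    Xt = pca.fit_transform(X[train])
    lda = LinearDiscriminantAnalysis().fit(Xt, y[train])
    scores[test] = lda.decision_function(pca.transform(X[test]))  # growth risk score
print(scores[:5])
```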
Results: The classifier incorporated the first three principal components and achieved an area under the receiver operating characteristic curve of 0.80 (95% confidence interval [0.69; 0.91]), effectively distinguishing UFs growing faster than the median growth rate of 0.93 cm³/year/fibroid from slower-growing ones within the cohort. Time-to-event analysis, dividing the cohort based on the median growth risk score, yielded a hazard ratio of 0.33 [0.15; 0.76], demonstrating potential clinical utility.
Conclusion: We developed a promising predictive model utilizing quantitative MRI features and principal component analysis to identify UFs with increased growth rates. Furthermore, the model's discrimination ability supports its potential clinical utility in developing tailored patient and fibroid-specific management once validated on a larger cohort.
{"title":"Radiomics and quantitative multi-parametric MRI for predicting uterine fibroid growth.","authors":"Karen Drukker, Milica Medved, Carla B Harmath, Maryellen L Giger, Obianuju S Madueke-Laveaux","doi":"10.1117/1.JMI.11.5.054501","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.054501","url":null,"abstract":"<p><strong>Significance: </strong>Uterine fibroids (UFs) can pose a serious health risk to women. UFs are benign tumors that vary in clinical presentation from asymptomatic to causing debilitating symptoms. UF management is limited by our inability to predict UF growth rate and future morbidity.</p><p><strong>Aim: </strong>We aim to develop a predictive model to identify UFs with increased growth rates and possible resultant morbidity.</p><p><strong>Approach: </strong>We retrospectively analyzed 44 expertly outlined UFs from 20 patients who underwent two multi-parametric MR imaging exams as part of a prospective study over an average of 16 months. We identified 44 initial features by extracting quantitative magnetic resonance imaging (MRI) features plus morphological and textural radiomics features from DCE, T2, and apparent diffusion coefficient sequences. Principal component analysis reduced dimensionality, with the smallest number of components explaining over 97.5% of the variance selected. Employing a leave-one-fibroid-out scheme, a linear discriminant analysis classifier utilized these components to output a growth risk score.</p><p><strong>Results: </strong>The classifier incorporated the first three principal components and achieved an area under the receiver operating characteristic curve of 0.80 (95% confidence interval [0.69; 0.91]), effectively distinguishing UFs growing faster than the median growth rate of <math><mrow><mn>0.93</mn> <mtext> </mtext> <msup><mrow><mi>cm</mi></mrow> <mrow><mn>3</mn></mrow> </msup> <mo>/</mo> <mi>year</mi> <mo>/</mo> <mi>fibroid</mi></mrow> </math> from slower-growing ones within the cohort. Time-to-event analysis, dividing the cohort based on the median growth risk score, yielded a hazard ratio of 0.33 [0.15; 0.76], demonstrating potential clinical utility.</p><p><strong>Conclusion: </strong>We developed a promising predictive model utilizing quantitative MRI features and principal component analysis to identify UFs with increased growth rates. Furthermore, the model's discrimination ability supports its potential clinical utility in developing tailored patient and fibroid-specific management once validated on a larger cohort.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"054501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Photon-counting computed tomography versus energy-integrating computed tomography for detection of small liver lesions: comparison using a virtual imaging framework
Pub Date: 2024-09-01 | Epub Date: 2024-10-17 | DOI: 10.1117/1.JMI.11.5.053502
Nicholas Felice, Benjamin Wildman-Tobriner, William Paul Segars, Mustafa R Bashir, Daniele Marin, Ehsan Samei, Ehsan Abadi
Purpose: Photon-counting computed tomography (PCCT) has the potential to provide superior image quality to energy-integrating CT (EICT). We objectively compare PCCT to EICT for liver lesion detection.
Approach: Fifty anthropomorphic, computational phantoms with inserted liver lesions were generated. Contrast-enhanced scans of each phantom were simulated at the portal venous phase. The acquisitions were done using DukeSim, a validated CT simulation platform. Scans were simulated at two dose levels (CTDIvol 1.5 to 6.0 mGy) modeling PCCT (NAEOTOM Alpha, Siemens, Erlangen, Germany) and EICT (SOMATOM Flash, Siemens). Images were reconstructed with varying levels of kernel sharpness (soft, medium, sharp). To provide a quantitative estimate of image quality, the modulation transfer function (MTF), frequency at 50% of the MTF (f50), noise magnitude, contrast-to-noise ratio (CNR, per lesion), and detectability index (d', per lesion) were measured.
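Of the quoted metrics, f50 is simply the spatial frequency at which the MTF drops to half its zero-frequency value; a generic estimator on a sampled curve (the exponential MTF below is a toy model, not the study's data):

```python
import numpy as np

def f50(frequencies, mtf):
    """Frequency at which the MTF falls to 50% of its zero-frequency value.

    Linear interpolation between the two samples bracketing the 50% level.
    """
    target = 0.5 * mtf[0]
    idx = np.argmax(mtf < target)          # first sample below 50%
    f_lo, f_hi = frequencies[idx - 1], frequencies[idx]
    m_lo, m_hi = mtf[idx - 1], mtf[idx]
    return f_lo + (target - m_lo) * (f_hi - f_lo) / (m_hi - m_lo)

freqs = np.linspace(0, 1.2, 100)           # cycles/mm
mtf = np.exp(-2.0 * freqs)                 # toy MTF model
print(f50(freqs, mtf))                     # ~0.347 cycles/mm
```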
Results: Across all studied conditions, the best detection performance, measured by d', was for PCCT images with the highest dose level and softest kernel. With soft kernel reconstruction, PCCT demonstrated improved lesion CNR and d' compared with EICT, with a mean increase in CNR of 35.0% (p < 0.001) and 21% (p < 0.001) and a mean increase in d' of 41.0% (p < 0.001) and 23.3% (p = 0.007) for the 1.5 and 6.0 mGy acquisitions, respectively. The improvements were greatest for larger phantoms, low-contrast lesions, and low-dose scans.
Conclusions: PCCT demonstrated objective improvement in liver lesion detection and image quality metrics compared with EICT. These advances may lead to earlier and more accurate liver lesion detection, thus improving patient care.
{"title":"Photon-counting computed tomography versus energy-integrating computed tomography for detection of small liver lesions: comparison using a virtual framework imaging.","authors":"Nicholas Felice, Benjamin Wildman-Tobriner, William Paul Segars, Mustafa R Bashir, Daniele Marin, Ehsan Samei, Ehsan Abadi","doi":"10.1117/1.JMI.11.5.053502","DOIUrl":"10.1117/1.JMI.11.5.053502","url":null,"abstract":"<p><strong>Purpose: </strong>Photon-counting computed tomography (PCCT) has the potential to provide superior image quality to energy-integrating CT (EICT). We objectively compare PCCT to EICT for liver lesion detection.</p><p><strong>Approach: </strong>Fifty anthropomorphic, computational phantoms with inserted liver lesions were generated. Contrast-enhanced scans of each phantom were simulated at the portal venous phase. The acquisitions were done using DukeSim, a validated CT simulation platform. Scans were simulated at two dose levels ( <math> <mrow> <msub><mrow><mi>CTDI</mi></mrow> <mrow><mi>vol</mi></mrow> </msub> </mrow> </math> 1.5 to 6.0 mGy) modeling PCCT (NAEOTOM Alpha, Siemens, Erlangen, Germany) and EICT (SOMATOM Flash, Siemens). Images were reconstructed with varying levels of kernel sharpness (soft, medium, sharp). To provide a quantitative estimate of image quality, the modulation transfer function (MTF), frequency at 50% of the MTF ( <math> <mrow><msub><mi>f</mi> <mn>50</mn></msub> </mrow> </math> ), noise magnitude, contrast-to-noise ratio (CNR, per lesion), and detectability index ( <math> <mrow> <msup><mrow><mi>d</mi></mrow> <mrow><mo>'</mo></mrow> </msup> </mrow> </math> , per lesion) were measured.</p><p><strong>Results: </strong>Across all studied conditions, the best detection performance, measured by <math> <mrow> <msup><mrow><mi>d</mi></mrow> <mrow><mo>'</mo></mrow> </msup> </mrow> </math> , was for PCCT images with the highest dose level and softest kernel. With soft kernel reconstruction, PCCT demonstrated improved lesion CNR and <math> <mrow> <msup><mrow><mi>d</mi></mrow> <mrow><mo>'</mo></mrow> </msup> </mrow> </math> compared with EICT, with a mean increase in CNR of 35.0% ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ) and 21% ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ) and a mean increase in <math> <mrow> <msup><mrow><mi>d</mi></mrow> <mrow><mo>'</mo></mrow> </msup> </mrow> </math> of 41.0% ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ) and 23.3% ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.007</mn></mrow> </math> ) for the 1.5 and 6.0 mGy acquisitions, respectively. The improvements were greatest for larger phantoms, low-contrast lesions, and low-dose scans.</p><p><strong>Conclusions: </strong>PCCT demonstrated objective improvement in liver lesion detection and image quality metrics compared with EICT. 
These advances may lead to earlier and more accurate liver lesion detection, thus improving patient care.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"053502"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486217/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}