Pub Date: 2025-01-08 | DOI: 10.1016/j.compmedimag.2024.102490
Yiwen Shen , Li Chen , Jieyi Liu , Haobo Chen , Changyan Wang , Hong Ding , Qi Zhang
Parkinson disease (PD) is a prevalent neurodegenerative disorder, and its accurate diagnosis is crucial for timely intervention. We propose the PArkinson disease Denoising and Segmentation Network (PADS-Net) to simultaneously denoise and segment transcranial ultrasound images of the midbrain for accurate PD diagnosis. The PADS-Net is built upon generative adversarial networks and incorporates a multi-task deep learning framework aimed at jointly optimizing the denoising and segmentation of ultrasound images. A composite loss function, including the mean absolute error, the mean squared error and the Dice loss, is adopted in the PADS-Net to effectively capture image details. The PADS-Net also integrates radiomics techniques for PD diagnosis by exploiting high-throughput features from ultrasound images. A four-branch ensemble diagnostic model is designed by utilizing the two “wings” of the butterfly-shaped midbrain region on both ipsilateral and contralateral images to enhance the accuracy of PD diagnosis. Experimental results demonstrate that the PADS-Net not only reduced speckle noise, achieving an edge-to-noise ratio of 16.90, but also attained a Dice coefficient of 0.91 for midbrain segmentation. The PADS-Net finally achieved an area under the receiver operating characteristic curve as high as 0.87 for diagnosis of PD. Our PADS-Net excels in transcranial ultrasound image denoising and segmentation and offers a potential clinical solution for accurate PD assessment.
PADS-Net: GAN-based radiomics using multi-task network of denoising and segmentation for ultrasonic diagnosis of Parkinson disease. Computerized Medical Imaging and Graphics, vol. 120, Article 102490.
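The composite loss described above combines MAE, MSE, and Dice terms. A minimal sketch of such a loss for arrays in [0, 1]; the equal weights are illustrative assumptions, since the paper's actual weighting is not given here:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|); 0 for a perfect overlap.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def composite_loss(pred, target, w_mae=1.0, w_mse=1.0, w_dice=1.0):
    # MAE preserves fine detail, MSE penalizes large errors, Dice drives overlap.
    # The weights here are illustrative, not the paper's values.
    mae = np.mean(np.abs(pred - target))
    mse = np.mean((pred - target) ** 2)
    return w_mae * mae + w_mse * mse + w_dice * dice_loss(pred, target)
```

Each term pulls on a different aspect of the output, which is why such composites are common in joint denoising-plus-segmentation training.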
Pub Date: 2025-01-08 | DOI: 10.1016/j.compmedimag.2025.102492
Lidan Fu , Lingbing Li , Binchun Lu , Xiaoyong Guo , Xiaojing Shi , Jie Tian , Zhenhua Hu
In clinical optical molecular imaging, the need for real-time high frame rates and low excitation doses to ensure patient safety inherently increases susceptibility to detection noise. Faced with the challenge of image degradation caused by severe noise, image denoising is essential for mitigating the trade-off between acquisition cost and image quality. However, prevailing deep learning methods exhibit uncontrollable and suboptimal performance with limited interpretability, primarily because they neglect the underlying physical model and frequency information. In this work, we introduce an end-to-end model-driven Deep Equilibrium Unfolding Mamba (DEQ-UMamba) that integrates the proximal gradient descent technique and learned spatial-frequency characteristics to decouple complex noise structures into statistical distributions, enabling effective noise estimation and suppression in fluorescent images. Moreover, to address the computational limitations of unfolding networks, DEQ-UMamba trains an implicit mapping by directly differentiating the equilibrium point of the convergent solution, thereby ensuring stability and avoiding non-convergent behavior. With each network module aligned to a corresponding operation in the iterative optimization process, the proposed method achieves clear structural interpretability and strong performance. Comprehensive experiments conducted on both clinical and in vivo datasets demonstrate that DEQ-UMamba outperforms current state-of-the-art alternatives while utilizing fewer parameters, facilitating the advancement of cost-effective and high-quality clinical molecular imaging.
Deep Equilibrium Unfolding Learning for Noise Estimation and Removal in Optical Molecular Imaging. Computerized Medical Imaging and Graphics, vol. 120, Article 102492.
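The deep-equilibrium idea above, training an implicit mapping by solving for the fixed point of a layer iteration, can be illustrated with a toy contraction mapping; the linear map `f` is a stand-in assumption, not the paper's Mamba-based module:

```python
import numpy as np

def solve_equilibrium(f, x, z0, tol=1e-8, max_iter=500):
    # Fixed-point iteration: repeatedly apply z <- f(z, x) until z stops changing.
    # A DEQ network treats the converged z* as its output and differentiates
    # through the equilibrium condition z* = f(z*, x) rather than the iterates.
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction: f(z, x) = 0.5*z + x has the unique fixed point z* = 2x.
f = lambda z, x: 0.5 * z + x
```

Because only the equilibrium point matters, memory does not grow with the number of iterations, which is the computational advantage the abstract alludes to.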
Pub Date: 2025-01-07 | DOI: 10.1016/j.compmedimag.2024.102485
Shuanglin Jiang , Jiangchang Xu , Wenyin Wang , Baoxin Tao , Yiqun Wu , Xiaojun Chen
Accurate segmentation of the inferior alveolar nerve (IAN) within Cone-Beam Computed Tomography (CBCT) images is critical for the precise planning of oral and maxillofacial surgeries, especially to avoid IAN damage. Existing methods often fail due to the low contrast of the IAN and the presence of artifacts, which can cause segmentation discontinuities. To address these challenges, this paper proposes a novel approach that incorporates Non-Uniform Rational B-Spline (NURBS) curve shape priors into a multiscale attention network for the automatic segmentation of the IAN. First, an automatic method for generating the NURBS shape prior is proposed and introduced into the segmentation network, which significantly enhances the continuity and accuracy of IAN segmentation. Then a multiscale attention segmentation network incorporating a dilation selective attention module is developed to improve the network’s feature extraction capacity. The proposed approach is validated on both in-house and public datasets, showcasing superior performance compared to established benchmarks: it achieves a Dice coefficient (Dice) of 80.29±11.04% and an intersection over union (IoU) of 68.14±12.06%, while the 95% Hausdorff distance (95HD) reaches 1.61±6.14 mm and the mean surface distance (MSD) reaches 0.64±2.16 mm on the private dataset. On the public dataset, the Dice reaches 80.69±4.93%, IoU reaches 67.86±6.73%, 95HD reaches 1.04±0.95 mm, and MSD reaches 0.42±0.34 mm. Compared to state-of-the-art networks, the proposed approach outperforms in both voxel accuracy and surface distance. It offers significant potential to improve doctors’ efficiency in segmentation tasks and holds promise for applications in dental surgery planning. The source codes are available at https://github.com/SJTUjsl/NURBS_IAN.git.
NURBS curve shape prior-guided multiscale attention network for automatic segmentation of the inferior alveolar nerve. Computerized Medical Imaging and Graphics, vol. 120, Article 102485.
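The overlap metrics reported above (Dice and IoU) are related by Dice = 2·IoU/(1+IoU); a minimal sketch for binary masks:

```python
import numpy as np

def dice_and_iou(pred, target):
    # pred, target: boolean arrays of the same shape.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # 2|P∩G| / (|P| + |G|)
    iou = inter / union                               # |P∩G| / |P∪G|
    return dice, iou
```

Dice always exceeds IoU for partial overlap, which is worth remembering when comparing papers that report different overlap metrics.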
Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102491
Xinghua Ma , Mingye Zou , Xinyan Fang , Gongning Luo , Wei Wang , Suyu Dong , Xiangyu Li , Kuanquan Wang , Qing Dong , Ye Tian , Shuo Li
A generic and versatile CT Image Reconstruction (CTIR) scheme can efficiently mitigate imaging noise resulting from inherent physical limitations, substantially bolstering the dependability of CT imaging diagnostics across a wider spectrum of patient cases. Current CTIR techniques often concentrate on distinct areas such as Low-Dose CT denoising (LDCTD), Sparse-View CT reconstruction (SVCTR), and Metal Artifact Reduction (MAR). Nevertheless, due to the intricate nature of multi-scenario CTIR, these techniques frequently narrow their focus to specific tasks, resulting in limited generalization capabilities for diverse scenarios. We propose a novel Convergent–Diffusion Denoising Model (CDDM) for multi-scenario CTIR, which utilizes a stepwise denoising process to converge toward an imaging-noise-free image with high generalization. CDDM uses a diffusion-based process built on an a priori decay distribution to steadily correct imaging noise, thus avoiding overfitting to individual samples. Within CDDM, a domain-correlated sampling network (DS-Net) provides an innovative sinogram-guided noise prediction scheme to leverage both image and sinogram (i.e., dual-domain) information. DS-Net analyzes the correlation of the dual-domain representations for sampling the noise distribution, introducing sinogram semantics to avoid secondary artifacts. Experimental results validate the practical applicability of our scheme across various CTIR scenarios, including LDCTD, MAR, and SVCTR, with the support of sinogram knowledge.
Convergent–Diffusion Denoising Model for multi-scenario CT Image Reconstruction. Computerized Medical Imaging and Graphics, vol. 120, Article 102491.
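The stepwise, prior-decay-driven process described above follows the standard diffusion formulation x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε. A sketch of the decay schedule and the closed-form noising step; the linear β schedule here is an assumption for illustration, not necessarily CDDM's choice:

```python
import numpy as np

def alpha_bar(T=1000, beta_start=1e-4, beta_end=2e-2):
    # Cumulative signal-retention factor ᾱ_t = Π (1 - β_s) for a linear β schedule.
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def noisy_sample(x0, t, abar, rng):
    # Closed-form forward step: blend the clean image with Gaussian noise.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps
```

The reverse (denoising) process then steps back along this schedule, predicting and removing the noise at each t, which is the "stepwise denoising toward an imaging-noise-free image" the abstract describes.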
Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102486
Xinyao Liu , Junchang Xin , Qi Shen , Zhihong Huang , Zhiqiong Wang
The increasing popularity of medical imaging and its expanding applications pose significant challenges for radiologists, who must spend substantial time and effort every day reviewing images and manually writing reports. To address these challenges and speed up patient care, researchers have employed deep learning methods to automatically generate medical reports. In recent years, this task has attracted increasing attention and a large amount of related work has emerged. Although some review articles have summarized the state of the art in this field, their discussions remain relatively limited. Therefore, this paper provides a comprehensive review of the latest advancements in automatic medical report generation, focusing on four key aspects: (1) describing the problem of automatic medical report generation, (2) introducing datasets of different modalities, (3) thoroughly analyzing existing evaluation metrics, and (4) classifying existing studies into six categories: retrieval-based, domain knowledge-based, attention-based, reinforcement learning-based, large language model-based, and merged models. In addition, we point out open problems in this field and discuss directions for future challenges. We hope that this review provides a thorough understanding of automatic medical report generation and encourages continued development in this area.
Automatic medical report generation based on deep learning: A state of the art survey. Computerized Medical Imaging and Graphics, vol. 120, Article 102486.
Pub Date: 2025-01-04 | DOI: 10.1016/j.compmedimag.2024.102489
Chenjun Li , Dian Yang , Shun Yao , Shuyue Wang , Ye Wu , Le Zhang , Qiannuo Li , Kang Ik Kevin Cho , Johanna Seitz-Holland , Lipeng Ning , Jon Haitz Legarreta , Yogesh Rathi , Carl-Fredrik Westin , Lauren J. O’Donnell , Nir A. Sochen , Ofer Pasternak , Fan Zhang
In this study, we developed an Evidential Ensemble Neural Network based on Deep learning and Diffusion MRI, namely DDEvENet, for anatomical brain parcellation. The key innovation of DDEvENet is the design of an evidential deep learning framework to quantify predictive uncertainty at each voxel during a single inference. To do so, we design an evidence-based ensemble learning framework for uncertainty-aware parcellation that leverages multiple parameters derived from diffusion MRI (dMRI). Using DDEvENet, we obtained accurate parcellation and uncertainty estimates across different datasets from healthy and clinical populations and with different imaging acquisitions. The overall network includes five parallel subnetworks, each dedicated to learning the FreeSurfer parcellation for a certain diffusion MRI parameter. An evidence-based ensemble methodology is then proposed to fuse the individual outputs. We perform experimental evaluations on large-scale datasets from multiple imaging sources, including high-quality diffusion MRI data from healthy adults and clinical diffusion MRI data from participants with various brain diseases (schizophrenia, bipolar disorder, attention-deficit/hyperactivity disorder, Parkinson’s disease, cerebral small vessel disease, and neurosurgical patients with brain tumors). Compared to several state-of-the-art methods, our experimental results demonstrate highly improved parcellation accuracy across the multiple testing datasets despite the differences in dMRI acquisition protocols and health conditions. Furthermore, thanks to the uncertainty estimation, our DDEvENet approach demonstrates a good ability to detect abnormal brain regions in patients with lesions, consistent with expert-drawn results, enhancing the interpretability and reliability of the segmentation results.
DDEvENet: Evidence-based ensemble learning for uncertainty-aware brain parcellation using diffusion MRI. Computerized Medical Imaging and Graphics, vol. 120, Article 102489.
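In evidential deep learning, each branch's per-voxel evidence e parameterizes a Dirichlet distribution α = e + 1, and the voxel's uncertainty is u = K/Σα for K classes. A sketch of fusing the five branch outputs by evidence accumulation; summing evidence is one common fusion rule and is an assumption here, not necessarily DDEvENet's exact rule:

```python
import numpy as np

def fuse_evidence(evidences):
    # evidences: list of non-negative arrays of shape (K,) for one voxel, one per branch.
    e = np.sum(evidences, axis=0)    # accumulate evidence across branches
    alpha = e + 1.0                  # Dirichlet parameters
    s = alpha.sum()
    prob = alpha / s                 # expected class probabilities
    uncertainty = len(alpha) / s     # u = K / S: shrinks as total evidence grows
    return prob, uncertainty
```

The appeal of this formulation is that uncertainty falls out of the same forward pass that produces the class probabilities, so no sampling or multiple inferences are needed.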
Pub Date: 2025-01-03 | DOI: 10.1016/j.compmedimag.2024.102488
Methods for the automated segmentation of brain structures are a major subject of medical research. The small structures of the deep brain have received scant attention, notably for lack of manual delineations by medical experts. In this study, we assessed automated segmentation on a novel clinical dataset containing White Matter Attenuated Inversion-Recovery (WAIR) MRI images and five manually segmented structures (substantia nigra (SN), subthalamic nucleus (STN), red nucleus (RN), mammillary body (MB) and mammillothalamic fascicle (MT-fa)) in 53 patients with severe Parkinson’s disease. T1 and DTI images were additionally used. We also assessed the reorientation of DTI diffusion vectors with reference to the ACPC line. A state-of-the-art nnU-Net method was trained and tested on subsets of 38 and 15 image datasets, respectively. We used the Dice similarity coefficient (DSC), 95% Hausdorff distance (95HD), and volumetric similarity (VS) as metrics to evaluate network efficiency in reproducing manual contouring. Random-effects models statistically compared values according to structures, accounting for between- and within-participant variability. Results show that WAIR significantly outperformed T1 for DSC (0.739 ± 0.073), 95HD (1.739 ± 0.398), and VS (0.892 ± 0.044). The DSC values for automated segmentation of MB, RN, SN, STN, and MT-fa decreased in that order, in line with the increasing complexity observed in manual segmentation. Based on training results, the reorientation of DTI vectors improved the automated segmentation.
Automated segmentation of deep brain structures from Inversion-Recovery MRI. Aigerim Dautkulova , Omar Ait Aider , Céline Teulière , Jérôme Coste , Rémi Chaix , Omar Ouachik , Bruno Pereira , Jean-Jacques Lemaire. Computerized Medical Imaging and Graphics, vol. 120, Article 102488.
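The 95% Hausdorff distance used above replaces the max in the classic Hausdorff distance with the 95th percentile of boundary-point distances, making it robust to a few outlier voxels; a minimal sketch for 2-D point sets:

```python
import numpy as np

def hd95(a, b):
    # a, b: (N, 2) and (M, 2) arrays of boundary-point coordinates.
    # Pairwise Euclidean distances between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Nearest-neighbor distances in both directions, then the 95th percentile.
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)
```

For large 3-D surfaces a KD-tree (e.g. `scipy.spatial.cKDTree`) replaces the quadratic distance matrix, but the metric itself is the same.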
Pub Date: 2025-01-01 | DOI: 10.1016/j.compmedimag.2024.102478
Portable head CT images often suffer from motion artifacts due to prolonged scanning times and critically ill patients who are unable to hold still. Image-domain motion correction is attractive for this application because it does not require CT projection data. This paper describes and evaluates a generative model based on conditional diffusion to correct motion artifacts in portable head CT scans. The model was trained to find the motion-free CT image conditioned on the paired motion-corrupted image. Our method utilizes histogram equalization to resolve the intensity-range discrepancy between skull and brain tissue, and an advanced Elucidated Diffusion Model (EDM) framework for faster sampling and better motion-correction performance. Our EDM framework is superior in correcting artifacts in the brain tissue region and across the entire image compared to CNN-based methods and the standard diffusion approach (DDPM) in a simulation study and a phantom study with known motion-free ground truth. Furthermore, we conducted a reader study on real-world portable CT scans to demonstrate the improvement in image quality achieved by our method.
Portable head CT motion artifact correction via diffusion-based generative model. Zhennong Chen , Siyeop Yoon , Quirin Strotzer , Rehab Naeem Khalid , Matthew Tivnan , Quanzheng Li , Rajiv Gupta , Dufan Wu. Computerized Medical Imaging and Graphics, vol. 119, Article 102478.
Pub Date : 2025-01-01DOI: 10.1016/j.compmedimag.2024.102482
Yiming Liu , Ling Zhang , Mingxue Gu , Yaoxing Xiao , Ting Yu , Xiang Tao , Qing Zhang , Yan Wang , Dinggang Shen , Qingli Li
Pathological analysis of the placenta is a valuable tool for gaining insights into pregnancy outcomes. In placental histopathology, multiple functional tissues can be inspected as potential signals reflecting the transfer functionality between fetal and maternal circulations. However, the identification of multiple functional tissues is challenging due to (1) severe heterogeneity in texture, size and shape, (2) distribution across different scales and (3) the need for comprehensive assessment at the whole slide image (WSI) level. To solve the aforementioned problems, we establish a new dataset and propose a computer-aided segmentation framework based on multi-model fusion and distillation to identify multiple functional tissues in placental histopathologic images, including villi, capillaries, fibrin deposits and trophoblast aggregations. Specifically, we propose a two-stage Multi-model Fusion and Distillation (MMFD) framework. Considering the multi-scale distribution and heterogeneity of multiple functional tissues, we enhance the visual representation in the first stage by fusing features from multiple models to boost the effectiveness of the network. However, the multi-model fusion stage introduces extra parameters and a significant computational burden, which is impractical for processing gigapixel WSIs in clinical practice. In the second stage, we propose a straightforward plug-in feature distillation method that transfers knowledge from the large fused model to a compact student model. On a self-collected placental dataset, our proposed MMFD framework demonstrates an improvement of 4.3% in mean Intersection over Union (mIoU) while achieving an approximately 50% increase in inference speed and using only 10% of the parameters and computational resources, compared to the parameter-efficient fine-tuned Segment Anything Model (SAM) baseline. 
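The plug-in feature distillation idea in the second stage can be sketched as a feature-matching objective: the fused multi-model "teacher" produces feature maps, and the compact student is trained so its (projected) features approximate them. The function name, the linear projection, and the plain MSE objective are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def distillation_loss(student_feat: np.ndarray,
                      teacher_feat: np.ndarray,
                      proj: np.ndarray) -> float:
    """MSE between linearly projected student features and teacher features.

    student_feat: (N, Ds), teacher_feat: (N, Dt), proj: (Ds, Dt).
    """
    aligned = student_feat @ proj      # project student into teacher space
    diff = aligned - teacher_feat
    return float(np.mean(diff ** 2))
```

Minimizing this term alongside the ordinary segmentation loss lets the student inherit the fused representation without carrying the fused model's parameters at inference time.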
Visualization of segmentation results across entire WSIs on unseen cases demonstrates the generalizability of our proposed MMFD framework. Moreover, experimental results on a public dataset further demonstrate the effectiveness of the MMFD framework on other tasks. Our work presents a foundational method to expedite quantitative analysis of placental histopathology.
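The mIoU figure quoted above is the standard per-class Intersection over Union averaged across classes; a minimal sketch for integer label maps (names are illustrative):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int) -> float:
    """Average IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:          # class absent from both maps: skip it
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```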
{"title":"Inspect quantitative signals in placental histopathology: Computer-assisted multiple functional tissues identification through multi-model fusion and distillation framework","authors":"Yiming Liu , Ling Zhang , Mingxue Gu , Yaoxing Xiao , Ting Yu , Xiang Tao , Qing Zhang , Yan Wang , Dinggang Shen , Qingli Li","doi":"10.1016/j.compmedimag.2024.102482","DOIUrl":"10.1016/j.compmedimag.2024.102482","url":null,"abstract":"<div><div>Pathological analysis of placenta is currently a valuable tool for gaining insights into pregnancy outcomes. In placental histopathology, multiple functional tissues can be inspected as potential signals reflecting the transfer functionality between fetal and maternal circulations. However, the identification of multiple functional tissues is challenging due to (1) severe heterogeneity in texture, size and shape, (2) distribution across different scales and (3) the need for comprehensive assessment at the whole slide image (WSI) level. To solve aforementioned problems, we establish a brand new dataset and propose a computer-aided segmentation framework through multi-model fusion and distillation to identify multiple functional tissues in placental histopathologic images, including villi, capillaries, fibrin deposits and trophoblast aggregations. Specifically, we propose a two-stage Multi-model Fusion and Distillation (MMFD) framework. Considering the multi-scale distribution and heterogeneity of multiple functional tissues, we enhance the visual representation in the first stage by fusing feature from multiple models to boost the effectiveness of the network. However, the multi-model fusion stage contributes to extra parameters and a significant computational burden, which is impractical for recognizing gigapixels of WSIs within clinical practice. In the second stage, we propose straightforward plug-in feature distillation method that transfers knowledge from the large fused model to a compact student model. 
In self-collected placental dataset, our proposed MMFD framework demonstrates an improvement of 4.3% in mean Intersection over Union (mIoU) while achieving an approximate 50% increase in inference speed and utilizing only 10% of parameters and computational resources, compared to the parameter-efficient fine-tuned Segment Anything Model (SAM) baseline. Visualization of segmentation results across entire WSIs on unseen cases demonstrates the generalizability of our proposed MMFD framework. Besides, experimental results on a public dataset further prove the effectiveness of MMFD framework on other tasks. Our work can present a fundamental method to expedite quantitative analysis of placental histopathology.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102482"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-01DOI: 10.1016/j.compmedimag.2024.102474
Pierre Rougé , Pierre-Henri Conze , Nicolas Passat , Odyssée Merveille
Segmentation in medical imaging is an essential and often preliminary task in the image processing chain, driving numerous efforts towards the design of robust segmentation algorithms. Supervised learning methods achieve excellent performance when fed with a sufficient amount of labeled data. However, such labels are typically highly time-consuming, error-prone and expensive to produce. Alternatively, semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled. They are particularly valuable for cerebrovascular segmentation, given that labeling a single volume requires several hours for an expert. In addition to the challenge posed by insufficient annotations, there are concerns regarding annotation consistency. The task of annotating the cerebrovascular tree is inherently ambiguous: due to the discrete nature of images, the borders and extremities of vessels are often unclear. Consequently, annotations heavily rely on expert subjectivity and on the underlying clinical objective. These discrepancies significantly increase the complexity of the segmentation task for the model and ultimately impair the results. It therefore becomes imperative to provide clinicians with precise guidelines to improve the annotation process and construct more uniform datasets. In this article, we investigate the data dependency of deep learning methods in the context of imperfect data and semi-supervised learning for cerebrovascular segmentation.
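The unsupervised-regularization idea shared by many of the compared semi-supervised methods can be sketched as a two-term objective: labeled voxels contribute a supervised term, while unlabeled voxels contribute a consistency term between two predictions of the same volume (e.g. a student network versus an EMA teacher, as in mean-teacher approaches). The function name, the binary cross-entropy choice, and the weighting are illustrative assumptions; the article benchmarks several concrete variants.

```python
import numpy as np

def semi_supervised_loss(pred_labeled: np.ndarray, target: np.ndarray,
                         pred_a: np.ndarray, pred_b: np.ndarray,
                         lam: float = 0.1) -> float:
    """Binary cross-entropy on labeled data + MSE consistency on unlabeled data."""
    eps = 1e-7  # numerical floor for the logarithms
    ce = -np.mean(target * np.log(pred_labeled + eps)
                  + (1 - target) * np.log(1 - pred_labeled + eps))
    consistency = np.mean((pred_a - pred_b) ** 2)
    return float(ce + lam * consistency)
```

The consistency weight `lam` is typically ramped up during training so that unreliable early predictions do not dominate the objective.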
{"title":"Guidelines for cerebrovascular segmentation: Managing imperfect annotations in the context of semi-supervised learning","authors":"Pierre Rougé , Pierre-Henri Conze , Nicolas Passat , Odyssée Merveille","doi":"10.1016/j.compmedimag.2024.102474","DOIUrl":"10.1016/j.compmedimag.2024.102474","url":null,"abstract":"<div><div>Segmentation in medical imaging is an essential and often preliminary task in the image processing chain, driving numerous efforts towards the design of robust segmentation algorithms. Supervised learning methods achieve excellent performances when fed with a sufficient amount of labeled data. However, such labels are typically highly time-consuming, error-prone and expensive to produce. Alternatively, semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled. They are particularly useful for cerebrovascular segmentation, given that labeling a single volume requires several hours for an expert. In addition to the challenge posed by insufficient annotations, there are concerns regarding annotation consistency. The task of annotating the cerebrovascular tree is inherently ambiguous. Due to the discrete nature of images, the borders and extremities of vessels are often unclear. Consequently, annotations heavily rely on the expert subjectivity and on the underlying clinical objective. These discrepancies significantly increase the complexity of the segmentation task for the model and consequently impair the results. Consequently, it becomes imperative to provide clinicians with precise guidelines to improve the annotation process and construct more uniform datasets. In this article, we investigate the data dependency of deep learning methods within the context of imperfect data and semi-supervised learning, for cerebrovascular segmentation. 
Specifically, this study compares various state-of-the-art semi-supervised methods based on unsupervised regularization and evaluates their performance under diverse data quantity and quality scenarios. Based on these experiments, we provide guidelines for the annotation and training of cerebrovascular segmentation models.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102474"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}