
Computerized Medical Imaging and Graphics: Latest Publications

Calcium deblooming in coronary computed tomography angiography via semantic-oriented generative adversarial network
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-24 | DOI: 10.1016/j.compmedimag.2025.102515
Huiyu Zhao , Wangshu Zhu , Luyuan Jin , Yijia Xiong , Xiao Deng , Yuehua Li , Weiwen Zou
The calcium blooming artifact produced by calcified plaque in coronary computed tomography angiography (CCTA) is a significant contributor to false-positive results for radiologists. Most previous research focused on general noise reduction in CT images, and its performance was limited when confronted with blooming artifacts. To address this problem, we designed an automated and robust semantics-oriented adversarial network that fully exploits calcified plaques as semantic regions in CCTA. Semantic features are extracted by a feature extraction module and exploited through a global–local fusion module, a generator with a semantic similarity module, and a matrix discriminator. The effectiveness of our network was validated on both a virtual and a clinical dataset. The clinical dataset consists of 372 CCTA examinations with corresponding coronary angiography (CAG) results; two cardiac radiologists (with 10 and 21 years of experience) assisted with the clinical evaluation. The proposed method effectively reduces artifacts in the three major coronary arteries and significantly improves the specificity and positive predictive value for the diagnosis of coronary stenosis.
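For readers who want a concrete picture of the adversarial setup described above, the sketch below pairs a patch-level ("matrix") discriminator with a plaque-weighted reconstruction loss. It is a minimal PyTorch illustration under our own assumptions; the module names, layer sizes, and weighting scheme are not taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' implementation): a PatchGAN-style
# "matrix" discriminator scores local regions, and a semantic plaque mask
# up-weights those regions in the generator's reconstruction loss.
import torch
import torch.nn as nn

class MatrixDiscriminator(nn.Module):
    """Outputs a matrix of real/fake logits, one per local patch, instead of a scalar."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # one logit per patch
        )
    def forward(self, x):
        return self.net(x)

def semantic_weighted_l1(fake, target, plaque_mask, weight=5.0):
    """L1 loss with a heavier penalty inside the calcified-plaque (semantic) region."""
    w = 1.0 + weight * plaque_mask
    return (w * (fake - target).abs()).mean()

# usage sketch with placeholder tensors
disc = MatrixDiscriminator()
fake_ccta = torch.rand(2, 1, 128, 128)                # generator output (placeholder)
clean_ccta = torch.rand(2, 1, 128, 128)               # deblooming target (placeholder)
mask = (torch.rand(2, 1, 128, 128) > 0.9).float()     # plaque semantic mask (placeholder)
adv_map = disc(fake_ccta)                             # matrix of real/fake logits
g_loss = semantic_weighted_l1(fake_ccta, clean_ccta, mask)
```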
{"title":"Calcium deblooming in coronary computed tomography angiography via semantic-oriented generative adversarial network","authors":"Huiyu Zhao ,&nbsp;Wangshu Zhu ,&nbsp;Luyuan Jin ,&nbsp;Yijia Xiong ,&nbsp;Xiao Deng ,&nbsp;Yuehua Li ,&nbsp;Weiwen Zou","doi":"10.1016/j.compmedimag.2025.102515","DOIUrl":"10.1016/j.compmedimag.2025.102515","url":null,"abstract":"<div><div>Calcium blooming artifact produced by calcified plaque in coronary computed tomography angiography (CCTA) is a significant contributor to false-positive results for radiologists. Most previous research focused on general noise reduction of CT images, while performance was limited when facing the blooming artifact. To address this problem, we designed an automated and robust semantics-oriented adversarial network that fully exploits the calcified plaques as semantic regions in the CCTA. The semantic features were extracted using a feature extraction module and implemented through a global–local fusion module, a generator with a semantic similarity module, and a matrix discriminator. The effectiveness of our network was validated both on a virtual and a clinical dataset. The clinical dataset consists of 372 CCTA and corresponding coronary angiogram (CAG) results, with the assistance of two cardiac radiologists (with 10 and 21 years of experience) for clinical evaluation. The proposed method effectively reduces artifacts for three major coronary arteries and significantly improves the specificity and positive predictive value for the diagnosis of coronary stenosis.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"122 ","pages":"Article 102515"},"PeriodicalIF":5.4,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TTGA U-Net: Two-stage two-stream graph attention U-Net for hepatic vessel connectivity enhancement
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-22 | DOI: 10.1016/j.compmedimag.2025.102514
Ziqi Zhao , Wentao Li , Xiaoyi Ding , Jianqi Sun , Lisa X. Xu
Accurate segmentation of hepatic vessels is pivotal for guiding preoperative planning in ablation surgery using CT images. Because non-contrast CT images often lack observable vessels, we focus on segmenting hepatic vessels in preoperative MR images. However, the vascular structures depicted in MR images are susceptible to noise, leading to challenges in connectivity. To address this issue, we propose a two-stage two-stream graph attention U-Net (TTGA U-Net) for hepatic vessel segmentation. Specifically, the first-stage network employs a CNN- or Transformer-based architecture to preliminarily locate the vessels, followed by an improved superpixel segmentation method that generates graph structures from the localization results. The second-stage network extracts graph node features through two parallel branches, a graph spatial attention network (GAT) and a graph channel attention network (GCT), employing self-attention mechanisms to balance these features. A graph pooling operation is used to aggregate node information. Moreover, we introduce a feature fusion module instead of skip connections to merge the two graph attention features, effectively providing additional information to the decoder. We establish a novel, well-annotated, high-quality MR image dataset for hepatic vessel segmentation and validate the network's vessel connectivity enhancement on this dataset and on the public 3D IRCADB dataset. Experimental results demonstrate that our TTGA U-Net outperforms state-of-the-art methods, notably enhancing vessel connectivity.
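As an illustration of the two-stream idea in this abstract, the sketch below runs superpixel node features through a spatial graph attention branch and a channel re-weighting branch, then concatenates the results. It is a simplified PyTorch approximation; the paper's GAT/GCT blocks, graph pooling, and fusion module are more involved, and all shapes here are assumptions.

```python
# Simplified two-stream sketch (assumed shapes): node-level spatial attention
# masked by adjacency, plus channel re-weighting, followed by naive fusion.
import torch
import torch.nn as nn

class DenseGraphAttention(nn.Module):
    """Self-attention over superpixel nodes, restricted to graph neighbours."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
    def forward(self, x, adj):                       # x: (N, dim), adj: (N, N) in {0, 1}
        scores = self.q(x) @ self.k(x).t() / x.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ self.v(x)

class ChannelAttention(nn.Module):
    """Re-weights feature channels of the nodes (a stand-in for the GCT branch)."""
    def __init__(self, dim, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim // r), nn.ReLU(),
                                nn.Linear(dim // r, dim), nn.Sigmoid())
    def forward(self, x):                            # x: (N, dim)
        return x * self.fc(x.mean(dim=0, keepdim=True))

nodes = torch.randn(50, 64)                          # 50 superpixel nodes, 64-dim features
adj = (torch.rand(50, 50) > 0.8).float()             # placeholder superpixel adjacency
adj.fill_diagonal_(1.0)                              # keep self-connections
spatial = DenseGraphAttention(64)(nodes, adj)
channel = ChannelAttention(64)(nodes)
fused = torch.cat([spatial, channel], dim=-1)        # naive fusion in place of the paper's module
```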
{"title":"TTGA U-Net: Two-stage two-stream graph attention U-Net for hepatic vessel connectivity enhancement","authors":"Ziqi Zhao ,&nbsp;Wentao Li ,&nbsp;Xiaoyi Ding ,&nbsp;Jianqi Sun ,&nbsp;Lisa X. Xu","doi":"10.1016/j.compmedimag.2025.102514","DOIUrl":"10.1016/j.compmedimag.2025.102514","url":null,"abstract":"<div><div>Accurate segmentation of hepatic vessels is pivotal for guiding preoperative planning in ablation surgery utilizing CT images. While non-contrast CT images often lack observable vessels, we focus on segmenting hepatic vessels within preoperative MR images. However, the vascular structures depicted in MR images are susceptible to noise, leading to challenges in connectivity. To address this issue, we propose a two-stage two-stream graph attention U-Net (i.e., TTGA U-Net) for hepatic vessel segmentation. Specifically, the first-stage network employs a CNN or Transformer-based architecture to preliminarily locate the vessel position, followed by an improved superpixel segmentation method to generate graph structures based on the positioning results. The second-stage network extracts graph node features through two parallel branches of a graph spatial attention network (GAT) and a graph channel attention network (GCT), employing self-attention mechanisms to balance these features. The graph pooling operation is utilized to aggregate node information. Moreover, we introduce a feature fusion module instead of skip connections to merge the two graph attention features, providing additional information to the decoder effectively. We establish a novel well-annotated high-quality MR image dataset for hepatic vessel segmentation and validate the vessel connectivity enhancement network’s effectiveness on this dataset and the public dataset 3D IRCADB. Experimental results demonstrate that our TTGA U-Net outperforms state-of-the-art methods, notably enhancing vessel connectivity.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"122 ","pages":"Article 102514"},"PeriodicalIF":5.4,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel generative model for brain tumor detection using magnetic resonance imaging
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-19 | DOI: 10.1016/j.compmedimag.2025.102498
José Jerovane da Costa Nascimento , Adriell Gomes Marques , Lucas do Nascimento Souza , Carlos Mauricio Jaborandy de Mattos Dourado Junior , Antonio Carlos da Silva Barros , Victor Hugo C. de Albuquerque , Luís Fabrício de Freitas Sousa
Brain tumors kill thousands of people worldwide each year, and early identification through diagnosis is essential for monitoring and treating patients. The proposed study introduces a new method based on intelligent computational cells that segment the tumor region with high precision. The method uses deep learning to detect brain tumors with the "You Only Look Once" (YOLOv8) framework, followed by a fine-tuning process at the end of the network in which intelligent computational cells traverse the detected region and segment the edges of the brain tumor. In addition, the method uses a classification pipeline that combines a set of classifiers and feature extractors with grid search to find the best combination and the best parameters for the dataset. The method obtained satisfactory results: above 98% accuracy for region detection, above 99% for brain tumor segmentation, above 98% for binary classification of brain tumors, and a segmentation time of less than 1 s, surpassing the state of the art on the same database and demonstrating the effectiveness of the proposed method. The approach also classifies different databases through data fusion, predicting both the presence of a tumor in MRI images and the patient's life expectancy. The segmentation and classification steps are validated by comparison with works in the literature that used the same dataset. The method further introduces a generative AI component for brain tumors that produces a pre-diagnosis from the input data through a Large Language Model (LLM) and can be used in systems that aid medical imaging diagnosis. As contributions, this study employs new detection models combined with innovative methods based on digital image processing to improve segmentation metrics, uses data fusion to combine two tumor datasets and enhance classification performance, and leverages LLMs to refine the pre-diagnosis obtained after classification. Thus, this study proposes a Computer-Aided Diagnosis (CAD) method built on AI with digital image processing, CNNs, and LLMs.
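The grid-search pipeline mentioned in the abstract can be sketched with scikit-learn as below: several classifier configurations are swapped into one pipeline and the best combination is chosen by cross-validation. The feature vectors, classifier choices, and parameter ranges are placeholders, not the authors' setup.

```python
# Grid-search sketch (placeholder data and parameter ranges): swap classifier
# configurations into one pipeline and pick the best by cross-validation.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.rand(200, 512)          # placeholder feature vectors per image
y = np.random.randint(0, 2, 200)      # placeholder labels: 0 = no tumor, 1 = tumor

pipe = Pipeline([("reduce", PCA(n_components=64)), ("clf", SVC())])
param_grid = [
    {"clf": [SVC()], "clf__C": [0.1, 1, 10], "clf__kernel": ["rbf", "linear"]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```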
{"title":"A novel generative model for brain tumor detection using magnetic resonance imaging","authors":"José Jerovane da Costa Nascimento ,&nbsp;Adriell Gomes Marques ,&nbsp;Lucas do Nascimento Souza ,&nbsp;Carlos Mauricio Jaborandy de Mattos Dourado Junior ,&nbsp;Antonio Carlos da Silva Barros ,&nbsp;Victor Hugo C. de Albuquerque ,&nbsp;Luís Fabrício de Freitas Sousa","doi":"10.1016/j.compmedimag.2025.102498","DOIUrl":"10.1016/j.compmedimag.2025.102498","url":null,"abstract":"<div><div>Brain tumors are a disease that kills thousands of people worldwide each year. Early identification through diagnosis is essential for monitoring and treating patients. The proposed study brings a new method through intelligent computational cells that are capable of segmenting the tumor region with high precision. The method uses deep learning to detect brain tumors with the “You only look once” (Yolov8) framework, and a fine-tuning process at the end of the network layer using intelligent computational cells capable of traversing the detected region, segmenting the edges of the brain tumor. In addition, the method uses a classification pipeline that combines a set of classifiers and extractors combined with grid search, to find the best combination and the best parameters for the dataset. The method obtained satisfactory results above 98% accuracy for region detection, and above 99% for brain tumor segmentation and accuracies above 98% for binary classification of brain tumor, and segmentation time obtaining less than 1 s, surpassing the state of the art compared to the same database, demonstrating the effectiveness of the proposed method. The new approach proposes the classification of different databases through data fusion to classify the presence of tumor in MRI images, as well as the patient’s life span. The segmentation and classification steps are validated by comparing them with the literature, with comparisons between works that used the same dataset. The method addresses a new generative AI for brain tumor capable of generating a pre-diagnosis through input data through Large Language Model (LLM), and can be used in systems to aid medical imaging diagnosis. As a contribution, this study employs new detection models combined with innovative methods based on digital image processing to improve segmentation metrics, as well as the use of Data Fusion, combining two tumor datasets to enhance classification performance. The study also utilizes LLM models to refine the pre-diagnosis obtained post-classification. Thus, this study proposes a Computer-Aided Diagnosis (CAD) method through AI with PDI, CNN, and LLM.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102498"},"PeriodicalIF":5.4,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Subtraction-free artifact-aware digital subtraction angiography image generation for head and neck vessels from motion data
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-18 | DOI: 10.1016/j.compmedimag.2025.102512
Yunbi Liu , Dong Du , Yun Liu , Shengxian Tu , Wei Yang , Xiaoguang Han , Shiteng Suo , Qingshan Liu
Digital subtraction angiography (DSA) is an essential diagnostic tool for analyzing and diagnosing vascular diseases. However, DSA imaging techniques based on subtraction are prone to artifacts due to misalignments between mask and contrast images caused by inevitable patient movements, hindering accurate vessel identification and surgical treatment. While various registration-based algorithms aim to correct these misalignments, they often fall short in efficiency and effectiveness. Recent deep learning (DL)-based studies aim to generate synthetic DSA images directly from contrast images, free of subtraction. However, these methods typically require clean, motion-free training data, which is challenging to acquire in clinical settings. As a result, existing DSA images often contain motion-affected artifacts, complicating the development of models for generating artifact-free images. In this work, we propose an innovative Artifact-aware DSA image generation method (AaDSA) that utilizes solely motion data to produce artifact-free DSA images without subtraction. Our method employs a Gradient Field Transformation (GFT)-based technique to create an artifact mask that identifies artifact regions in DSA images with minimal manual annotation. This artifact mask guides the training of the AaDSA model, allowing it to bypass the adverse effects of artifact regions during model training. During inference, the AaDSA model can automatically generate artifact-free DSA images from single contrast images without any human intervention. Experimental results on a real head-and-neck DSA dataset show that our approach significantly outperforms state-of-the-art methods, highlighting its potential for clinical use.
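A minimal way to realize the mask-guided training described above is to exclude artifact pixels from the reconstruction loss, as in the PyTorch sketch below. The mask itself (produced by the paper's GFT step) is simulated here, and the loss form is our assumption rather than the published objective.

```python
# Artifact-aware loss sketch (assumed loss form): pixels flagged as artifacts
# in the motion-affected DSA target are excluded from the reconstruction loss,
# so the generator is never penalized for disagreeing with artifact regions.
import torch

def artifact_aware_l1(pred, target, artifact_mask, eps=1e-6):
    """L1 loss computed only where artifact_mask == 0 (artifact-free pixels)."""
    valid = 1.0 - artifact_mask
    return (valid * (pred - target).abs()).sum() / (valid.sum() + eps)

pred = torch.rand(1, 1, 256, 256)                     # synthetic DSA from the generator
target = torch.rand(1, 1, 256, 256)                   # real DSA with motion artifacts
mask = (torch.rand(1, 1, 256, 256) > 0.95).float()    # 1 inside (simulated) artifact regions
loss = artifact_aware_l1(pred, target, mask)
```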
{"title":"Subtraction-free artifact-aware digital subtraction angiography image generation for head and neck vessels from motion data","authors":"Yunbi Liu ,&nbsp;Dong Du ,&nbsp;Yun Liu ,&nbsp;Shengxian Tu ,&nbsp;Wei Yang ,&nbsp;Xiaoguang Han ,&nbsp;Shiteng Suo ,&nbsp;Qingshan Liu","doi":"10.1016/j.compmedimag.2025.102512","DOIUrl":"10.1016/j.compmedimag.2025.102512","url":null,"abstract":"<div><div>Digital subtraction angiography (DSA) is an essential diagnostic tool for analyzing and diagnosing vascular diseases. However, DSA imaging techniques based on subtraction are prone to artifacts due to misalignments between mask and contrast images caused by inevitable patient movements, hindering accurate vessel identification and surgical treatment. While various registration-based algorithms aim to correct these misalignments, they often fall short in efficiency and effectiveness. Recent deep learning (DL)-based studies aim to generate synthetic DSA images directly from contrast images, free of subtraction. However, these methods typically require clean, motion-free training data, which is challenging to acquire in clinical settings. As a result, existing DSA images often contain motion-affected artifacts, complicating the development of models for generating artifact-free images. In this work, we propose an innovative Artifact-aware DSA image generation method (AaDSA) that utilizes solely motion data to produce artifact-free DSA images without subtraction. Our method employs a Gradient Field Transformation (GFT)-based technique to create an artifact mask that identifies artifact regions in DSA images with minimal manual annotation. This artifact mask guides the training of the AaDSA model, allowing it to bypass the adverse effects of artifact regions during model training. During inference, the AaDSA model can automatically generate artifact-free DSA images from single contrast images without any human intervention. Experimental results on a real head-and-neck DSA dataset show that our approach significantly outperforms state-of-the-art methods, highlighting its potential for clinical use.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102512"},"PeriodicalIF":5.4,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prior knowledge-based multi-task learning network for pulmonary nodule classification
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-16 | DOI: 10.1016/j.compmedimag.2025.102511
Peng Xue , Hang Lu , Yu Fu , Huizhong Ji , Meirong Ren , Taohui Xiao , Zhili Zhang , Enqing Dong
The morphological characteristics of pulmonary nodules, also known as attributes, are crucial for the classification of benign and malignant nodules. In clinical practice, radiologists usually conduct a comprehensive analysis of the correlations between different attributes to accurately judge whether pulmonary nodules are benign or malignant. However, most pulmonary nodule classification models ignore the inherent correlations between different attributes, leading to unsatisfactory classification performance. To address these problems, we propose a prior knowledge-based multi-task learning (PK-MTL) network for pulmonary nodule classification. Specifically, the correlations between different attributes are treated as prior knowledge and established through multi-order task transfer learning. The complex correlations between attributes are then encoded into a hypergraph structure, and a hypergraph neural network is leveraged to learn the correlation representation. In parallel, a multi-task learning framework is constructed for joint segmentation, benign–malignant classification, and attribute scoring of pulmonary nodules, aiming to comprehensively improve classification performance. To embed the prior knowledge into the multi-task learning framework, a feature fusion block is designed to organically integrate image-level features with the attribute prior knowledge. In addition, a channel-wise cross attention block is constructed to fuse encoder and decoder features and further improve segmentation performance. Extensive experiments on the LIDC-IDRI dataset show that the proposed method achieves 91.04% accuracy in diagnosing malignant nodules, obtaining state-of-the-art results.
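The joint objective described above can be illustrated as a weighted sum of segmentation, classification, and attribute-scoring losses, as sketched below in PyTorch. The hypergraph prior and the fusion/attention blocks are omitted; loss weights and head shapes are illustrative assumptions.

```python
# Multi-task objective sketch (assumed weights and head shapes): one shared
# encoder would feed a segmentation head, a benign/malignant head, and an
# attribute-scoring head; the three losses are combined with fixed weights.
import torch
import torch.nn as nn

seg_loss_fn = nn.BCEWithLogitsLoss()     # nodule mask
cls_loss_fn = nn.CrossEntropyLoss()      # benign vs. malignant
attr_loss_fn = nn.MSELoss()              # attribute scores (e.g., spiculation, margin)

def multitask_loss(seg_logits, seg_gt, cls_logits, cls_gt, attr_pred, attr_gt,
                   w_seg=1.0, w_cls=1.0, w_attr=0.5):
    return (w_seg * seg_loss_fn(seg_logits, seg_gt)
            + w_cls * cls_loss_fn(cls_logits, cls_gt)
            + w_attr * attr_loss_fn(attr_pred, attr_gt))

# placeholder tensors standing in for the outputs of the three heads
loss = multitask_loss(
    torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4, 1, 64, 64)).float(),
    torch.randn(4, 2), torch.randint(0, 2, (4,)),
    torch.randn(4, 8), torch.rand(4, 8),
)
```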
{"title":"Prior knowledge-based multi-task learning network for pulmonary nodule classification","authors":"Peng Xue ,&nbsp;Hang Lu ,&nbsp;Yu Fu ,&nbsp;Huizhong Ji ,&nbsp;Meirong Ren ,&nbsp;Taohui Xiao ,&nbsp;Zhili Zhang ,&nbsp;Enqing Dong","doi":"10.1016/j.compmedimag.2025.102511","DOIUrl":"10.1016/j.compmedimag.2025.102511","url":null,"abstract":"<div><div>The morphological characteristics of pulmonary nodule, also known as the attributes, are crucial for classification of benign and malignant nodules. In clinical, radiologists usually conduct a comprehensive analysis of correlations between different attributes, to accurately judge pulmonary nodules are benign or malignant. However, most of pulmonary nodule classification models ignore the inherent correlations between different attributes, leading to unsatisfactory classification performance. To address these problems, we propose a prior knowledge-based multi-task learning (PK-MTL) network for pulmonary nodule classification. To be specific, the correlations between different attributes are treated as prior knowledge, and established through multi-order task transfer learning. Then, the complex correlations between different attributes are encoded into hypergraph structure, and leverage hypergraph neural network for learning the correlation representation. On the other hand, a multi-task learning framework is constructed for joint segmentation, benign–malignant classification and attribute scoring of pulmonary nodules, aiming to improve the classification performance of pulmonary nodules comprehensively. In order to embed prior knowledge into multi-task learning framework, a feature fusion block is designed to organically integrate image-level features with attribute prior knowledge. In addition, a channel-wise cross attention block is constructed to fuse the features of encoder and decoder, to further improve the segmentation performance. Extensive experiments on LIDC-IDRI dataset show that our proposed method can achieve 91.04% accuracy for diagnosing malignant nodules, obtaining the state-of-art results.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102511"},"PeriodicalIF":5.4,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143427975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Z-SSMNet: Zonal-aware Self-supervised Mesh Network for prostate cancer detection and diagnosis with Bi-parametric MRI
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-15 | DOI: 10.1016/j.compmedimag.2025.102510
Yuan Yuan , Euijoon Ahn , Dagan Feng , Mohamed Khadra , Jinman Kim
Bi-parametric magnetic resonance imaging (bpMRI) has become a pivotal modality in the detection and diagnosis of clinically significant prostate cancer (csPCa). Developing AI-based systems to identify csPCa using bpMRI can transform prostate cancer (PCa) management by improving efficiency and cost-effectiveness. However, current state-of-the-art methods using convolutional neural networks (CNNs) and Transformers are limited in learning in-plane and three-dimensional spatial information from anisotropic bpMRI. Their performance also depends on the availability of large, diverse, and well-annotated bpMRI datasets. To address these challenges, we propose the Zonal-aware Self-supervised Mesh Network (Z-SSMNet), which adaptively integrates multi-dimensional (2D/2.5D/3D) convolutions to learn dense intra-slice information and sparse inter-slice information from the anisotropic bpMRI in a balanced manner. We also propose a self-supervised learning (SSL) technique that effectively captures both intra-slice and inter-slice semantic information using large-scale unlabeled data. Furthermore, we constrain the network to focus on the zonal anatomical regions to improve the detection and diagnosis of csPCa. We conducted extensive experiments on the PI-CAI (Prostate Imaging - Cancer AI) dataset comprising more than 10,000 multi-center, multi-scanner cases. Our Z-SSMNet excelled in both lesion-level detection (AP score of 0.633) and patient-level diagnosis (AUROC score of 0.881), securing the top position in the Open Development Phase of the PI-CAI challenge, and maintained strong performance in the Closed Testing Phase with an AP score of 0.690 and an AUROC score of 0.909, securing the second-place ranking. These findings underscore the potential of AI-driven systems for csPCa diagnosis and management.
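The multi-dimensional convolution idea can be pictured as mixing an in-plane (1x3x3) kernel with a full 3D (3x3x3) kernel on the anisotropic volume, as in the PyTorch sketch below. Channel sizes and the additive fusion are assumptions for illustration, not the Z-SSMNet design.

```python
# Mixed-dimension convolution sketch (assumed channel sizes and fusion rule):
# a 2D-style (1x3x3) kernel captures dense intra-slice detail, a 3D (3x3x3)
# kernel captures sparse inter-slice context, and the two are summed.
import torch
import torch.nn as nn

class MixedDimBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.intra = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.inter = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 3), padding=1)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):                     # x: (B, C, D, H, W), D = slice direction
        return self.act(self.intra(x) + self.inter(x))

vol = torch.randn(1, 1, 24, 160, 160)         # e.g., 24 thick slices of anisotropic bpMRI
feat = MixedDimBlock(1, 16)(vol)              # (1, 16, 24, 160, 160)
```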
{"title":"Z-SSMNet: Zonal-aware Self-supervised Mesh Network for prostate cancer detection and diagnosis with Bi-parametric MRI","authors":"Yuan Yuan ,&nbsp;Euijoon Ahn ,&nbsp;Dagan Feng ,&nbsp;Mohamed Khadra ,&nbsp;Jinman Kim","doi":"10.1016/j.compmedimag.2025.102510","DOIUrl":"10.1016/j.compmedimag.2025.102510","url":null,"abstract":"<div><div>Bi-parametric magnetic resonance imaging (bpMRI) has become a pivotal modality in the detection and diagnosis of clinically significant prostate cancer (csPCa). Developing AI-based systems to identify csPCa using bpMRI can transform prostate cancer (PCa) management by improving efficiency and cost-effectiveness. However, current state-of-the-art methods using convolutional neural networks (CNNs) and Transformers are limited in learning in-plane and three-dimensional spatial information from anisotropic bpMRI. Their performances also depend on the availability of large, diverse, and well-annotated bpMRI datasets. To address these challenges, we propose the Zonal-aware Self-supervised Mesh Network (Z-SSMNet), which adaptively integrates multi-dimensional (2D/2.5D/3D) convolutions to learn dense intra-slice information and sparse inter-slice information of the anisotropic bpMRI in a balanced manner. We also propose a self-supervised learning (SSL) technique that effectively captures both intra-slice and inter-slice semantic information using large-scale unlabeled data. Furthermore, we constrain the network to focus on the zonal anatomical regions to improve the detection and diagnosis capability of csPCa. We conducted extensive experiments on the PI-CAI (Prostate Imaging - Cancer AI) dataset comprising 10000+ multi-center and multi-scanner data. Our Z-SSMNet excelled in both lesion-level detection (AP score of 0.633) and patient-level diagnosis (AUROC score of 0.881), securing the top position in the Open Development Phase of the PI-CAI challenge and maintained strong performance, achieving an AP score of 0.690 and an AUROC score of 0.909, and securing the second-place ranking in the Closed Testing Phase. These findings underscore the potential of AI-driven systems for csPCa diagnosis and management.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"122 ","pages":"Article 102510"},"PeriodicalIF":5.4,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Joint Lesion Detection by enhancing local feature interaction
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-10 | DOI: 10.1016/j.compmedimag.2025.102509
Yaqi Liu , Tingting Wang , Li Yang , Jianhong Wu , Tao He
Recently, deep learning models have demonstrated impressive performance in Automatic Joint Lesion Detection (AJLD), yet balancing accuracy and efficiency remains a significant challenge. This paper focuses on achieving end-to-end lesion detection while improving accuracy to meet clinical requirements. To enhance the overall performance of AJLD, we propose two novel modules: Local Attention Feature Fusion (LAFF) and Gaussian Positional Encoding (GPE). These modules are integrated throughout YOLO, yielding an improved YOLO model with enhanced local feature interaction, named YOLOlf for short. The LAFF module, based on the pathological features presented by arthritis, strengthens the implicit connections between joints by acquiring local attention information. The GPE module enhances the connections between joints by encoding their local positional information. We validate our approach on two arthritis datasets, including the largest AJLD dataset in the literature (960 X-ray images annotated by two arthritis specialists and one radiologist) and another arthritis dataset with 216 X-ray images, supplemented by the MURA dataset, a more general dataset for abnormality detection in musculoskeletal radiographs. Across various YOLO series, the improved YOLOlf shows a significant increase in detection accuracy. Taking YOLOv8 as an example, the improved YOLOlf-v8 increases mAP@50 from 0.765 to 0.785 and from 0.831 to 0.859 on the two arthritis datasets, demonstrating the plug-and-play nature and clinical applicability of the proposed LAFF and GPE modules.
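One plausible reading of a Gaussian positional encoding, sketched below, turns each detected joint centre into a Gaussian heatmap that can be concatenated to the feature map so nearby joints influence each other through position. The sigma, resolution, and the way such an encoding would be injected into YOLO are assumptions; the paper's GPE module may differ.

```python
# Gaussian positional encoding sketch (an assumption about what "GPE" could
# look like): each joint centre becomes a Gaussian heatmap, usable as an extra
# positional channel alongside image features.
import torch

def gaussian_heatmap(centers, size=64, sigma=4.0):
    """centers: (N, 2) joint coordinates in pixels; returns (N, size, size) heatmaps."""
    ys = torch.arange(size).view(1, size, 1).float()
    xs = torch.arange(size).view(1, 1, size).float()
    cy = centers[:, 1].view(-1, 1, 1)
    cx = centers[:, 0].view(-1, 1, 1)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

joints = torch.tensor([[16.0, 20.0], [40.0, 44.0]])   # two hypothetical joint centres
heatmaps = gaussian_heatmap(joints)                    # (2, 64, 64), peak value 1 at each centre
pos_channel = heatmaps.sum(dim=0, keepdim=True)        # single extra channel to concatenate
```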
{"title":"Automatic Joint Lesion Detection by enhancing local feature interaction","authors":"Yaqi Liu ,&nbsp;Tingting Wang ,&nbsp;Li Yang ,&nbsp;Jianhong Wu ,&nbsp;Tao He","doi":"10.1016/j.compmedimag.2025.102509","DOIUrl":"10.1016/j.compmedimag.2025.102509","url":null,"abstract":"<div><div>Recently, deep learning models have demonstrated impressive performance in Automatic Joint Lesion Detection (AJLD), yet balancing accuracy and efficiency remains a significant challenge. This paper focuses on achieving end-to-end lesion detection while improving accuracy to meet clinical requirements. To enhance the overall performance of AJLD, we propose novel modules: Local Attention Feature Fusion (LAFF) and Gaussian Positional Encoding (GPE). These modules are extensively integrated into YOLO, resulting in an improved YOLO model by enhancing <strong>L</strong>ocal <strong>F</strong>eature interaction, named <span><math><msub><mrow><mi>YOLO</mi></mrow><mrow><mi>lf</mi></mrow></msub></math></span> for short. The LAFF module, based on pathological features presented by arthritis, strengthens the implicit connections between joints by acquiring local attention information. The GPE module enhances the connections between joints by encoding their local positional information. In this paper, we validate our approach using two arthritis datasets, including the largest AJLD dataset in the literature (960 X-ray images annotated by two arthritis specialists and one radiologist) and another arthritis dataset with 216 X-ray images, supplemented by the MURA dataset, a more general dataset for abnormality detection in musculoskeletal radiographs. In various series of YOLO models, the improved <span><math><msub><mrow><mi>YOLO</mi></mrow><mrow><mi>lf</mi></mrow></msub></math></span> shows a significant increase in detection accuracy. Taking YOLOv8 as an example, the improved <span><math><mrow><msub><mrow><mi>YOLO</mi></mrow><mrow><mi>lf</mi></mrow></msub><mi>v8</mi></mrow></math></span> increases mAP@50 from 0.765 to 0.785 and from 0.831 to 0.859 on two arthritis datasets, demonstrating the plug-and-play nature and clinical applicability of the proposed LAFF and GPE modules.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102509"},"PeriodicalIF":5.4,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TQGDNet: Coronary artery calcium deposit detection on computed tomography
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-06 | DOI: 10.1016/j.compmedimag.2025.102503
Wei-Chien Wang , Christopher Yu , Euijoon Ahn , Shahab Pathan , Kazuaki Negishi , Jinman Kim
Coronary artery disease (CAD) continues to be a leading global cause of cardiovascular mortality. The scoring of coronary artery calcium (CAC) on computed tomography (CT) images is a diagnostic instrument for evaluating the risk of asymptomatic individuals prone to atherosclerotic cardiovascular disease. State-of-the-art automated CAC scoring methods rely on large annotated datasets to train convolutional neural network (CNN) models. However, these methods do not integrate features across different levels and layers of the CNN, particularly in the lower layers where important information about small calcium regions is present. In this study, we propose a new CNN model specifically designed to capture features associated with small regions and their surroundings in low-contrast CT images. Our model integrates a specifically designed low-contrast detection module and two fusion modules that focus on the lower layers of the network to connect deeper and wider neurons (or nodes) across multiple adjacent levels. Our first module, called ThrConvs, includes three convolution blocks tailored to detecting objects in low-contrast images. Following this, two fusion modules are introduced: (i) Queen-fusion (Qf), which introduces a cross-scale feature method to fuse features from multiple adjacent levels and layers, and (ii) a lower-layer Gather-and-Distribute (GD) module, which focuses on learning comprehensive features associated with small calcium deposits and their surroundings. We demonstrate superior performance on the public OrCaScore dataset, which encompasses 269 calcium deposits, surpassing previous state-of-the-art works, and achieve a notable 2.3–3.6% improvement in mean Pixel Accuracy (mPA) on both the private Concord dataset and the public OrCaScore dataset compared with established detection methods.
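The cross-scale fusion described above can be illustrated by upsampling a coarse feature map and merging it with a fine one, as in the PyTorch sketch below. The exact Queen-fusion and gather-and-distribute wiring in TQGDNet is more elaborate; channel sizes and the 1x1 merge are assumptions.

```python
# Cross-scale fusion sketch (assumed channels and merge rule): upsample the
# coarser map, concatenate with the finer one, and mix with a 1x1 convolution,
# so small low-contrast deposits keep low-level detail plus deeper context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleFuse(nn.Module):
    def __init__(self, fine_ch, coarse_ch, out_ch):
        super().__init__()
        self.merge = nn.Conv2d(fine_ch + coarse_ch, out_ch, kernel_size=1)
    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                                  align_corners=False)
        return self.merge(torch.cat([fine, coarse_up], dim=1))

fine = torch.randn(1, 64, 80, 80)      # shallow, high-resolution features
coarse = torch.randn(1, 128, 40, 40)   # deeper, low-resolution features
fused = CrossScaleFuse(64, 128, 64)(fine, coarse)
```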
{"title":"TQGDNet: Coronary artery calcium deposit detection on computed tomography","authors":"Wei-Chien Wang ,&nbsp;Christopher Yu ,&nbsp;Euijoon Ahn ,&nbsp;Shahab Pathan ,&nbsp;Kazuaki Negishi ,&nbsp;Jinman Kim","doi":"10.1016/j.compmedimag.2025.102503","DOIUrl":"10.1016/j.compmedimag.2025.102503","url":null,"abstract":"<div><div>Coronary artery disease (CAD) continues to be a leading global cause of cardiovascular related mortality. The scoring of coronary artery calcium (CAC) using computer tomography (CT) images is a diagnostic instrument for evaluating the risk of asymptomatic individuals prone to atherosclerotic cardiovascular disease. State-of-the-art automated CAC scoring methods rely on large annotated datasets to train convolutional neural network (CNN) models. However, these methods do not integrate features across different levels and layers of the CNN, particularly in the lower layers where important information regarding small calcium regions are present. In this study, we propose a new CNN model specifically designed to effectively capture features associated with small regions and their surrounding areas in low-contrast CT images. Our model integrates a specifically designed low-contrast detection module and two fusion modules focusing on the lower layers of the network to connect more deeper and wider neurons (or nodes) across multiple adjacent levels. Our first module, called ThrConvs, includes three convolution blocks tailored to detecting objects in images characterized by low contrast. Following this, two fusion modules are introduced: (i) Queen-fusion (Qf), which introduces a cross-scale feature method to fuse features from multiple adjacent levels and layers and, (ii) lower-layer Gather-and-Distribute (GD) module, which focuses on learning comprehensive features associated with small-sized calcium deposits and their surroundings. We demonstrate superior performance of our model using the public OrCaScore dataset, encompassing 269 calcium deposits, surpassing the capabilities of previous state-of-the-art works. We demonstrate the enhanced performance of our approach, achieving a notable 2.3–3.6 % improvement in mean Pixel Accuracy (mPA) on both the private Concord dataset and the public OrCaScore dataset, surpassing the capabilities of established detection methods.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102503"},"PeriodicalIF":5.4,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CGNet: Few-shot learning for Intracranial Hemorrhage Segmentation
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-05 | DOI: 10.1016/j.compmedimag.2025.102505
Wanyuan Gong , Yanmin Luo , Fuxing Yang , Huabiao Zhou , Zhongwei Lin , Chi Cai , Youcao Lin , Junyan Chen
In recent years, with increasing attention from researchers towards medical imaging, deep learning-based image segmentation techniques have become mainstream in the field, but they require large amounts of manually annotated data. Annotating datasets for intracranial hemorrhage (ICH) is particularly tedious and costly, so few-shot segmentation holds significant potential for medical imaging. In this work, we designed a novel segmentation model, CGNet, that leverages a limited dataset to segment ICH regions. We propose a Cross Feature Module (CFM) that enhances the understanding of lesion details by facilitating interaction between feature information from the query and support sets, and a Support Guide Query (SGQ) module that refines segmentation targets by integrating features from the support and query sets at different scales, preserving the integrity of target feature information while further enhancing segmentation detail. We first propose transforming the ICH segmentation task into a few-shot learning problem. We evaluated our model on the publicly available BHSD dataset and the private IHSAH dataset. Our approach outperforms current state-of-the-art few-shot segmentation models by 3% and 1.8% in Dice coefficient, respectively, and also exceeds the performance of fully supervised segmentation models trained with the same amount of data.
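The query-support interaction at the heart of the abstract can be sketched as cross-attention from query tokens to support tokens, as below. This is a single-scale simplification with assumed shapes and head count; the paper's CFM and SGQ modules are richer.

```python
# Query-support cross-attention sketch (assumed shapes): query patch tokens
# attend to support patch tokens so the few annotated support scans guide
# segmentation of the query scan.
import torch
import torch.nn as nn

class QuerySupportCrossAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
    def forward(self, query_feat, support_feat):
        # query_feat: (B, Nq, dim), support_feat: (B, Ns, dim)
        out, _ = self.attn(query=query_feat, key=support_feat, value=support_feat)
        return out + query_feat                 # residual keeps the query's own features

q = torch.randn(1, 256, 64)    # flattened query feature tokens
s = torch.randn(1, 256, 64)    # flattened support feature tokens (labels used upstream)
enriched = QuerySupportCrossAttention(64)(q, s)
```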
{"title":"CGNet: Few-shot learning for Intracranial Hemorrhage Segmentation","authors":"Wanyuan Gong ,&nbsp;Yanmin Luo ,&nbsp;Fuxing Yang ,&nbsp;Huabiao Zhou ,&nbsp;Zhongwei Lin ,&nbsp;Chi Cai ,&nbsp;Youcao Lin ,&nbsp;Junyan Chen","doi":"10.1016/j.compmedimag.2025.102505","DOIUrl":"10.1016/j.compmedimag.2025.102505","url":null,"abstract":"<div><div>In recent years, with the increasing attention from researchers towards medical imaging, deep learning-based image segmentation techniques have become mainstream in the field, requiring large amounts of manually annotated data. Annotating datasets for Intracranial Hemorrhage(ICH) is particularly tedious and costly. Few-shot segmentation holds significant potential for medical imaging. In this work, we designed a novel segmentation model CGNet to leverage a limited dataset for segmenting ICH regions, we propose a Cross Feature Module (CFM) enhances the understanding of lesion details by facilitating interaction between feature information from the query and support sets and Support Guide Query (SGQ) refines segmentation targets by integrating features from support and query sets at different scales, preserving the integrity of target feature information while further enhancing segmentation detail. We first propose transforming the ICH segmentation task into a few-shot learning problem. We evaluated our model using the publicly available BHSD dataset and the private IHSAH dataset. Our approach outperforms current state-of-the-art few-shot segmentation models, outperforming methods of 3% and 1.8% in Dice coefficient scores, respectively, and also exceeds the performance of fully supervised segmentation models with the same amount of data.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102505"},"PeriodicalIF":5.4,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143234846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer
IF 5.4 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-03 | DOI: 10.1016/j.compmedimag.2025.102502
Jun Shi , Dongdong Sun , Zhiguo Jiang , Jun Du , Wei Wang , Yushan Zheng , Haibo Wu
Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically performed by pathologists through microscopic observation of immunohistochemistry (IHC) images, which is labor-intensive and introduces observational biases among different pathologists. Most existing methods use hand-crafted features or deep learning models on a single modality (hematoxylin and eosin (H&E) or IHC) to predict HER2 scores through supervised or weakly supervised learning. Consequently, information from different modalities is not effectively integrated into feature learning, even though it could help improve HER2 scoring performance. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict HER2 scores in BC at the whole slide image (WSI) level. It leverages multi-modal (H&E and IHC) joint learning under the weak supervision of the WSI label to achieve HER2 score prediction. Specifically, patch features within the H&E and IHC WSIs are extracted separately, and multi-head self-attention (MHSA) is used to explore the global dependencies of the patches within each modality. The patch features corresponding to the top-k and bottom-k attention scores generated by MHSA in each modality are selected as candidates for multi-modal joint learning. In particular, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee the semantic alignment of the candidate features from different modalities. Extensive experiments demonstrate that the proposed WSMCL achieves better HER2 scoring performance and outperforms state-of-the-art methods. The code is available at https://github.com/HFUT-miaLab/WSMCL.
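Two ingredients of the framework, top-k/bottom-k candidate selection and cross-modal alignment, can be sketched as below. The attention scores, feature dimensions, and InfoNCE-style loss with its temperature are illustrative assumptions rather than the paper's exact MACL formulation.

```python
# Candidate selection + contrastive alignment sketch (assumed scores, dims,
# and loss form): pick the top-k and bottom-k patches by attention score in
# each modality, then pull matched H&E / IHC candidates together with an
# InfoNCE-style objective.
import torch
import torch.nn.functional as F

def select_candidates(patch_feats, attn_scores, k=8):
    """patch_feats: (N, dim), attn_scores: (N,); returns (2k, dim) candidates."""
    top = attn_scores.topk(k).indices
    bottom = (-attn_scores).topk(k).indices
    return patch_feats[torch.cat([top, bottom])]

def info_nce(he_feats, ihc_feats, temperature=0.07):
    """Aligns the i-th H&E candidate with the i-th IHC candidate."""
    he = F.normalize(he_feats, dim=-1)
    ihc = F.normalize(ihc_feats, dim=-1)
    logits = he @ ihc.t() / temperature
    labels = torch.arange(he.size(0))
    return F.cross_entropy(logits, labels)

he_patches, ihc_patches = torch.randn(500, 256), torch.randn(500, 256)  # placeholder patch features
he_attn, ihc_attn = torch.rand(500), torch.rand(500)                    # placeholder attention scores
loss = info_nce(select_candidates(he_patches, he_attn),
                select_candidates(ihc_patches, ihc_attn))
```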
{"title":"Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer","authors":"Jun Shi ,&nbsp;Dongdong Sun ,&nbsp;Zhiguo Jiang ,&nbsp;Jun Du ,&nbsp;Wei Wang ,&nbsp;Yushan Zheng ,&nbsp;Haibo Wu","doi":"10.1016/j.compmedimag.2025.102502","DOIUrl":"10.1016/j.compmedimag.2025.102502","url":null,"abstract":"<div><div>Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically evaluated by pathologist microscopic observation on immunohistochemistry (IHC) images, which is labor-intensive and results in observational biases among different pathologists. Most existing methods generally use hand-crafted features or deep learning models in unimodal (hematoxylin and eosin (H&amp;E) or IHC) to predict HER2 scores through supervised or weakly supervised learning. Consequently, the information from different modalities is not effectively integrated into feature learning which can help improve HER2 scoring performance. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict the HER2 scores in BC at the whole slide image (WSI) level. It aims to leverage multi-modal (H&amp;E and IHC) joint learning under the weak supervision of WSI label to achieve the HER2 score prediction. Specifically, the patch features within H&amp;E and IHC WSIs are respectively extracted and then the multi-head self-attention (MHSA) is used to explore the global dependencies of the patches within each modality. The patch features corresponding to top-k and bottom-k attention scores generated by MHSA in each modality are selected as the candidates for multi-modal joint learning. Particularly, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee the semantic alignment of the candidate features from different modalities. Extensive experiments demonstrate the proposed WSMCL has the better HER2 scoring performance and outperforms the state-of-the-art methods. The code is available at <span><span>https://github.com/HFUT-miaLab/WSMCL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102502"},"PeriodicalIF":5.4,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143234792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0