
Latest Publications in Biomedical Signal Processing and Control

EMCNN: Fine-Grained Emotion Recognition based on PPG using Multi-scale Convolutional Neural Network
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107594
Jiyang Han , Hui Li , Xi Zhang , Yu Zhang , Hui Yang
The objective nature of physiological electrical signals, which are not susceptible to human manipulation, promotes their application in the field of affective computing. However, most existing methods rely on multi-channel or multi-modal signals, which are cumbersome to collect in daily settings, limiting their practical application. In this paper, we propose a photoplethysmography (PPG) based emotion recognition approach that leverages a single-channel signal for fine-grained emotion analysis. To address the inherent simplicity of the PPG signal, an Emotional Multi-scale Convolutional Neural Network (EMCNN) is presented that enriches the feature representation by integrating information from both the time and frequency domains, thereby strengthening feature extraction. Moreover, in addition to binary classification of valence or arousal, a 4-dimensional classification task is also considered to achieve fine-grained emotion recognition. Experiments on the DEAP dataset demonstrate that the proposed method achieves accuracies of 94.2%, 94.0%, and 86.1% for binary valence, binary arousal, and 4-dimensional classification, respectively. Furthermore, it generalizes well, maintaining considerable performance when tested on a self-collected dataset. Successful fine-grained PPG-based emotion recognition will not only facilitate the development of non-invasive wearable emotion monitoring but also pave the way for clinical applications.
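The two-domain input idea, pairing a raw PPG segment with its magnitude spectrum so a multi-branch CNN sees both views, can be sketched as follows (a minimal illustration with hypothetical helper names, not the paper's EMCNN preprocessing):

```python
import cmath
import math

def dft_magnitude(x):
    """Magnitude spectrum of a real signal via a naive DFT (O(N^2))."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def make_two_domain_input(segment):
    """Pair the raw time-domain segment with its frequency-domain magnitude,
    mimicking a two-branch (time + frequency) network input."""
    return {"time": list(segment), "freq": dft_magnitude(segment)}

# Toy PPG-like segment: a 1.5 Hz "pulse" (90 bpm) sampled at 32 Hz for 2 s.
fs, f0, n = 32, 1.5, 64
seg = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
feat = make_two_domain_input(seg)
# With f0 on an exact DFT bin (k = f0 * n / fs = 3), the peak lands at bin 3.
peak = max(range(len(feat["freq"])), key=lambda k: feat["freq"][k])
```

In practice the frequency branch would use an FFT and a window function; the point is only that a single-channel signal can feed the network two complementary views.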
Citations: 0
QRS detection in noisy electrocardiogram using an adaptively regularized numerical differentiation method
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107666
Haoming Yan , Zixian Yang , Jiuwei Gao , Xuewen Wang
QRS detection in noisy electrocardiograms (ECG) often requires computing the signal's numerical derivative without amplifying the noise. This study proposes and applies a numerical differentiation method based on adaptively weighted Tikhonov regularization (AWTR) to QRS detection. By adaptively weighting the terms of the summation in the regularization term, the AWTR-based method can accurately recover details in the derivative of noisy signals while maintaining smoothness. In particular, it performs well on signals whose derivatives are continuous but vary dramatically in some locations. On synthetic ECG signals with added noise, the AWTR-based numerical differentiation method achieves the highest accuracy compared with Tikhonov-regularization and total-variation based methods. Building on this method, a QRS detection algorithm combining wavelet denoising, the Hilbert transform, an absolute-value transform, and an adaptive threshold is developed and evaluated. The algorithm effectively emphasizes QRS complexes in noisy ECG signals while suppressing noise and other waveforms, paving the way for high-accuracy QRS detection. The sensitivity, positive predictivity, and detection error rate of the algorithm on the benchmark MIT-BIH Arrhythmia Database are 99.90%, 99.91%, and 0.20%, respectively, superior to most reported state-of-the-art methods.
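AWTR builds on classical Tikhonov-regularized differentiation. A minimal, unweighted baseline of that idea (without the paper's adaptive weights) estimates the derivative u by fitting its running integral to the data while penalizing roughness:

```python
import numpy as np

def tikhonov_derivative(f, h, lam):
    """Estimate u ~ f' by minimizing ||A u - (f - f[0])||^2 + lam * ||D u||^2,
    where A is a running-sum integration operator and D a first-difference
    roughness penalty (plain Tikhonov; AWTR adds adaptive weights)."""
    n = len(f)
    A = h * np.tril(np.ones((n, n)))                   # left-Riemann integration
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h   # first differences
    rhs = np.asarray(f, dtype=float) - f[0]
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ rhs)

# Noisy f(t) = t^2 on [0, 1]; the true derivative is 2t.
n, h = 101, 0.01
t = np.arange(n) * h
f = t**2 + 0.01 * np.sin(37.0 * np.arange(n))   # deterministic "noise"
u_reg = tikhonov_derivative(f, h, lam=1e-6)
u_naive = np.gradient(f, h)                      # plain finite differences
err_reg = np.abs(u_reg - 2 * t).max()
err_naive = np.abs(u_naive - 2 * t).max()
```

The naive finite difference amplifies the noise by roughly 1/h, while the regularized solve trades a little bias for a much smaller variance; AWTR's adaptive weighting then relaxes the penalty where the true derivative varies sharply.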
Citations: 0
Uncertainty-aware self-training with adversarial data augmentation for semi-supervised medical image segmentation
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107561
Juan Cao , Jiaran Chen , Jinjia Liu , Yuanyuan Gu , Lili Chen
Supervised algorithms require a significant amount of labeled data to ensure effectiveness and robustness. Unfortunately, obtaining segmentation masks annotated by experts is both time-consuming and expensive. Although existing methods use data augmentation to expand the training data, these approaches often improve generalization only slightly. To address this issue, we propose a medical image segmentation framework that leverages unlabeled samples for feature learning to improve segmentation performance. The proposed framework comprises a data augmentation model and a segmentation model. The data augmentation model uses generative adversarial networks to model the spatial and intensity transformations in medical images and generates strongly-augmented samples to expand the training set. The segmentation model is implemented by combining self-training with consistency regularization. First, pseudo-labeling is performed on weakly-augmented samples. Then, consistency regularization encourages the model's predictions on strongly-augmented samples to be consistent with the pseudo-labels, improving robustness on unseen samples. To mitigate the network degradation caused by unreliable pseudo-labels, a new self-training strategy and uncertainty estimation are introduced into the segmentation framework to enhance pseudo-label reliability. The proposed framework is rigorously evaluated on cardiac and prostate image segmentation; the experimental results indicate that it achieves competitive performance compared with several state-of-the-art methods. Moreover, the proposed method supports joint training with limited labeled and additional unlabeled data, potentially reducing the workload of obtaining annotated images.
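The pseudo-labeling and consistency steps described above can be sketched in a few lines (hypothetical function names; a minimal illustration, not the paper's implementation): predictions on weakly-augmented samples are kept only when confident, and a consistency loss then pushes strongly-augmented predictions toward those pseudo-labels.

```python
import math

def confident_pseudo_labels(weak_probs, tau=0.9):
    """Keep a pseudo-label only when the weak-augmentation prediction
    is confident (max class probability >= tau)."""
    kept = []
    for i, p in enumerate(weak_probs):
        c = max(range(len(p)), key=p.__getitem__)
        if p[c] >= tau:
            kept.append((i, c))
    return kept

def consistency_loss(strong_probs, pseudo):
    """Cross-entropy of strong-augmentation predictions against the
    pseudo-labels produced on the weak augmentations."""
    if not pseudo:
        return 0.0
    return -sum(math.log(strong_probs[i][c]) for i, c in pseudo) / len(pseudo)

weak = [[0.97, 0.03], [0.55, 0.45], [0.08, 0.92]]   # confident, unsure, confident
strong = [[0.90, 0.10], [0.50, 0.50], [0.20, 0.80]]
pseudo = confident_pseudo_labels(weak, tau=0.9)      # the unsure sample is dropped
loss = consistency_loss(strong, pseudo)
```

The paper additionally weights or filters pseudo-labels by uncertainty estimates; the threshold here stands in for that mechanism.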
Citations: 0
SK-VM++: Mamba assists skip-connections for medical image segmentation
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107646
Renkai Wu , Liuyue Pan , Pengchen Liang , Qing Chang , Xianjin Wang , Weihuan Fang
In medical image segmentation, the U-shaped structure is the primary framework, and the skip-connection operation within it, which fuses high-level and low-level features, is one of the architecture's highlights. However, traditional U-shaped architectures usually employ direct concatenation or various convolution-based module variants. The recent emergence of Mamba, based on state-space models (SSMs), has shaken up convolution and Transformers, long the foundational building blocks. In this study, we analyze the impact of Mamba on skip-connection operations in U-shaped architectures and propose a novel skip-connection operation (SK-VM++) combining the UNet++ framework and Mamba. Specifically, Mamba refines the fusion of high- and low-level feature information better than traditional convolution. In addition, SK-VM++ leverages Mamba's concatenation properties, making it significantly less sensitive to the growth in computational complexity and parameters caused by increasing the number of channels: as the number of channels increases from 64 to 512, convolution-based FLOPs and parameters rise 8.82 and 6.22 times more, respectively, than those of our proposed Mamba-based skip-connection operation. Compared with the popular nnU-Net and VM-UNet, the DSC of SK-VM++ improves by 2.01% and 1.10% on the ISIC2017 dataset, 1.59% and 9.10% on CVC-ClinicDB, 1.23% and 18.94% on Promise12, and 46.25% and 34.01% on UWF-RHS. The code is available from https://github.com/wurenkai/SK-VMPlusPlus.
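The sensitivity to channel width follows from simple parameter counting: a k×k convolution between C-channel feature maps scales quadratically in C, while a per-channel operator scales linearly. An illustrative count (assumed sizes, not the paper's exact FLOP accounting):

```python
def conv_params(c_in, c_out, k=3):
    """Weights + biases of a standard k x k convolution."""
    return c_in * c_out * k * k + c_out

def channelwise_params(c, d_state=16):
    """Rough per-channel operator cost: each channel carries its own small
    state/kernel of size d_state (illustrative, not Mamba's exact count)."""
    return c * d_state

# Growing channels 64 -> 512: quadratic vs. linear scaling.
conv_ratio = conv_params(512, 512) / conv_params(64, 64)       # ~64x growth
chan_ratio = channelwise_params(512) / channelwise_params(64)  # 8x growth
```

The quadratic C_in * C_out term is why widening a conventional skip-connection is expensive, whereas an operator that mixes along the sequence rather than across all channel pairs grows far more gently.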
Citations: 0
Fuse-Former: An interpretability analysis model for rs-fMRI based on multi-scale information fusion interaction
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2024.107471
Jiayu Ye , Yanting Li , An Zeng , Dan Pan , Alzheimer’s Disease Neuroimaging Initiative
Resting-state functional magnetic resonance imaging (rs-fMRI), a non-invasive neuroimaging technique, is widely used in the auxiliary diagnosis of brain diseases. However, existing deep learning-based methods are often insensitive to multi-scale temporal features, and in particular make poor use of information about blood-oxygen-level-dependent (BOLD) changes over short periods of time. Hence, we propose a brain disease recognition and analysis model for rs-fMRI based on multi-scale information fusion interaction (Fuse-Former). Fuse-Former adopts a global–local architecture. The model divides the brain into regions of interest (ROIs) using an external atlas and extracts regional BOLD response information as feature inputs. The global feature extraction module extracts features from the entire sequence through window information interaction and token fusion. The local feature extraction module introduces a KL-distribution attention mechanism that effectively selects key window time-series features, focusing closely on subtle changes in the BOLD response during the resting state. Moreover, Fuse-Former includes an interpretability module based on clustering, which aggregates, in an unsupervised manner, ROIs in rs-fMRI that have similar effects on disease recognition and analyzes the correlation between ROIs in each cluster. Fuse-Former attains an accuracy of 0.738 and an AUC of 0.798 on ADNI, and an accuracy of 0.743 and an AUC of 0.808 on ABIDE I, substantially outperforming advanced benchmark models. Using the interpretability module, we identify that the Dorsal Attention Network, Limbic Network, and Salience/Ventral Attention Network are particularly influential in ADNI, whereas the Visual Network and Somatomotor Network are more significant in ABIDE I. The experimental results demonstrate that brain network connectivity patterns differ significantly across pathologies. In terms of clustering structure, the ROIs for autism spectrum disorder (ASD) exhibit a more complex feature-space distribution. Code is available at https://github.com/yjy-97/Fuse-Former.
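The clustering-based interpretability idea, grouping ROIs whose signals behave alike, can be illustrated with a simple correlation-threshold grouping (a minimal stand-in for the paper's unsupervised clustering):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def cluster_rois(series, thr=0.9):
    """Union-find grouping: ROIs whose time series correlate above thr
    end up in the same cluster."""
    parent = list(range(len(series)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if pearson(series[i], series[j]) >= thr:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(series)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Toy BOLD-like series: ROI 0 and 1 rise together, ROI 2 is anti-correlated.
rois = [[1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8], [4.0, 3.0, 2.0, 1.0]]
clusters = cluster_rois(rois, thr=0.9)
```

Real rs-fMRI clustering would operate on learned feature importances rather than raw correlations, but the grouping principle, aggregating ROIs with similar behavior and inspecting within-cluster correlation, is the same.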
Citations: 0
Bounding boxes for weakly-supervised breast cancer segmentation in DCE-MRI
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107656
Yuming Zhong , Zeyan Xu , Chu Han , Zaiyi Liu , Yi Wang
Accurate segmentation of cancerous regions in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is crucial for the diagnosis and prognosis assessment of high-risk breast cancer. Deep learning methods have achieved success in this task, but their performance heavily relies on large-scale, fully annotated training data, which are time-consuming and labor-intensive to acquire. To alleviate the annotation effort, we propose a simple yet effective bounding-box-supervised segmentation framework consisting of a primary network and an ancillary network. To fully exploit the bounding box annotations, we first train the ancillary network. Specifically, we integrate a bounding box encoder into the ancillary network to serve as a naive spatial attention mechanism, enhancing feature distinction between voxels inside and outside the bounding box. Additionally, we convert uncertain voxel-wise labels inside the bounding box into accurate projection labels, ensuring a noise-free initial training process. Subsequently, we adopt an alternating optimization scheme in which self-training generates voxel-wise pseudo labels and a regularized loss is optimized to correct potential prediction errors. Finally, we employ knowledge distillation to guide the training of the primary network with the pseudo labels generated by the ancillary network. We evaluate our method on an in-house DCE-MRI dataset containing 461 patients with 561 biopsy-proven breast cancers (mass/non-mass: 319/242). Our method attains a mean Dice value of 81.42%, outperforming the other weakly-supervised methods in our experiments. Notably, for non-mass-like lesions with irregular shapes, our method still generates favorable segmentations, with an average Dice of 79.31%. The code is publicly available at https://github.com/Abner228/weakly_box_breast_cancer_seg.
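One common way to read "projection labels" from a tight bounding box: while voxel labels inside the box are uncertain, the box fully determines the mask's projections onto each axis, since every row and column the box spans must contain at least one foreground voxel. A 2-D sketch of that construction (one plausible interpretation; the paper's exact formulation may differ):

```python
def box_projection_labels(shape, box):
    """Per-row and per-column labels implied by a tight bounding box.

    shape: (H, W) of the image slice.
    box:   (y0, y1, x0, x1), inclusive tight bounds of the lesion.
    A row/column is labeled 1 iff the tight box spans it; those labels are
    exact even though the voxel labels inside the box are unknown."""
    H, W = shape
    y0, y1, x0, x1 = box
    rows = [1 if y0 <= y <= y1 else 0 for y in range(H)]
    cols = [1 if x0 <= x <= x1 else 0 for x in range(W)]
    return rows, cols

# Tiny 6x8 slice with a tight box over rows 2..4 and columns 1..5.
rows, cols = box_projection_labels((6, 8), (2, 4, 1, 5))
```

Training can then supervise, noise-free, the maximum of the predicted probabilities along each row/column against these labels, instead of supervising uncertain per-voxel labels inside the box.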
Citations: 0
Deep learning based coronary artery disease detection and segmentation using ultrasound imaging with adaptive gated SCNN models
IF 4.9 | CAS Tier 2, Medicine | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107637
Anshy Singh , N. Nagabhooshanam , Rakesh Kumar , Rajesh Verma , S. Mohanasundaram , Ramaswamy Manjith , Mohammed shuaib , A. Rajaram
Coronary artery disease (CAD) is a narrowing or obstruction of the coronary arteries, which supply the heart muscle with oxygen-rich blood. This narrowing is usually caused by atherosclerosis, a condition in which plaque, a mixture of fat, cholesterol, and other substances, accumulates inside the arterial walls. The goals of CAD treatment are to control symptoms, stop the disease from progressing, and lower the risk of consequences such as heart attacks and strokes. Common forms of treatment include medication, lifestyle changes, and occasionally surgical procedures. Results show the Adaptive Gated Spatial Convolutional Neural Network model's potential for precise and timely CAD detection from ultrasound imaging. This work emphasizes the importance of deep learning algorithms such as the Adaptive Gated Spatial Convolutional Neural Network for CAD diagnosis, especially with ultrasound imaging. By enhancing CAD diagnosis and risk classification, the proposed method offers a viable path toward better patient outcomes and more efficient treatment. The model outperforms other techniques, with an accuracy of 95.45%, sensitivity of 90.45%, specificity of 94.36%, and ROC of 94.56%. Further validation and clinical testing are necessary to demonstrate the generalizability and clinical value of the technique across a range of patient demographics.
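For reference, metrics like those reported here come directly from the confusion matrix; a minimal sketch with hypothetical counts (not the paper's evaluation code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall of the diseased class), and
    specificity (recall of the healthy class) from confusion counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts on a 200-case test set.
m = classification_metrics(tp=90, fp=6, tn=94, fn=10)
```

Sensitivity and specificity trade off against each other as the decision threshold moves, which is what the ROC curve summarizes.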
Citations: 0
Efficient Parkinson’s disease classification from dynamic foot pressure data: A combined approach of clustering and feature selection
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-11 DOI: 10.1016/j.bspc.2025.107654
Shupei Jiao, Hua Huo, Wei Liu, Changwei Zhao, Lan Ma, Jinxuan Wang, Dongfang Li
In this paper, we focus on diagnosing Parkinson’s disease using dynamic plantar pressure data collected via sensor devices. Our approach comprises a comprehensive set of preprocessing techniques, including data cleaning, constrained clustering, dimensionality reduction, and multichannel feature screening, which convert the sensor data into a multichannel multivariate time series suitable for neural network input. Unlike current methods that feed all features to the network for automatic filtering, adding complexity and resource burden, we introduce a data analysis method combining statistical features and Recursive Feature Elimination. This reduces the number of channels and simplifies the model. Using a simplified 1D-convnet model, we achieve a 10-fold accuracy of 91.09%, segmentation accuracy of 95.54%, individual accuracy of 97.33%, weighted precision of 95.71%, weighted recall of 95.56%, and a weighted F1-score of 95.61%. Our results validate the effectiveness of our data acquisition and feature screening methods, and notably, our processing is nearly three times faster.
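Recursive Feature Elimination, as used above for channel screening, repeatedly fits a model and drops the feature with the weakest coefficient. The paper's exact estimator is not stated, so this is a hand-rolled sketch using ordinary least squares on standardized synthetic features; variable names and the toy data are assumptions for illustration.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination: refit least squares on the surviving
    columns and drop the one with the smallest-magnitude coefficient."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(coef))))
    return keep

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
# Only channels 0 and 3 carry signal; the other four are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
selected = rfe(X, y, n_keep=2)
print(selected)  # expected to retain [0, 3]
```

Because coefficients are compared by magnitude, features should be on a comparable scale (here they are all standard normal); in practice one would standardize the channels first.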
{"title":"Efficient Parkinson’s disease classification from dynamic foot pressure data: A combined approach of clustering and feature selection","authors":"Shupei Jiao,&nbsp;Hua Huo,&nbsp;Wei Liu,&nbsp;Changwei Zhao,&nbsp;Lan Ma,&nbsp;Jinxuan Wang,&nbsp;Dongfang Li","doi":"10.1016/j.bspc.2025.107654","DOIUrl":"10.1016/j.bspc.2025.107654","url":null,"abstract":"<div><div>In this paper, we focus on diagnosing Parkinson’s patients using dynamic plantar pressure data collected via sensor devices. We employ data preprocessing methods, including clustering, dimensionality reduction, and multichannel feature screening. Our approach proposes a comprehensive set of data processing techniques, including data cleaning, constrained clustering, and dimensionality reduction, to convert sensor data into a multichannel multivariate time series suitable for neural network input. Unlike current methods that use all features for automatic filtering by the network — adding complexity and resource burden — we introduce a data analysis method combining statistical features and Recursive Feature Elimination. This reduces the number of channels and simplifies the model. We used a simplified 1D-convnet model, achieving a 10-fold accuracy of 91.09%, segmentation accuracy of 95.54%, individual accuracy of 97.33%, weighted precision of 95.71%, weighted recall of 95.56%, and a weighted F1-score of 95.61%. 
Our results validate the effectiveness of our data acquisition and feature screening methods, and notably, our processing speed is nearly three times faster.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107654"},"PeriodicalIF":4.9,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified multi-protocol MRI for Alzheimer’s disease diagnosis: Dual-decoder adversarial autoencoder and ensemble residual shrinkage attention network
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-11 DOI: 10.1016/j.bspc.2025.107660
Shiyao Li, Shukuan Lin, Yue Tu, Jianzhong Qiao, Shenao Xiao
Magnetic Resonance Imaging (MRI) has emerged as a critical tool in Alzheimer’s Disease (AD) clinical research, owing to its exceptional soft tissue contrast and high-resolution 3D imaging capabilities. Despite its advantages, current diagnostic models often overlook the potential of multi-protocol MRI, leading to limited clinical applicability and practical challenges in generalizing to diverse acquisition protocols. Furthermore, existing multi-protocol models lack a robust method for effectively aligning MRI images, resulting in inefficient models due to inconsistencies across protocols. To address these limitations, we propose a novel approach utilizing unified multi-protocol MRIs for AD diagnosis. Specifically, we introduce a dual-decoder adversarial autoencoder (DDAAE) to align MRIs from different protocols. The aligned MRI images are then integrated into our proposed ensemble residual soft shrinkage threshold attention (ERS2TA) diagnostic network for disease diagnosis. This framework not only leverages multi-protocol MRI images but also emphasizes disease-relevant regions while minimizing the impact of noise on diagnostic accuracy. Experimental evaluations on the ADNI dataset demonstrate superior performance in both the AD vs. Normal Controls (NC) classification task and the stable mild cognitive impairment (sMCI) vs. progressive mild cognitive impairment (pMCI) classification task, surpassing existing state-of-the-art methods.
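The dual-decoder idea can be sketched in a few lines: one shared encoder maps either protocol into a common latent space, and two protocol-specific decoders reconstruct from it. This forward-pass-only numpy sketch uses assumed toy dimensions and omits the adversarial discriminator and all training; it is not the paper's DDAAE implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def dense(n_in, n_out):
    """Random weight matrix standing in for a trained fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1

def relu(z):
    return np.maximum(z, 0.0)

# One shared encoder, two protocol-specific decoders.
W_enc = dense(64, 16)
W_dec_a = dense(16, 64)   # decoder for protocol A
W_dec_b = dense(16, 64)   # decoder for protocol B

def encode(x):
    return relu(x @ W_enc)

x_a = rng.standard_normal((4, 64))   # batch of flattened protocol-A slices
z = encode(x_a)                      # protocol-agnostic latent codes
recon_a = z @ W_dec_a                # same-protocol reconstruction
recon_b = z @ W_dec_b                # cross-protocol "translation"
print(z.shape, recon_a.shape, recon_b.shape)  # (4, 16) (4, 64) (4, 64)
```

In the full adversarial setup, a discriminator would push the latent codes of both protocols toward the same distribution, which is what makes the downstream diagnostic network protocol-agnostic.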
{"title":"Unified multi-protocol MRI for Alzheimer’s disease diagnosis: Dual-decoder adversarial autoencoder and ensemble residual shrinkage attention network","authors":"Shiyao Li,&nbsp;Shukuan Lin,&nbsp;Yue Tu,&nbsp;Jianzhong Qiao,&nbsp;Shenao Xiao","doi":"10.1016/j.bspc.2025.107660","DOIUrl":"10.1016/j.bspc.2025.107660","url":null,"abstract":"<div><div>Magnetic Resonance Imaging (MRI) has emerged as a critical tool in Alzheimer’s Disease (AD) clinical research, owing to its exceptional soft tissue contrast and high-resolution 3D imaging capabilities. Despite its advantages, current diagnostic models often overlook the potential of multi-protocol MRI imaging, leading to limited clinical applicability and practical challenges in generalizing to diverse data protocols. Furthermore, existing multi-protocol models lack a robust method for effectively aligning MRI images, resulting in model inefficient due to inconsistencies across protocols. To address these limitations, we propose a novel approach utilizing unified multi-protocol MRIs for AD diagnosis. Specifically, we introduce a double decoder adversarial autoencoder (DDAAE) to align MRIs from different protocols. The aligned MRI images are then integrated into our proposed ensemble residual soft shrinkage threshold attention (ERS<sup>2</sup>TA) diagnostic network for disease diagnosis. This framework not only leverages multi-protocol MRI images but also emphasizes disease-relevant regions while minimizing the impact of noise on diagnostic accuracy. Experimental evaluations on the ADNI dataset demonstrate superior performance in both the AD vs. Normal Controls (NC) classification task and the stable mild cognitive impairment (sMCI) vs. 
progressive mild cognitive impairment (pMCI) classification task, surpassing existing state-of-the-art methods.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107660"},"PeriodicalIF":4.9,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scribble-supervised medical image segmentation based on dynamically generated pseudo labels via multi-scale superpixels
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-11 DOI: 10.1016/j.bspc.2025.107668
Zhixun Li, Jiancheng Fang, Ruiyun Qiu, Huiling Gong
Nowadays, deep learning-based training increasingly requires adequate pixel-level labeled data. In the context of medical image segmentation, however, accurate pixel-level labeling of lesion edges remains a significant challenge for annotators. At the same time, medical image segmentation with weak annotations is one of the most difficult tasks, because the weak annotations cover only a small portion of the image and contain little relevant information. To maximize the utility of weak annotations, we propose a novel segmentation method that relies on scribble annotations. Using multi-scale superpixels and deep features from the U-Net, the proposed method iteratively expands and generates pseudo labels with higher accuracy and richer information. This process involves similarity calculations, a dynamic adjustment mechanism, and multi-scale refinement for epoch-wise network training. Thus, through the step-wise expansion of high-confidence pseudo labels and the elimination of low-confidence ones, the performance of the method gradually approaches that of some fully-supervised methods. Our method outperforms other weakly annotated segmentation methods on the ACDC and ISIC2018 datasets, as shown by extensive experiments. The results show that the segmentation performance of the proposed network improves by approximately 1.8%, 3.5%, and 1.6% in IoU, CPA, and Dice, respectively, while the 95% Hausdorff distance (HD95) decreases by approximately 0.8. Furthermore, ablation experiments confirm the effectiveness of each component of our method.
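The core pseudo-label step (expanding sparse scribbles to whole superpixels) can be sketched at a single scale as follows. The superpixel map would normally come from an algorithm such as SLIC; here a hand-made toy map is used, and the majority-vote tie-breaking is an assumption, since the paper's similarity-based dynamic mechanism is not detailed in the abstract.

```python
import numpy as np

def expand_scribbles(superpixels, scribbles):
    """Propagate sparse scribble labels to whole superpixels.

    superpixels: (H, W) int map of region ids.
    scribbles:   (H, W) int map, 0 = unlabeled, k > 0 = class k.
    Every pixel of a superpixel containing a scribble inherits the
    (majority) scribble class; untouched superpixels stay 0.
    """
    pseudo = np.zeros_like(scribbles)
    for sp in np.unique(superpixels):
        classes = scribbles[(superpixels == sp) & (scribbles > 0)]
        if classes.size:
            vals, counts = np.unique(classes, return_counts=True)
            pseudo[superpixels == sp] = vals[np.argmax(counts)]
    return pseudo

# Toy 4x4 image split into four 2x2 superpixels (ids 0..3).
sp = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 3, 3],
               [2, 2, 3, 3]])
scr = np.zeros((4, 4), dtype=int)
scr[0, 0] = 1     # one class-1 scribble pixel in superpixel 0
scr[3, 3] = 2     # one class-2 scribble pixel in superpixel 3
print(expand_scribbles(sp, scr))
```

Repeating this over several superpixel scales, and keeping only expansions whose deep features agree with the scribbled region, yields the multi-scale refinement described above.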
{"title":"Scribble-supervised medical image segmentation based on dynamically generated pseudo labels via multi-scale superpixels","authors":"Zhixun Li,&nbsp;Jiancheng Fang,&nbsp;Ruiyun Qiu,&nbsp;Huiling Gong","doi":"10.1016/j.bspc.2025.107668","DOIUrl":"10.1016/j.bspc.2025.107668","url":null,"abstract":"<div><div>Nowadays, deep learning-based training increasingly requires adequate pixel-level labeled data. However, in the context of medical image segmentation, accurate pixel-level labeling of lesion edges remains a significant challenge for annotators. Nevertheless, making medical image segmentation with weak annotations is one of the most difficult tasks currently because the weak annotations only cover a small portion of the image and contain little relevant information. To maximize the utility of weak annotations, we propose a novel segmentation method that relies on scribble annotations. By utilizing multi-scale superpixels and deep features from the U-Net, the proposed method iteratively expands and generates pseudo labels with higher accuracy and richer information. This process involves similarity calculations, a dynamic adjustment mechanism, and multi-scale refinement for epoch-wise network training. Thus, as the step-wise expansion of high-confident pseudo labels and the elimination of low-confident ones, the performance of the method can gradually approach some fully-supervised methods. Our method outperforms other weakly annotated segmentation methods on the ACDC and ISIC2018 datasets, as shown by extensive experiments. The results show the segmentation performance of the proposed network is superiorly increased by approximately 1.8%, 3.5% and 1.6% on <em>IoU</em>, <em>CPA</em> and <em>Dice</em>, respectively, and the 95% hausdorff distance (HD95) decreased by approximately 0.8. 
Furthermore, ablation experiments confirm the effectiveness of each component of our method.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107668"},"PeriodicalIF":4.9,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0