
Latest publications from Biomedical Signal Processing and Control

Heel pad’s hyperelastic properties and gait parameters reciprocal modelling by a Gaussian Mixture Model and Extreme Gradient Boosting framework
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-04-01. DOI: 10.1016/j.bspc.2025.107818
Luca Quagliato , Sewon Kim , Olamide Robiat Hassan , Taeyong Lee
Gait analysis and heel pad mechanical properties have been widely studied by physicians and biomechanical engineers alike. However, only a few contributions address the intertwined relationship between these two essential aspects, and no research appears to propose a modeling approach to quantitatively correlate them. To bridge this gap, indentation experiments on the heel pad and motion-capture gait analysis were carried out on a group of 40 male and female subjects aged from their 20s to their 50s. To establish a robust correlation between these two sets of parameters, a Gaussian Mixture Model (GMM) feature-enhancement technique was combined with an Extreme Gradient Boosting (XGB) regressor. The hyperelastic constants from the material models and the gait parameters were each employed as both features and target variables in the GMM-XGB architecture, demonstrating that the mapping works in both directions, with deviations between 5% and 8% in most cases. The results show a strong reciprocal correlation between an individual's plantar soft tissue mechanical response and their gait parameters, and pave the way for further investigations in biomechanics.
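The GMM feature-enhancement step paired with a gradient-boosting regressor can be sketched as follows. This is a minimal illustration on synthetic stand-in data: the feature dimensions, component count, and regression target are invented, and scikit-learn's `GradientBoostingRegressor` is used as a stand-in for XGBoost.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins: three hyperelastic constants per subject (features)
# and one gait parameter (target) for 40 subjects, as in the study's cohort size.
X = rng.normal(size=(40, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.1, size=40)

# GMM feature enhancement: append each sample's posterior cluster
# probabilities to the raw features before regression.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
X_aug = np.hstack([X, gmm.predict_proba(X)])

# Gradient-boosting regression on the enhanced features (30 train / 10 test).
model = GradientBoostingRegressor(random_state=0).fit(X_aug[:30], y[:30])
pred = model.predict(X_aug[30:])
```

Swapping the roles of `X` and `y` (gait parameters as features, hyperelastic constants as targets) gives the reciprocal direction the abstract describes.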
Citations: 0
TF2AngleNet: Continuous finger joint angle estimation based on multidimensional time–frequency features of sEMG signals
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-04-01. DOI: 10.1016/j.bspc.2025.107833
Hai Jiang , Yusuke Yamanoi , Peiji Chen , Xin Wang , Shixiong Chen , Xu Yong , Guanglin Li , Hiroshi Yokoi , Xiaobei Jing
Current pattern recognition-based myoelectric prosthetic hand control methods map electromyography (EMG) signals to specific hand postures, achieving high accuracy but often producing unnatural movements during transitions, which reduces the hand's anthropomorphic quality. While some studies predict single-finger joint angles from EMG signals, these approaches lack practicality since arm muscles often control multiple fingers simultaneously. This study proposes TF2AngleNet, which predicts six finger joint angles from both raw time-domain signals and frequency-domain features of EMG. A novel non-contact joint angle measurement method was used to collect EMG and joint angle data from five healthy subjects over five days. The experimental results demonstrate that TF2AngleNet achieves outstanding performance in continuous joint angle estimation, with a correlation coefficient (CC) of 94.7%, an R2 value of 89.2%, and an NRMSE of 9.5%. Notably, this represents a 12.43% improvement in NRMSE, along with average gains of 1.2% in CC and 2.42% in R2 compared to single-domain models (p < 0.05 across all metrics). Hand postures were also visualized using a virtual hand model, providing a natural, bionic control method for myoelectric hands. Additionally, a novel conceptual framework is proposed to reduce barriers to using pattern recognition-based prosthetic hands, with this study serving as its first stage by validating the model's performance under three experimental conditions. This research offers a promising route to dexterous, biomimetic, and practical myoelectric prosthetic hand control.
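The time- plus frequency-domain input idea can be illustrated with a small feature extractor for one sEMG window. The specific features below (mean absolute value, RMS, mean frequency) are a generic assumption for illustration, not the paper's exact feature set.

```python
import numpy as np

def semg_window_features(window, fs=1000.0):
    """Illustrative time- and frequency-domain features for one sEMG window.
    (Generic feature choices, not TF2AngleNet's actual inputs.)"""
    # Time domain: mean absolute value and root mean square.
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    # Frequency domain: mean frequency of the one-sided power spectrum.
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.array([mav, rms, mnf])

rng = np.random.default_rng(0)
window = rng.normal(size=256)           # one 256-sample channel window
features = semg_window_features(window)
```

A network like the one described would consume the raw window and such spectral summaries side by side, regressing all six joint angles jointly.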
Citations: 0
Holistic evaluation and generalization enhancement of CART-ANOVA based transfer learning approach for brain tumor classifications
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-04-01. DOI: 10.1016/j.bspc.2025.107829
Shiraz Afzal, Muhammad Rauf
This study presents a convolutional neural network (CNN)-based approach for enhanced detection of brain tumors. The improved detection is enabled by the CART-ANOVA technique together with preprocessing methods that raise testing-dataset quality. Using two dataset sources for both inter- and intra-dataset validation, image sharpening is applied to refine the Source 1 and Source 2 testing datasets, improving model performance and robustness in brain tumor classification. The paper introduces a hyperparameter tuning model designed to determine optimal batch size and learning rate for reliable classification. By providing statistical validation, this model ensures the selection of the most effective hyperparameters, leading to superior classification performance. The ResNet18 model was initially trained on one dataset, with 20% of the data reserved for testing. To further evaluate its robustness and generalizability, the model was then tested on a second dataset. The framework attains 99.65% accuracy for four tumor classes and 98.05% for seven tumor classes on the Source 1 dataset. The introduced preprocessing methods yielded 99.31% accuracy for four-class and 98.90% for seven-class classification on Source 2, while also improving Source 1 accuracy to 99.84% (four-class) and 99.03% (seven-class). By handling seven distinct classes, this work not only improves accuracy but also strengthens model robustness through a rigorous post-validation framework. These advancements offer significant potential for improving brain tumor diagnosis and treatment strategies.
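The statistically validated hyperparameter selection can be sketched with a one-way ANOVA over repeated runs at each candidate learning rate. The accuracy values below are invented for illustration, and this sketch covers only the ANOVA part of the selection, not the paper's full CART-ANOVA procedure.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical validation accuracies from four repeated runs at each of
# three candidate learning rates (values invented for illustration).
acc = {
    1e-2: [0.962, 0.958, 0.965, 0.960],
    1e-3: [0.991, 0.993, 0.990, 0.992],
    1e-4: [0.975, 0.978, 0.974, 0.977],
}

# One-way ANOVA: test whether the learning-rate groups differ significantly,
# then pick the group with the highest mean accuracy.
stat, p = f_oneway(*acc.values())
best_lr = max(acc, key=lambda lr: np.mean(acc[lr]))
```

A significant p-value justifies treating the observed accuracy differences as real rather than run-to-run noise before committing to `best_lr`.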
Citations: 0
Bridging motor execution and motor imagery BCI paradigms: An inter-task transfer learning approach
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-03-31. DOI: 10.1016/j.bspc.2025.107834
Sergio Pérez-Velasco, Diego Marcos-Martínez, Eduardo Santamaría-Vázquez, Víctor Martínez-Cagigal, Roberto Hornero
Motor imagery (MI)-based brain–computer interfaces (BCIs) decode movement imagination from brain activity, but improving decoding accuracy from electroencephalography (EEG) remains challenging. MI-based BCIs require calibration runs to train models; however, participant engagement during these runs cannot be externally verified. Motor execution (ME) is more straightforward and can be supervised. Deep learning (DL) leverages transfer learning (TL) to bypass calibration. This is the first work to explore whether an ME-trained DL model can reliably classify MI without fine-tuning on the MI task, thereby achieving direct TL between ME and MI tasks. We employed EEGSym, a DL network for inter-subject TL of EEG decoding, evaluating three scenarios: ME to MI, ME to ME, and MI to MI classification. We analyzed the performance correlation between scenarios and used SHapley Additive exPlanations (SHAP) to elucidate the model focus patterns learned from ME or MI data. Results show that DL models trained on ME data and tested on MI perform comparably to those trained on MI data. A significant positive correlation was found between performance in ME and MI tasks for models trained on ME data. Explainable artificial intelligence (XAI) techniques revealed a robust correlation between patterns in ME and MI tasks. However, between 0.5 and 1 s, the ME-trained model focused on the contralateral central region, while the MI-trained model also targeted the ipsilateral fronto-central region. Our findings demonstrate the viability of inter-task TL between ME and MI using DL models in BCI applications. This supports using ME-trained models for MI tasks to enhance targeted learning of brain activation patterns.
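The core evaluation protocol (train on one task, test on the other with no fine-tuning) can be sketched with a simple classifier on synthetic features. Everything here is a toy stand-in: a linear discriminant replaces EEGSym, and the Gaussian feature sets with a small mean shift only mimic an ME-to-MI domain gap.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def make_task(shift, n=200):
    """Synthetic two-class feature sets; `shift` mimics the ME->MI domain gap."""
    X0 = rng.normal(loc=-1.0 + shift, size=(n, 4))
    X1 = rng.normal(loc=+1.0 + shift, size=(n, 4))
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

X_me, y_me = make_task(shift=0.0)   # motor execution (supervisable calibration)
X_mi, y_mi = make_task(shift=0.2)   # motor imagery (slightly shifted distribution)

clf = LinearDiscriminantAnalysis().fit(X_me, y_me)   # train on ME only
acc_mi = clf.score(X_mi, y_mi)                        # test on MI, no fine-tuning
```

High `acc_mi` under a moderate shift is the toy analogue of the paper's finding that ME-trained models transfer directly to MI classification.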
Citations: 0
SM2C – Boost the semi-supervised segmentation for medical image by using meta pseudo labels and mixed images
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-03-31. DOI: 10.1016/j.bspc.2025.107869
Yifei Wang , Chuhong Zhu
Recently, semi-supervised learning methods have effectively leveraged unlabeled data to address the scarcity of annotated medical images. However, unlike common object datasets, limited medical image resources often lead to overfitting due to significant shape variations of specific organs across cases, or even within different sections of the same case. The intricate shapes of organs and lesions in medical images introduce additional complexity into auto-diagnosis, hindering network generalization. To address this challenge, we propose a novel method, Scaling-up Mix with Multi-Class (SM2C), to synthesize organs and lesions with diverse shapes for clinical diagnosis. Integrated into a teacher–student framework, SM2C enhances the reliability of pseudo-labels generated by the teacher network, thereby improving the generalization of the student network. The method employs three key strategies: scaling up image size, multi-class mixing, and object shape jittering. Ablation studies validate the SM2C design, demonstrating its effectiveness in diversifying segmentation object shapes: multi-class mixing preserves inter-class balance, object shape jittering generates the varied shapes that may appear in clinical diagnosis, and scaling up image size enriches context while enhancing robustness. Furthermore, extensive experiments on three benchmark medical segmentation datasets show solid gains over other state-of-the-art methods.
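The multi-class mixing strategy can be illustrated with a CutMix-style paste that moves an image patch and its label patch together between two samples. This is a simplified stand-in for SM2C's mixing, with invented array sizes and class labels.

```python
import numpy as np

def multiclass_mix(img_a, lab_a, img_b, lab_b, box):
    """CutMix-style multi-class mixing: paste a rectangular patch of sample B
    (image and label together) into sample A. `box` = (y0, y1, x0, x1).
    A simplified stand-in for the paper's SM2C augmentation."""
    y0, y1, x0, x1 = box
    img, lab = img_a.copy(), lab_a.copy()
    img[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    lab[y0:y1, x0:x1] = lab_b[y0:y1, x0:x1]
    return img, lab

rng = np.random.default_rng(0)
img_a, img_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
lab_a = np.zeros((64, 64), int)            # background-only sample
lab_b = np.full((64, 64), 2, int)          # sample dominated by class 2
img, lab = multiclass_mix(img_a, lab_a, img_b, lab_b, box=(16, 48, 16, 48))
```

Because image and label are pasted with the same box, the mixed pair remains a valid (image, segmentation) training sample while exposing the student network to new class combinations.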
Citations: 0
Medical priors-guided feature learning network on multimodal imaging raw data for brain tumor segmentation
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-03-31. DOI: 10.1016/j.bspc.2025.107855
Yingying Feng , Weiguang Wang , Xuanyi Zhang , Yi Jing , Jingao Xu , Moyu Xia , Wei Cai , Xia Zhang
Mainstream brain tumor segmentation methods require skull stripping, which can inadvertently remove adjacent tumor lesions and reduce accuracy. To address this, we propose MPGNet, which directly uses raw multimodal imaging data for segmentation. Guided by medical prior information, it effectively avoids skull interference and improves accuracy. Specifically, to alleviate skull interference and misidentification, we design a relevant graph aggregation (RGA) module that enhances feature representations by leveraging the structural characteristics of the brain. Then, to reduce confusion among different regions in the prediction results, we define a prior density loss (PDL) function using brain tumor density information from multimodal imaging. Finally, to evaluate our method, we collect skull-stripped brain tumor segmentation challenge (BRATS) data, their corresponding Cancer Genome Atlas (TCGA) raw data, and actual clinical raw data annotated by experienced radiologists. Our experiments demonstrate that MPGNet is effective at preserving tumor integrity compared to other state-of-the-art brain tumor segmentation methods that require skull stripping, improving the Dice similarity coefficient by 4.27%. Additionally, when all models are trained and tested with raw data, MPGNet outperforms the best existing model by 1.05% Dice, showcasing superior performance in handling skull interference.
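The Dice similarity coefficient, the metric in which the reported 4.27% gain is expressed, can be computed directly from binary masks. The masks below are toy examples, not data from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 16-pixel square "prediction"
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1   # same square shifted by one pixel
score = dice_coefficient(a, b)               # 2*9 / (16+16) = 0.5625
```

The `eps` term keeps the ratio defined when both masks are empty, a common convention in segmentation evaluation.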
Citations: 0
DRL-ECG-HF: Deep reinforcement learning for enhanced automated diagnosis of heart failure with imbalanced ECG data
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-03-31. DOI: 10.1016/j.bspc.2025.107680
Bochao Zhao , Zhenyue Gao , Xiaoli Liu , Zhengbo Zhang , Wendong Xiao , Sen Zhang
Heart failure (HF) is a prevalent cardiovascular condition requiring accurate and timely diagnosis for effective management. Electrocardiogram (ECG) data, as a non-invasive diagnostic resource, provides crucial temporal–spatial information essential for HF diagnosis. However, traditional automated systems struggle with the temporal–spatial complexity and class imbalance of ECG data. To address these challenges, we propose DRL-ECG-HF, a deep reinforcement learning (DRL)-based multi-instance model for enhanced HF diagnosis. By treating each ECG recording as a bag of instances and analyzing individual segments, the model captures fine-grained features related to HF. To mitigate data imbalance, we introduce a DRL strategy incorporating prioritized experience replay (PER), assigning different rewards to minority class instances. The SHapley Additive exPlanations (SHAP) technique is applied to enhance interpretability, providing clinicians insights into the model’s decision-making. The proposed method was validated on the MIMIC-IV-ECG dataset with 12-lead, 10-second ECG samples from 154,934 patients and compared against various methods, including techniques for handling imbalanced data and state-of-the-art time-series classification approaches. The DRL-ECG-HF model achieved an AUROC of 0.90, an F-measure of 0.58, and a G-mean of 0.80, significantly outperforming existing methods. Additionally, it demonstrated superior performance using 12-lead ECG data compared to single-lead, emphasizing the value of comprehensive temporal–spatial information. These results highlight the potential of DRL-ECG-HF as a reliable tool for improving HF diagnosis accuracy and interpretability, paving the way for clinical adoption.
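The prioritized experience replay (PER) idea, sampling minority-class transitions more often by assigning them higher priority, can be sketched with a minimal proportional-priority buffer. This is a generic sketch with invented priorities, not the paper's exact buffer or reward scheme.

```python
import numpy as np

class PrioritizedReplay:
    """Minimal prioritized experience replay: items are sampled with
    probability proportional to priority**alpha. Minority-class ECG
    segments would be added with higher priority to fight imbalance."""
    def __init__(self, alpha=0.6, seed=0):
        self.alpha = alpha
        self.items, self.priorities = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, item, priority):
        self.items.append(item)
        self.priorities.append(priority)

    def sample(self, k):
        p = np.asarray(self.priorities, dtype=float) ** self.alpha
        p /= p.sum()
        idx = self.rng.choice(len(self.items), size=k, p=p)
        return [self.items[i] for i in idx]

buf = PrioritizedReplay()
buf.add("majority", priority=1.0)    # common non-HF segment
buf.add("minority", priority=10.0)   # rarer HF-positive segment weighted up
batch = buf.sample(1000)
```

With these priorities the minority item dominates the sampled batch, so the learner sees the rare class far more often than its raw frequency would allow.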
Citations: 0
Optic disc and cup segmentation methods for glaucoma detection using twin- inception transformer hinge attention network with cycle consistent convolutional neural network
IF 4.9, CAS Zone 2 (Medicine), Q1 ENGINEERING, BIOMEDICAL. Pub Date: 2025-03-29. DOI: 10.1016/j.bspc.2025.107844
C. Rekha , K. Jayashree
Glaucoma is one of the leading causes of blindness worldwide and can only be treated effectively if detected early. This study's goal is to design a comprehensive scheme for glaucoma classification incorporating advanced approaches for feature extraction and segmentation. First, the optic disc and cup are segmented from retinal images with the Pufferfish Optimization Algorithm (POA), which makes it much easier to accurately delineate the optic disc and cup areas and thereby supports severity-dependent glaucoma diagnosis. Combining state-of-the-art neural network designs for feature extraction and categorization, a new hybrid deep learning (DL) method is described. In the developed model, the Twin-Inception Transformer, Hinge Attention Network, and Cycle-Consistent Convolutional Neural Network (Cycle-Consistent CNN) are fused with the Human Memory Optimization Algorithm (HMOA). The Twin-Inception Transformer captures intricate spatial interactions in retinal images through transformer operations, while the Hinge Attention Network strengthens feature learning via a dynamic attention mechanism. To enhance the training process, HMOA replicates human memory consolidation to increase retention and reliability. This combined approach improves the model's generalization while preserving the quality of the extracted features. The usefulness of the proposed architecture has been demonstrated in experiments on freely available glaucoma datasets. Compared with current benchmark techniques, the presented work yields better performance, including 99.7% accuracy and 99.5% precision.
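Once disc and cup masks are available, a standard downstream measurement for glaucoma screening is the vertical cup-to-disc ratio (CDR). The function below shows that computation on toy masks; it is a generic illustration of why the segmentation matters, not this paper's classifier.

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks: the ratio
    of the cup's vertical extent to the disc's vertical extent."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_h = disc_rows.max() - disc_rows.min() + 1
    cup_h = cup_rows.max() - cup_rows.min() + 1
    return cup_h / disc_h

disc = np.zeros((100, 100), int); disc[20:80, 20:80] = 1   # disc height 60
cup = np.zeros((100, 100), int);  cup[35:65, 35:65] = 1    # cup height 30
cdr = vertical_cdr(disc, cup)                              # 30 / 60 = 0.5
```

Higher CDR values are associated with greater glaucoma risk, which is why accurate disc and cup boundaries feed directly into severity assessment.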
Biomedical Signal Processing and Control, Volume 107, Article 107844.
Citations: 0
Optimized YOLOv11 model for lung nodule detection
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-03-29 DOI: 10.1016/j.bspc.2025.107830
Zichao Liu , Lili Wei , Tingqiang Song

Objectives

This study proposes an advanced YOLOv11-based lung nodule detection algorithm that balances high accuracy with efficient computation, addressing the critical need for accurate and timely early diagnosis of lung cancer.

Methods

We replaced the traditional backbone with MobileNetV4, which employs reversible connections to prevent information loss and enhance feature representation, thereby improving the model’s efficiency in processing high-resolution CT scans. We developed a novel C2PSA module, C2PSA-MSDA, which integrates Multi-Scale Dilation Attention (MSDA) to capture multi-scale features more effectively. For the neck part, we introduced the new FreqFusion-BiFPN to enhance feature integration and boundary clarity, thereby reducing false positives. Additionally, we created a new C3k2 module, DyC3k2, to optimize feature fusion. We adopted Focal-inv-IoU for bounding box regression and Slide Loss for classification, which help the model focus more on high-quality predictions while still considering lower-quality ones, leading to more balanced and accurate detection.

Results

Extensive experiments on the LUNA16 dataset and a proprietary dataset demonstrated significant improvements: precision increased by 4.15%, recall by 3.23%, mAP50 by 4.04%, and mAP50-95 by 3.28% compared to the baseline YOLOv11. These gains were achieved with a smaller model size (5.08 MB) and a processing speed of 135.2 frames per second (f/s). The model also performed well on the proprietary dataset, demonstrating strong generalization.

Conclusion

The results indicate that the improved algorithm achieves higher accuracy, real-time performance, and better generalization in lung nodule detection, highlighting its potential for clinical application in early lung cancer diagnosis.
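Both the mAP metrics above and the Focal-inv-IoU regression loss rest on the plain intersection-over-union between a predicted and a ground-truth box. A minimal, self-contained sketch of plain IoU (the paper’s focal/inverse weighting is not reproduced here):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(round(box_iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143
```

At mAP50 a detection counts as correct when this value exceeds 0.5 against a ground-truth box; mAP50-95 averages over thresholds from 0.5 to 0.95.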
Biomedical Signal Processing and Control, Volume 107, Article 107830.
Citations: 0
Unveiling the abusive head trauma and Shaken Baby Syndrome: A comprehensive wavelet analysis
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-03-28 DOI: 10.1016/j.bspc.2025.107862
Sebastian Glowinski , Alina Głowińska

Background

Abusive Head Trauma (AHT) and Shaken Baby Syndrome (SBS) represent severe forms of child abuse with devastating consequences, including profound neurological damage and, in some cases, death. Despite advances in medical imaging and clinical assessments, diagnosing these injuries remains a formidable challenge due to their intricate and multifaceted nature.

Objective

This research explores the application of wavelet analysis, a sophisticated signal processing method, to improve the detection and comprehension of AHT and SBS. By leveraging this technique, the study aims to enhance diagnostic accuracy and provide deeper insights into the biomechanical mechanisms underlying these injuries.

Results

The analysis revealed intense, rapid oscillations in the forehead and back of the head, suggesting violent shaking, while the sternum showed less pronounced oscillations, indicating gentler motion. The wavelet analysis pinpointed frequencies between 6 and 12 Hz in the head, with lower frequencies for the sternum, shedding light on the distinct ways different parts of the body respond to these forces. Simulated free-fall impacts further revealed significant rotational and linear accelerations, with sharp peaks in both the forehead and sternum. These findings are crucial for understanding the injury mechanisms. Additionally, wavelet transfer function analysis highlighted the synchronized movements and energy transfer between body parts, with frequency responses varying based on the impact surface.

Conclusion

This study sheds light on the intricate biomechanical dynamics of infants during episodes of shaking and impact. It underscores the need for continued research to refine our understanding of these injury mechanisms and to inform more effective prevention and intervention strategies for protecting vulnerable populations.
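The study’s signal-processing code is not published; the sketch below shows, under illustrative assumptions (a 200 Hz sampling rate, a synthetic 9 Hz sinusoid standing in for head acceleration, and an ω₀ = 6 Morlet mother wavelet — none of these are the paper’s actual data or parameters), how a continuous wavelet transform can localize a dominant shaking frequency inside the 6–12 Hz band reported for the head:

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w0=6.0):
    """Naive continuous wavelet transform with a complex Morlet mother wavelet.

    Returns |coefficients| with shape (len(freqs), len(signal))."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                # scale whose centre frequency is f
        tau = np.arange(-4 * s, 4 * s, 1 / fs)  # support of +/- 4 std deviations
        wavelet = np.exp(1j * w0 * tau / s) * np.exp(-tau**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                   # rough energy normalisation
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

fs = 200.0                                      # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 9 * t)               # synthetic 9 Hz "shaking" component
freqs = np.arange(2.0, 20.0, 1.0)
power = morlet_cwt(accel, fs, freqs)
peak = freqs[np.argmax(power.mean(axis=1))]
print(peak)  # 9.0
```

Unlike a plain Fourier spectrum, the full time–frequency map in `power` also shows *when* each frequency is active, which is what lets the study distinguish shaking episodes from impact transients.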
Biomedical Signal Processing and Control, Volume 107, Article 107862.
Citations: 0