
Latest articles from Biomedical Signal Processing and Control

An energy-efficient dual-branch spiking neural network for epileptic seizure detection from electroencephalogram signals
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-01-27 | DOI: 10.1016/j.bspc.2026.109694
Xicheng Lou , Xinwei Li , Hongying Meng , Jun Hu , Yongmei Xu , Haohuan Kong , Jiazhang Yang , Zhangyong Li
Epileptic seizure detection from electroencephalogram (EEG) signals is critical for clinical diagnosis and long-term neurological monitoring. However, conventional artificial neural networks (ANNs) are often computationally expensive and energy demanding, which hinders their deployment in large-scale or real-time brain-signal analysis. Spiking neural networks (SNNs) provide a biologically inspired and energy-efficient alternative, yet existing architectures still struggle to balance accuracy and efficiency in EEG-based seizure detection. In this study, we propose an adaptive integrate-and-fire (AIF) spiking neuron model that dynamically adjusts its temporal behavior to capture diverse activation patterns. Based on this neuron, we develop a dual-branch spiking neural network (DBSNet), designed to decode multi-scale and multi-dimensional EEG features for improved seizure detection. We evaluate DBSNet on three public epileptic EEG datasets. Among SNN-based approaches, DBSNet consistently achieves state-of-the-art performance. On a large-scale dataset, it even surpasses the best-performing ANN while consuming only one-seventh of its theoretical energy, highlighting its efficiency advantage. These results demonstrate the potential of adaptive spiking architectures to achieve accurate and sustainable neural computing for EEG-based seizure detection, and they suggest a promising paradigm for broader applications in brain-signal processing.
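The abstract does not give the AIF neuron's update equations; as a hedged illustration of the underlying idea, here is a minimal leaky integrate-and-fire neuron whose firing threshold adapts after every spike (all names and constants below are illustrative, not the authors' model):

```python
import numpy as np

def adaptive_lif(inputs, tau=0.9, v_th=1.0, th_decay=0.95, th_jump=0.5):
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    inputs   : input current per time step
    tau      : membrane leak factor (fraction of potential kept each step)
    v_th     : baseline firing threshold
    th_decay : per-step relaxation of the threshold adaptation term
    th_jump  : amount the threshold rises after each spike
    Returns a binary spike train the same length as `inputs`.
    """
    v, adapt = 0.0, 0.0
    spikes = np.zeros(len(inputs), dtype=int)
    for t, x in enumerate(inputs):
        v = tau * v + x                # leaky integration of input
        adapt *= th_decay              # adaptation relaxes toward zero
        if v >= v_th + adapt:          # effective threshold is dynamic
            spikes[t] = 1
            v = 0.0                    # hard reset after firing
            adapt += th_jump           # firing raises the threshold
    return spikes

spikes = adaptive_lif(np.full(20, 0.6))
print(spikes)
```

Under constant drive, the adaptation term spaces spikes out over time, which is the kind of input-dependent temporal behavior the AIF neuron is described as capturing.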
Biomedical Signal Processing and Control, Volume 118, Article 109694.
Citations: 0
Clustering-enhanced active learning with dynamic sampling for brain tumor classification
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-01-31 | DOI: 10.1016/j.bspc.2026.109715
Yawen Fan , Xiang Wang , Zhen Yue , Xinchen Zhang , Mingkai Chen , Jianxin Chen
Automated classification of brain tumors is essential for reliable diagnosis and effective treatment planning. However, deep learning-based methods require large, well-labeled MRI datasets, which can be expensive, time-consuming, and challenging to obtain in clinical settings. Moreover, real-world datasets often exhibit severe class imbalance and inter-subject variability, both of which can compromise model robustness and limit generalization to unseen cases. In this paper, we introduce a novel dynamic active learning framework enhanced by clustering for brain tumor classification. First, the proposed framework extracts high-level features of MRI images by a self-supervised learning method, which are then clustered to form a multi-class data pool, providing a pre-classification of the samples. To reduce annotation effort while maintaining model performance, the framework dynamically selects the most informative samples from each cluster by jointly considering prediction uncertainty and cluster diversity. Additionally, we have constructed a high-quality brain tumor MRI dataset that includes three tumor types: glioma, metastatic tumor, and diffuse large B-cell lymphoma. Notably, the latter is scarce in existing public datasets. Extensive experiments on both public and private datasets show that the proposed method achieves competitive performance using only a small portion of labeled data. Also, on an external test set, the method obtained an average accuracy of 0.92. All these results suggest that our method offers a practical and efficient solution for MRI-based brain tumor classification in real-world clinical settings.
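The selection step, which jointly weighs prediction uncertainty and cluster diversity, can be sketched as a round-robin over clusters ranked by predictive entropy; this is a generic stand-in, not the paper's exact acquisition function (the function name and scoring are assumptions):

```python
import numpy as np

def select_informative(probs, clusters, budget):
    """Pick `budget` samples for labeling: rank samples by predictive
    entropy (uncertainty), then take the most uncertain sample from each
    cluster in turn (round-robin) so the batch stays diverse."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)  # per-sample uncertainty
    # per-cluster queues, most uncertain first
    queues = {c: sorted(np.where(clusters == c)[0], key=lambda i: -entropy[i])
              for c in np.unique(clusters)}
    chosen = []
    while len(chosen) < budget and any(queues.values()):
        for c in list(queues):
            if queues[c] and len(chosen) < budget:
                chosen.append(queues[c].pop(0))
    return chosen

probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.34, 0.66], [0.99, 0.01]])
clusters = np.array([0, 0, 1, 1])
print(select_informative(probs, clusters, 2))  # one sample per cluster
```

With a budget of 2 this returns the most uncertain sample from each of the two clusters rather than the two most uncertain samples overall, which could both come from one cluster.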
Biomedical Signal Processing and Control, Volume 118, Article 109715.
Citations: 0
A novel unified complex network framework based on entropy moment for analyzing time series
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-05 | DOI: 10.1016/j.bspc.2026.109752
Ruiquan Chen , Jieren Xie , Yuqing Liu , Hanmin Chen , Junsheng Cheng , Shuo Tang , Yue Zhang , Xingxing Ke , Guanghua Xu , Bingwei He
Understanding electroencephalogram signals requires nonlinear time series analysis techniques because of the intricate complexity of the human brain. Among these techniques, phase space entropy stands out, with bubble entropy recognized for its ability to mitigate the impact of selection parameters. However, various phase space entropies focus solely on the probability distribution of symbolized embedding vectors while disregarding the structural information and the transitions between symbols across different phase spaces. To address this limitation, we propose a novel definition of the entropy moment based on bubble entropy, termed the Bubble Transition Entropy Moment (BTEM). This enhancement makes better use of phase space information and introduces a new metric for assessing the regularity of time series. We conducted rigorous testing on a coupled Henon model to evaluate the efficacy of the proposed method. These tests highlighted its advantages in analyzing short time series and its resilience to parameter fluctuations. To further validate the effectiveness of our method, we conducted experiments on two public epilepsy datasets. The results not only reaffirmed the superiority of the proposed unified framework over traditional methods, but also demonstrated that it can achieve high decoding accuracy with shorter data lengths.
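BTEM itself is not defined in the abstract, but its stated building block, bubble entropy, can be sketched: embed the series, count the bubble-sort swaps each embedding vector needs, and compare the Rényi-2 entropy of the swap distribution at dimensions m and m+1. The normalization below follows the commonly cited definition by Manis et al. and is an assumption here:

```python
import numpy as np
from math import log

def swap_counts(x, m):
    """Bubble-sort swap count for every m-length embedding vector of x."""
    counts = []
    for i in range(len(x) - m + 1):
        v = list(x[i:i + m])
        swaps = 0
        for a in range(m):
            for b in range(m - a - 1):
                if v[b] > v[b + 1]:
                    v[b], v[b + 1] = v[b + 1], v[b]
                    swaps += 1
        counts.append(swaps)
    return counts

def renyi2(counts):
    """Renyi entropy of order 2 of the swap-count distribution."""
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -log(float((p ** 2).sum()))

def bubble_entropy(x, m=4):
    """Entropy growth from dimension m to m+1, normalized (assumed form)."""
    return (renyi2(swap_counts(x, m + 1)) - renyi2(swap_counts(x, m))) \
        / log((m + 1) / (m - 1))

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
print(bubble_entropy(noise))             # irregular series: positive value
print(bubble_entropy(np.arange(500.0)))  # monotone ramp: zero swaps -> 0.0
```

A perfectly monotone series needs zero swaps at every dimension, so both entropies vanish and the measure is exactly 0; irregular series yield a broader swap distribution at the larger dimension and hence a positive value.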
Biomedical Signal Processing and Control, Volume 118, Article 109752.
Citations: 0
Cyclic deep representation-based domain adaptation for cross-subject motor imagery classification
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-05 | DOI: 10.1016/j.bspc.2026.109762
Min He , Xuan Cao , Tian-jian Luo
Deep representation learning has attracted great attention for brain-computer interface (BCI) based neural rehabilitation engineering, especially for motor imagery electroencephalogram (MI-EEG) signals. Recently, researchers have explored numerous deep representation models with various structures to decode MI-EEG signals; however, these models suffer from variability across recording subjects and the scarcity of samples. To address these issues, domain adaptation models have been proposed that leverage existing subjects' samples to decode a new subject's samples by learning subject-invariant deep representations. However, existing models neglect the temporally varying and spatially coupled characteristics of MI-EEG signals during domain adaptation, resulting in degraded cross-subject classification performance. To improve decoding performance, we propose a novel domain adaptation model, referred to as Cyclic Deep Representation-based Domain Adaptation (CDRDA), which simultaneously transfers deep representations from the source domain to the target domain and from the target domain to the source domain. Specifically, our CDRDA model learns a joint optimization that weights dual adversarial losses, cyclic losses, and domain-specific losses to improve classification performance. Empirical experiments on two benchmark MI-EEG datasets demonstrate the feasibility and effectiveness of the CDRDA model in terms of accuracy, Cohen's kappa, and macro-averaged F1-score. Result analyses and ablation studies also verify the superiority of the CDRDA model for building online MI-BCIs.
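CDRDA's exact loss terms are not given in the abstract; the standard mechanism behind adversarial domain losses of this kind is a gradient reversal layer (GRL), which a toy hand-computed backward pass can illustrate (the scalar model below is purely illustrative):

```python
def grl_grad(upstream_grad, lam=1.0):
    """Gradient Reversal Layer: identity in the forward pass; on the way
    back the gradient is multiplied by -lam, so while the domain
    classifier descends its loss, the feature extractor ascends it,
    pushing features toward domain invariance."""
    return -lam * upstream_grad

# Toy model: feature = w * x, domain prediction = d * feature,
# squared loss 0.5*(pred - y)**2 against domain label y.
# All scalars, backpropagation done by hand.
w, d, x, y = 1.0, 0.5, 2.0, 0.0
feat = w * x                     # feature extractor forward
pred = d * feat                  # domain classifier forward
g_pred = pred - y                # dLoss/dpred
g_d = g_pred * feat              # classifier gradient: normal descent
g_w = grl_grad(g_pred * d) * x   # extractor gradient: sign reversed by GRL
print(g_d, g_w)
```

In a cyclic setup like CDRDA's, one would apply such adversarial terms in both transfer directions and sum them with the cyclic and domain-specific losses under tuned weights.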
Biomedical Signal Processing and Control, Volume 118, Article 109762.
Citations: 0
Deep learning-based multimodal fusion of imaging, pathology, and CTCs for early diagnosis of pediatric distal femur osteosarcoma
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.bspc.2026.109558
Dongjian Song , Meng Su , Qiuliang Liu , Da Zhang , Zechen Yan , Qian Zhang , Qi Wang , Hui Zhang , Longyan Shi , Yingzhong Fan , Heying Yang
This study proposes a novel deep learning (DL)-based multimodal diagnostic framework that integrates magnetic resonance imaging (MRI), computed tomography (CT), histopathological slides, and circulating tumor cells (CTCs) data for early and accurate diagnosis of distal femur osteosarcoma (OS) in pediatric patients. Public datasets including The Cancer Imaging Archive (TCIA), The Cancer Genome Atlas (TCGA), and the Gene Expression Omnibus (GEO) provided imaging and genomic data. Preprocessing involved denoising, normalization, slice alignment, and color standardization using Fiji/ImageJ. Pathological features were extracted via transfer learning using pretrained convolutional neural networks (CNNs) like VGG16 and ResNet50. CTCs were detected and classified using flow cytometry, Hough transform, and support vector machine (SVM) algorithms. A multimodal DL architecture was constructed by fusing image, pathology, and CTC feature vectors, and performance was evaluated through cross-validation. The model achieved an accuracy of 92.5%, sensitivity of 88.7%, specificity of 94.3%, and AUC of 0.96 on an independent test set. Incorporating CTC data notably improved performance in metastasis assessment and diagnosis where imaging was inconclusive. The proposed DL-based multimodal model significantly enhances the early diagnostic capacity for pediatric distal femur OS. Its robustness, diagnostic accuracy, and potential for clinical translation make it a promising tool for personalized treatment strategies.
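The paper's fusion layer is not specified in the abstract; a minimal late-fusion sketch, z-scoring each modality's feature vector before concatenation so that no modality dominates by raw scale, looks like this (all feature values below are made up):

```python
import numpy as np

def fuse_features(mri_feat, path_feat, ctc_feat):
    """Late fusion: z-score each modality's feature vector, then
    concatenate into one joint representation for a downstream
    classifier (a common simple scheme, not the paper's exact layer)."""
    def z(v):
        v = np.asarray(v, dtype=float)
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([z(mri_feat), z(path_feat), z(ctc_feat)])

fused = fuse_features([0.2, 0.8, 0.4],            # imaging features
                      [5.0, 1.0],                 # pathology features
                      [120.0, 80.0, 95.0, 110.0]) # CTC features
print(fused.shape)  # (9,) = 3 + 2 + 4
```

Per-modality normalization matters here because raw CTC counts and CNN activations live on very different scales.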
Biomedical Signal Processing and Control, Volume 118, Article 109558.
Citations: 0
Spatially Enhanced Pyramid Split attention for improved ECG-Based emotion recognition
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-05 | DOI: 10.1016/j.bspc.2026.109729
Chenyang Pan , Hui Chen , Xuedian Zhang , Tong Su , Pei Ma
Accurate emotion recognition plays a crucial role in human–computer interaction, mental healthcare, and cognitive behavior analysis. Previous research consistently demonstrates ECG’s strong potential for emotion recognition, yet current ECG-based approaches still face critical challenges including sensitivity to signal artifacts, inter-subject variability, and limited discriminability in fine-grained emotion classification. To address these issues, we propose a deep learning framework that enhances feature representation through a Spatially Enhanced Pyramid Split-Attention (SEPSA) mechanism, which captures multi-scale spatial patterns in ECG signals to enable more robust emotion classification from raw inputs. The method further introduces an optimized beat-level preprocessing strategy to improve data quality by identifying and removing morphologically inconsistent heartbeats. Extensive experiments on two public datasets—WESAD and DREAMER—showed that our framework achieved competitive performance. It attained an average accuracy of 98.9% in four-class emotion classification on WESAD, and 94.5%/92.7% in binary classification of arousal and valence on DREAMER, where it also reached 89.8%/88.7% as the average accuracy in five-class classification. Ablation studies confirmed the contribution of each component to the overall performance. These results underscore the effectiveness of our approach within the studied datasets and suggest its feasibility as a foundation for future research. Subsequent work will focus on enhancing generalizability through validation on larger, more ecologically diverse datasets and exploring integration pathways for wearable affective computing systems.
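The beat-level preprocessing can be illustrated with a common template-correlation filter: build a median template beat and drop beats that correlate poorly with it (the 0.8 threshold and the synthetic beats are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def reject_outlier_beats(beats, min_corr=0.8):
    """Beat-level cleaning: correlate every segmented heartbeat with the
    median template beat and keep only indices of beats whose Pearson
    correlation exceeds min_corr (illustrative threshold)."""
    beats = np.asarray(beats, dtype=float)
    template = np.median(beats, axis=0)   # robust per-sample template
    keep = []
    for i, b in enumerate(beats):
        r = np.corrcoef(b, template)[0, 1]
        if r >= min_corr:
            keep.append(i)
    return keep

t = np.linspace(0, 1, 50)
normal = np.exp(-((t - 0.5) ** 2) / 0.002)  # clean QRS-like bump
beats = [normal, normal * 1.1, -normal]     # third beat is inverted
print(reject_outlier_beats(beats))          # inverted beat is dropped
```

Using the median rather than the mean as the template keeps a single grossly distorted beat from skewing the reference shape.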
Biomedical Signal Processing and Control, Volume 118, Article 109729.
Citations: 0
Mamba NeuroDynamics Integration Net: A cross-task highly generalizable fNIRS-based brain decoding framework
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-02 | DOI: 10.1016/j.bspc.2026.109706
Qiulei Han , Hongbiao Ye , Yan Sun , Ze Song , Hongyu Cai , Jian Zhao , Lijuan Shi , Zhejun Kuang , Liu Wang , Yifan Wang , He Gu , Lu Tan , Miaoshui Bai , Lili Wang
Functional near-infrared spectroscopy (fNIRS) reflects the neurovascular coupling process by monitoring changes in HbO/HbR concentrations and serves as an important tool for decoding cognitive tasks. However, existing methods still fall short in capturing the nonlinear dynamics of neural activity, the temporal coordination between channels, and the cross-time evolution of brain states. To address these limitations, this paper proposes the MaNDi-Net framework, which achieves local-to-global dynamic collaborative representation through a three-level physiologically guided modeling strategy:
1. Dual-path neural dynamics feature extraction (structural level): introduces Lyapunov Exponent (LLE) and Phase-Locking Value (PLV) features to model the influence of electrophysiological activity on hemodynamic changes along two dimensions, local neural chaoticity within brain regions and functional connectivity synchrony between them, compensating for the lack of neural-mechanism modeling in existing approaches.
2. State-space modeling with a parameter-sharing strategy (channel level): uses a state-space model to fit the slowly varying dynamics of HbO/HbR and the coordinated coupling between channels. The parameter-sharing mechanism captures temporal patterns common across brain regions, improving modeling efficiency and aligning with the brain's functional modular-reuse hypothesis.
3. Multi-head attention-driven temporal modeling (temporal level): uses window-level multi-head attention to capture cross-window neural state transitions from delayed hemodynamic responses, enhancing the representation of task-related dynamic phases.
The model achieved excellent decoding across mental arithmetic (88.11%), word generation (78.24%), and motor imagery (81.07%) tasks. Ablation studies highlight the essential contribution of the neural dynamics and temporal modules in capturing chaos augmentation and functional connectivity reorganization, ensuring physiological interpretability and robust small-sample generalization.
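Of the two hand-crafted features, PLV has a compact standard definition that can be sketched with NumPy alone (analytic signal built via FFT, equivalent to a Hilbert transform):

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (discrete Hilbert-transform construction):
    zero out negative frequencies, double positive ones."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def plv(x, y):
    """Phase-Locking Value: magnitude of the mean unit phasor of the
    instantaneous phase difference (1 = perfectly locked, ~0 = none)."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

t = np.linspace(0, 2, 400, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.7)  # same frequency, fixed phase lag
c = np.sin(2 * np.pi * 13 * t)        # different frequency, drifting phase
print(round(plv(a, b), 3), round(plv(a, c), 3))
```

A fixed phase lag still gives PLV near 1, which is why PLV measures synchrony rather than simple waveform similarity; signals at different frequencies average out to a value near 0.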
Qiulei Han, Hongbiao Ye, Yan Sun, Ze Song, Hongyu Cai, Jian Zhao, Lijuan Shi, Zhejun Kuang, Liu Wang, Yifan Wang, He Gu, Lu Tan, Miaoshui Bai, Lili Wang. "Mamba NeuroDynamics Integration Net: A cross-task highly generalizable fNIRS-based brain decoding framework." Biomedical Signal Processing and Control, vol. 118, Article 109706 (2026-06-01). DOI: 10.1016/j.bspc.2026.109706
Citations: 0
Explainable deep autoencoding of vibroarthrographic time–frequency distributions for robust knee disorder detection
IF 4.9, JCR Q1 (Engineering, Biomedical), CAS Zone 2 (Medicine). Pub Date: 2026-06-01, Epub Date: 2026-02-05, DOI: 10.1016/j.bspc.2026.109781
Saif Nalband, Maulik Gupta, Sachin Kansale, Tanmoy Hazra, Femi Robert, A. Amalin Prince
This paper presents AUTOENCODE-KNEE, a novel approach for automatic feature extraction from the time–frequency distributions of vibroarthrographic (VAG) signals recorded at the human knee joint. VAG signals contain valuable information, which is crucial for diagnosing various musculoskeletal disorders. However, manually extracting relevant features from VAG signals can be time-consuming and subjective. To address this challenge, we propose utilizing a convolutional neural network (CNN)-based autoencoder architecture for automatic feature extraction. The autoencoder is trained on a dataset comprising time–frequency representations of VAG signals, learning to encode and decode the input signals while preserving important features. By leveraging the inherent ability of CNNs to capture spatial dependencies, the autoencoder effectively learns to extract discriminative features from the complex time–frequency domain. Our experimental results demonstrate the efficacy of AUTOENCODE-KNEE in automatically extracting meaningful features from knee joint signals. We compare different machine learning models for classifying musculoskeletal disorders. Furthermore, we use explainable Artificial Intelligence (xAI) to capture more abstract and pathology-relevant features. In summary, AUTOENCODE-KNEE offers a promising solution for automatic feature extraction from knee joint signals, potentially revolutionizing how musculoskeletal disorders are diagnosed and treated.
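The paper's model is a convolutional autoencoder over spectrogram inputs. As a minimal stand-in for the idea (compress, reconstruct, keep the bottleneck as the learned feature), the sketch below trains a tiny linear autoencoder with plain gradient descent on synthetic "patch" vectors. Every dimension, the learning rate, and the synthetic data are assumptions for illustration, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for flattened time-frequency patches:
# 64-D points lying on a 4-D subspace, so a 4-unit bottleneck can capture them.
basis = rng.normal(size=(4, 64))
X = rng.normal(size=(200, 4)) @ basis

d, k, lr = 64, 4, 1e-4                     # input dim, bottleneck dim, step size (assumed)
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

def reconstruct(X):
    return (X @ W_enc) @ W_dec             # encode to k dims, then decode back

err_before = np.mean((reconstruct(X) - X) ** 2)
for _ in range(300):
    Z = X @ W_enc                          # bottleneck codes (the "features")
    G = 2.0 * (Z @ W_dec - X) / X.shape[0] # gradient of MSE w.r.t. reconstruction
    g_dec = Z.T @ G
    g_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
err_after = np.mean((reconstruct(X) - X) ** 2)
print(err_after < err_before)              # reconstruction error drops during training
```

A real implementation would replace the two matrix products with convolutional encoder/decoder stacks and nonlinearities, but the training objective (reconstruction MSE) and the role of the bottleneck are the same.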
Saif Nalband, Maulik Gupta, Sachin Kansale, Tanmoy Hazra, Femi Robert, A. Amalin Prince. "Explainable deep autoencoding of vibroarthrographic time–frequency distributions for robust knee disorder detection." Biomedical Signal Processing and Control, vol. 118, Article 109781 (2026-06-01). DOI: 10.1016/j.bspc.2026.109781
Citations: 0
DDMGCN: Deep Dynamic Multi-Graph Convolutional Neural Network for EEG emotion recognition
IF 4.9, JCR Q1 (Engineering, Biomedical), CAS Zone 2 (Medicine). Pub Date: 2026-06-01, Epub Date: 2026-02-04, DOI: 10.1016/j.bspc.2026.109740
Jiao Wang, Zhifen Guo, Peng Zhang, Hongchen Luo, Fengbin Ma, Pengcheng Song
Electroencephalogram (EEG) provides an objective and accurate reflection of the human emotional state, making EEG-based emotion recognition a research focus in fields such as medical measurement and health monitoring. Given the irregular structure of EEG data, graph convolutional neural networks (GCNNs) are effective at learning topological relationships among EEG channels. However, existing GCNN-based work is limited by restricted stacking depth and insufficient flexibility in modeling topological relationships, which makes it difficult to capture complex correlations among EEG signals and ultimately hinders recognition performance. To address these issues, we propose a Deep Dynamic Multi-Graph Convolutional Neural Network (DDMGCN). Specifically, DDMGCN employs a dual-branch collaborative training framework. The master training network integrates a multi-layer 3D convolutional neural network (3DCNN) within the DGCNN architecture to deepen the model, capturing dynamic interactions and multi-level spatiotemporal information. The auxiliary update network introduces a multi-graph structure that adaptively adjusts each layer to achieve optimal topological relationships. Finally, an update strategy leveraging a branch attention mechanism is applied to both branches to optimize model parameters. We evaluate the performance of the DDMGCN on the two public SEED and DREAMER datasets. In both subject-dependent and subject-independent validations, DDMGCN outperforms current state-of-the-art models. This demonstrates the potential of our method for modeling dynamic EEG connectivity in emotion recognition.
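A single graph-convolution step of the kind stacked in DGCNN-style models can be sketched with the standard symmetrically normalized propagation rule. The channel count, adjacency matrix, and feature dimensions below are placeholders, not the paper's learned graphs:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Rows of H are per-channel feature vectors; A is the channel adjacency."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)       # aggregate neighbors, project, ReLU

rng = np.random.default_rng(0)
n_ch, f_in, f_out = 5, 8, 3                      # e.g. 5 EEG channels (placeholder sizes)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)     # undirected channel graph
H = rng.normal(size=(n_ch, f_in))
W = rng.normal(size=(f_in, f_out))
out = gcn_layer(H, A, W)
print(out.shape)                                 # (5, 3)
```

In a dynamic multi-graph model, A itself becomes a trainable (or adaptively updated) matrix and one such layer is applied per graph per depth level; the propagation rule stays as above.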
Jiao Wang, Zhifen Guo, Peng Zhang, Hongchen Luo, Fengbin Ma, Pengcheng Song. "DDMGCN: Deep Dynamic Multi-Graph Convolutional Neural Network for EEG emotion recognition." Biomedical Signal Processing and Control, vol. 118, Article 109740 (2026-06-01). DOI: 10.1016/j.bspc.2026.109740
Citations: 0
Efficient gastric tumor detection from endoscopic images using trans-mapped learning models
IF 4.9, JCR Q1 (Engineering, Biomedical), CAS Zone 2 (Medicine). Pub Date: 2026-06-01, Epub Date: 2026-02-03, DOI: 10.1016/j.bspc.2026.109646
I. Govindharaj, Gnanajeyaraman Rajaram, S. Ravichandran, J. Viswanath, R. Elankavi, J. Raja
Gastric cancer has emerged as a major health concern in recent years, often attributed to improper or unhealthy dietary habits. Early detection remains challenging due to the lack of identifiable symptoms in its initial stages, emphasizing the need for intelligent computational diagnostic methods. This study introduces the Inflate Region-based Tumor Recognition (IRTR) scheme, a novel approach leveraging endoscopy images and trans-mapped learning to detect inflated tumor regions with precision. The proposed scheme employs trans-mapping layers, which are trained to analyze inputs and outputs for identifying high- and low-intensity feature regions. By focusing on external boundaries with elevated trans-intensity levels, the scheme effectively identifies regions exhibiting significant differences across the image. These mapped features are then utilized to train a model that repetitively processes high-to-low and low-to-high intensity transitions across input and output layers, enhancing the recognition of inflated tumor regions. Boundary differentiation, a key component of this approach, further refines detection precision from early endoscopic inputs. Evaluation results demonstrate that the IRTR scheme achieves superior performance, with an accuracy improvement of 9.38%, a precision increase of 12.04%, a specificity gain of 9.69%, and a mean error reduction of 11.04% at maximum intensity rates. This study underscores the potential of trans-mapped learning in advancing early gastric tumor detection.
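The abstract does not specify the trans-mapping layers in code. As a loose illustration of the underlying idea only (flagging positions where intensity flips between high and low regimes, which is where the scheme's boundary differentiation operates), here is a threshold-and-XOR sketch over a toy image; the threshold and the image are invented:

```python
import numpy as np

def intensity_transition_map(img, thresh):
    """Mark pixels whose right or lower neighbour lies on the other side of
    the intensity threshold: a crude high-to-low / low-to-high boundary map."""
    mask = img >= thresh                          # high-intensity region mask
    edges = np.zeros_like(mask)
    edges[:, :-1] |= mask[:, :-1] ^ mask[:, 1:]   # horizontal transitions
    edges[:-1, :] |= mask[:-1, :] ^ mask[1:, :]   # vertical transitions
    return edges

img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0                               # bright "region" on a dark field
edges = intensity_transition_map(img, 0.5)        # transitions ring the bright square
```

A learned trans-mapping layer would replace the fixed threshold with trained filters, but the target signal (sharp intensity transitions at region boundaries) is the same.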
I. Govindharaj, Gnanajeyaraman Rajaram, S. Ravichandran, J. Viswanath, R. Elankavi, J. Raja. "Efficient gastric tumor detection from endoscopic images using trans-mapped learning models." Biomedical Signal Processing and Control, vol. 118, Article 109646 (2026-06-01). DOI: 10.1016/j.bspc.2026.109646
Citations: 0