
Latest Articles from Biomedical Signal Processing and Control

A collaborative multi-task model for immunohistochemical molecular sub-types of multi-modal breast cancer MRI images
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-08 | DOI: 10.1016/j.bspc.2024.107137
Haozhen Xiang, Yuqi Xiong, Yingwei Shen, Jiaxin Li, Deshan Liu
Clinically, personalized treatment developed based on the immunohistochemical (IHC) molecular sub-types of breast cancer can enhance long-term survival rates. Nevertheless, IHC, as an invasive detection method, may pose a risk of tumor metastasis caused by the puncture. This work proposes a collaborative multi-task model based on multi-modal data. Firstly, a dual-stream learning network based on Swin Transformer is employed to extract features from both DCE and T1WI images. Specifically, a Shared Representation (SR) module extracts shared representations, while an Enhancement of Unique features (EU) module enhances modality-specific features. Subsequently, a multi-path classification network is constructed, which comprehensively considers the MRI image features, lesion location, and morphological features. Comprehensive experiments using clinical MRI images show that the proposed method outperforms state-of-the-art approaches, with an accuracy of 85.1%, sensitivity of 84.0%, specificity of 95.1%, and an F1 score of 83.6%.
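The abstract sketches a dual-stream design with shared and modality-specific pathways. The PyTorch snippet below is a minimal illustration of that idea rather than the authors' implementation: two stand-in encoders replace the Swin Transformer streams, a weight-shared projection plays the role of the SR module, per-modality gates approximate the EU module, and a single linear head stands in for the multi-path classifier; all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class DualStreamClassifier(nn.Module):
    """Toy dual-stream model: shared + modality-specific breast-MRI features."""
    def __init__(self, feat_dim=256, n_classes=4):
        super().__init__()
        # stand-in encoders for the DCE and T1WI streams (Swin Transformer in the paper)
        def encoder():
            return nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, feat_dim))
        self.enc_dce, self.enc_t1 = encoder(), encoder()
        self.shared = nn.Linear(feat_dim, feat_dim)   # SR: weights shared by both modalities
        self.gate_dce = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())  # EU gate
        self.gate_t1 = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())   # EU gate
        self.head = nn.Linear(feat_dim * 3, n_classes)

    def forward(self, dce, t1):
        f_dce, f_t1 = self.enc_dce(dce), self.enc_t1(t1)
        shared = self.shared(f_dce) + self.shared(f_t1)   # modality-shared representation
        uniq_dce = f_dce * self.gate_dce(f_dce)           # enhanced modality-specific features
        uniq_t1 = f_t1 * self.gate_t1(f_t1)
        return self.head(torch.cat([shared, uniq_dce, uniq_t1], dim=1))

model = DualStreamClassifier()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```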
Citations: 0
CL-MRI: Self-Supervised contrastive learning to improve the accuracy of undersampled MRI reconstruction
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-08 | DOI: 10.1016/j.bspc.2024.107185
Mevan Ekanayake, Zhifeng Chen, Mehrtash Harandi, Gary Egan, Zhaolin Chen
Deep learning (DL) methods have emerged as the state-of-the-art for Magnetic Resonance Imaging (MRI) reconstruction. DL methods typically involve training deep neural networks to take undersampled MRI images as input and transform them into high-quality MRI images through data-driven processes. However, deep learning models often fail at higher levels of undersampling because the input lacks sufficient information, which is crucial for producing high-quality MRI images. Thus, optimizing the information content at the input of a DL reconstruction model could significantly improve reconstruction accuracy. In this paper, we introduce a self-supervised pretraining procedure using contrastive learning to improve the accuracy of undersampled DL MRI reconstruction. We use contrastive learning to transform the MRI image representations into a latent space that maximizes mutual information among different undersampled representations and optimizes the information content at the input of the downstream DL reconstruction models. Our experiments demonstrate improved reconstruction accuracy across a range of acceleration factors and datasets, both quantitatively and qualitatively. Furthermore, our extended experiments validate the proposed framework’s robustness under adversarial conditions, such as measurement noise, different k-space sampling patterns, and pathological abnormalities, and also demonstrate transfer learning capabilities on MRI datasets with completely different anatomy. Additionally, we conducted experiments to visualize and analyze the properties of the proposed MRI contrastive learning latent space. Code is available online.
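As a concrete picture of the contrastive pretraining step described above, here is a minimal sketch (assumptions throughout, not the authors' code): two differently undersampled views of the same scans are passed through a shared encoder, and an InfoNCE-style loss pulls matching latents together while pushing apart latents from different scans.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) latents of two undersampled views of the same scans."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage: a shared encoder applied to two undersampled views of the same batch
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 128))
view1, view2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
```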
Citations: 0
MAR-GAN: Multi attention residual generative adversarial network for tumor segmentation in breast ultrasounds
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-08 | DOI: 10.1016/j.bspc.2024.107171
Imran Ul Haq, Haider Ali, Yuefeng Li, Zhe Liu

Introduction

Ultrasonography is among the most regularly used methods for early detection of breast cancer. Automatic and precise segmentation of breast masses in breast ultrasound (US) images is essential but remains a challenge due to several sources of uncertainty, such as the high variety of tumor shapes and sizes, obscure tumor borders, very low SNR, and speckle noise.

Method

To deal with these uncertainties, this work presents an effective and automated GAN-based approach for tumor segmentation in breast US, named MAR-GAN, to extract rich, informative features from US images. In MAR-GAN, the capabilities of the traditional encoder-decoder generator were enhanced by multiple modifications. Multi-scale residual blocks were used to retrieve additional aspects of the tumor area for a more precise description. A novel boundary and foreground attention (BFA) module is proposed to increase attention to the tumor region and boundary curve. The squeeze and excitation (SE) and adaptive context selection (ACS) modules were added to increase representational capability on the encoder side and to facilitate better selection and aggregation of contextual information on the decoder side, respectively. The L1-norm and structural similarity index metric (SSIM) were added to MAR-GAN’s loss function to capture rich local context information from the tumors’ surroundings.

Results

Two breast US datasets were utilized to evaluate the effectiveness of the suggested approach. On the BUSI dataset, our network outperformed several state-of-the-art segmentation models in IoU and Dice metrics, scoring 89.27 % and 94.21 %, respectively. The suggested approach achieved encouraging results on the UDIAT dataset, with IoU and Dice scores of 82.75 % and 88.54 %, respectively.
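The Method section above states that L1 and SSIM terms are added to MAR-GAN's loss. The snippet below is a rough sketch of such a composite generator objective under stated assumptions: the adversarial term is a standard non-saturating BCE loss, the SSIM is a simplified global (single-window) variant rather than the windowed metric, and the weights are illustrative.

```python
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM computed over the whole image instead of local windows."""
    mx, my = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    vx, vy = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mx[:, None, None, None]) * (y - my[:, None, None, None])).mean(dim=(1, 2, 3))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def generator_loss(disc_fake_logits, pred_mask, true_mask, w_adv=1.0, w_l1=10.0, w_ssim=5.0):
    adv = F.binary_cross_entropy_with_logits(disc_fake_logits, torch.ones_like(disc_fake_logits))
    l1 = F.l1_loss(pred_mask, true_mask)
    ssim = global_ssim(pred_mask, true_mask).mean()
    return w_adv * adv + w_l1 * l1 + w_ssim * (1.0 - ssim)

pred = torch.rand(4, 1, 64, 64, requires_grad=True)       # toy predicted masks
loss = generator_loss(torch.randn(4, 1), pred, torch.rand(4, 1, 64, 64))
loss.backward()
```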
Citations: 0
A deep learning-based comprehensive robotic system for lower limb rehabilitation
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107178
Prithwijit Mukherjee, Anisha Halder Roy
In the modern era, a significant percentage of people around the world suffer from knee pain-related problems. Knee pain can be alleviated by performing knee rehabilitation exercises in the correct posture on a regular basis. In our research, an attention mechanism-based CNN-TLSTM (Convolutional Neural Network-tanh Long Short-Term Memory) network has been proposed for assessing the knee pain level of a person. Here, electroencephalogram (EEG) signals of the frontal, parietal, and temporal lobes, electromyography (EMG) signals of the hamstring and quadriceps muscles, and the knee bending angle have been used for knee pain detection. First, the CNN network is utilized for automated feature extraction from the EEG, knee bending angle, and EMG data, and subsequently, the TLSTM network is used as a classifier. The trained CNN-TLSTM model can classify the knee pain level of a person into five categories, namely no pain, low pain, medium pain, moderate pain, and high pain, with an overall accuracy of 95.88 %. On the hardware side, a prototype of an automated robotic knee rehabilitation system has been designed to help a person perform three rehabilitation exercises, i.e., sitting knee bending, straight leg raise, and active knee bending, according to his/her pain level, without the presence of any physiotherapist. The novelty of our research lies in (i) designing a novel deep learning-based classifier model for broadly classifying knee pain into five categories, (ii) introducing an attention mechanism into the TLSTM network to boost its classification performance, and (iii) developing a user-friendly rehabilitation device for knee rehabilitation.
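A minimal sketch of the kind of CNN plus LSTM-with-attention pipeline described above (an illustration, not the authors' CNN-TLSTM): a 1-D CNN extracts features from the stacked EEG/EMG/knee-angle channels, an ordinary LSTM stands in for the TLSTM, and a simple attention layer pools the sequence before the five-class pain head. Channel counts and sizes are assumptions.

```python
import torch
import torch.nn as nn

class CnnLstmAttention(nn.Module):
    def __init__(self, in_channels=8, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.attn = nn.Linear(64, 1)             # scores one weight per time step
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)          # -> (batch, time', 64)
        h, _ = self.lstm(h)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        pooled = (w * h).sum(dim=1)
        return self.head(pooled)

model = CnnLstmAttention()
print(model(torch.randn(2, 8, 512)).shape)       # torch.Size([2, 5])
```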
Citations: 0
CFI-ViT: A coarse-to-fine inference based vision transformer for gastric cancer subtype detection using pathological images
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107160
Xinghang Wang, Haibo Tao, Bin Wang, Huaiping Jin, Zhenhui Li
Accurate detection of histopathological cancer subtypes is crucial for personalized treatment. Currently, deep learning methods based on histopathology images have become an effective solution to this problem. However, existing deep learning methods for histopathology image classification often suffer from high computational complexity, fail to account for the variability of different regions, and fail to balance attention to local and global information effectively. To address these issues, we propose a coarse-to-fine inference based vision transformer (ViT) network (CFI-ViT) for pathological image detection of gastric cancer subtypes. CFI-ViT combines global attention with discriminative and differentiable modules to achieve two-stage inference. In the coarse inference stage, a ViT model with relative position embedding is employed to extract global information from the input images. If the critical information is not sufficiently identified, the differentiable module extracts discriminative local image regions for fine-grained screening in the fine inference stage. The effectiveness and superiority of the proposed CFI-ViT method have been validated on three pathological image datasets of gastric cancer, including one private dataset clinically collected from Yunnan Cancer Hospital in China and two publicly available datasets, i.e., HE-GHI-DS and TCGA-STAD. The experimental results demonstrate that CFI-ViT achieves superior recognition accuracy and generalization performance compared to traditional methods, while using only 80 % of the computational resources required by the ViT model.
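The coarse-to-fine control flow can be pictured with a short sketch. The following is an assumption-laden illustration rather than the CFI-ViT code: a global model classifies the whole image first, and only low-confidence cases trigger classification of discriminative crops, whose predictions are fused with the coarse ones. The stand-in models, crop function, and threshold are all hypothetical.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_predict(image, coarse_model, fine_model, crop_fn, threshold=0.9):
    """image: (1, C, H, W); crop_fn returns a batch of discriminative patches."""
    coarse_logits = coarse_model(image)
    probs = F.softmax(coarse_logits, dim=1)
    if probs.max() >= threshold:                 # confident: stop at the coarse stage
        return coarse_logits
    patches = crop_fn(image)                     # (k, C, h, w) local regions
    fine_logits = fine_model(patches).mean(dim=0, keepdim=True)
    return 0.5 * (coarse_logits + fine_logits)   # fuse global and local evidence

# toy usage with stand-in models and a fixed cropping rule
coarse = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
fine = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 16 * 16, 2))
crops = lambda img: torch.stack([img[0, :, :16, :16], img[0, :, 16:, 16:]])
print(coarse_to_fine_predict(torch.randn(1, 3, 32, 32), coarse, fine, crops))
```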
Citations: 0
Detection of severe coronary artery disease based on clinical phonocardiogram and large kernel convolution interaction network
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107186
Chongbo Yin, Jian Qin, Yan Shi, Yineng Zheng, Xingming Guo
Heart sound auscultation coupled with machine learning algorithms is a risk-free and low-cost method for coronary artery disease (CAD) detection. However, current studies mainly focus on CAD screening, namely classifying CAD versus non-CAD, due to limited clinical data and algorithm performance. This leaves a gap in investigating CAD severity from the phonocardiogram (PCG). To address this issue, we first establish a clinical PCG dataset for CAD patients. The dataset includes 150 subjects: 80 severe CAD and 70 non-severe CAD patients. Then, we propose the large kernel convolution interaction network (LKCIN) to detect CAD severity. It integrates automatic feature extraction and pattern classification and simplifies PCG processing steps. The developed large kernel interaction block (LKIB) has three properties: long-distance dependency, local receptive field, and channel interaction, which efficiently improve feature extraction capabilities in LKCIN. In addition, a separate downsampling block, placed after the LKIBs, is proposed to alleviate feature loss during forward propagation. Experiments are performed on the clinical PCG data, and LKCIN obtains good classification performance, with an accuracy of 85.97 %, sensitivity of 85.64 %, and specificity of 86.26 %. Our study goes beyond conventional CAD screening and provides a reliable option for CAD severity detection in clinical practice.
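As a rough picture of what a large-kernel interaction block might look like (an assumption, not the authors' exact LKIB), the sketch below uses a depthwise 1-D convolution with a large kernel for long-range temporal context over the PCG signal, a pointwise convolution for channel interaction, and a residual connection that preserves the input.

```python
import torch
import torch.nn as nn

class LargeKernelInteractionBlock(nn.Module):
    def __init__(self, channels=32, kernel_size=31):
        super().__init__()
        # depthwise conv: large temporal receptive field, one filter per channel
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        # pointwise conv: mixes information across channels (channel interaction)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.GELU()

    def forward(self, x):                        # x: (batch, channels, time)
        return x + self.act(self.norm(self.pointwise(self.depthwise(x))))

block = LargeKernelInteractionBlock()
print(block(torch.randn(2, 32, 1000)).shape)     # torch.Size([2, 32, 1000])
```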
Citations: 0
Intermediary-guided windowed attention Aggregation network for fine-grained characterization of Major Depressive Disorder fMRI
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107166
Xue Yuan, Maozhou Chen, Peng Ding, Anan Gan, Keren Shi, Anming Gong, Lei Zhao, Tianwen Li, Yunfa Fu, Yuqi Cheng

Objectives

Establishing objective and quantitative imaging markers at the individual level can assist in the accurate diagnosis of Major Depressive Disorder (MDD). However, the clinical heterogeneity of MDD leads to a decrease in recognition accuracy. To address this issue, we propose the Windowed Attention Aggregation Network (WAAN) for a medium-sized functional Magnetic Resonance Imaging (fMRI) dataset comprising 111 MDD patients and 106 Healthy Controls (HC).

Methods

The proposed WAAN model is a dynamic temporal model that contains two important components, Inner-Window Self-Attention (IWSA) and Cross-Window Self-Attention (CWSA), to characterize the MDD-fMRI data at a fine-grained level and fuse global temporal information. In addition, to optimize WAAN, a new Point to Domain Loss (p2d Loss) function is proposed, which provides intermediate guidance for the model to learn class centers with smaller class deviations, thus improving intra-class feature density.

Results

The proposed WAAN achieved an accuracy of 83.8 % (±1.4 %) in the MDD identification task on the medium-sized site. The right superior orbitofrontal gyrus and right superior temporal gyrus (pole) were found to be brain regions with high classification attribution in MDD patients, and the hippocampus showed stable attributions. The effect of temporal parameters on classification was also explored, and time-window parameters yielding high attributions were obtained.

Significance

The proposed WAAN is expected to improve the accuracy of personalized identification of MDD. This study helps to find the target brain regions for treatment or intervention of MDD, and provides better scanning time window parameters for MDD-fMRI analysis.
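The Methods above introduce a point-to-domain loss that guides embeddings toward class centers with small deviations. The sketch below illustrates a generic center-pulling loss in that spirit; the exact p2d formulation is the authors', and everything here (learnable centers, weighting, dimensions) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterPullLoss(nn.Module):
    """Cross-entropy plus a term that pulls each embedding toward its class center."""
    def __init__(self, n_classes=2, dim=64, weight=0.1):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, dim))  # learnable class centers
        self.weight = weight

    def forward(self, logits, embeddings, labels):
        ce = F.cross_entropy(logits, labels)
        pull = (embeddings - self.centers[labels]).pow(2).sum(dim=1).mean()
        return ce + self.weight * pull

criterion = CenterPullLoss()
loss = criterion(torch.randn(8, 2), torch.randn(8, 64), torch.randint(0, 2, (8,)))
loss.backward()
```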
Citations: 0
A lightweight convolutional transformer neural network for EEG-based depression recognition
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107112
Pengfei Hou, Xiaowei Li, Jing Zhu, Bin Hu (Fellow, IEEE)
Depression is a serious mental health condition affecting hundreds of millions of people worldwide. The electroencephalogram (EEG) is a spontaneous and rhythmic physiological signal capable of measuring the brain activity of subjects, serving as an objective biomarker for depression research. This paper proposes a lightweight Convolutional Transformer neural network (LCTNN) for depression identification. LCTNN has three significant characteristics: (1) It combines the advantages of both CNN and Transformer to learn rich EEG signal representations from local to global perspectives in the time domain. (2) A Channel Modulator (CM) dynamically adjusts the contribution of each electrode channel of the EEG signal to depression identification. (3) Considering that the high temporal resolution of EEG signals imposes a significant burden on computing self-attention, LCTNN replaces canonical self-attention with sparse attention, reducing its spatiotemporal complexity to O(L log L). Furthermore, this paper incorporates an attention pooling operation between two Transformer layers, further reducing the spatial complexity. Compared to other deep learning methods, LCTNN achieved state-of-the-art performance on the majority of metrics across two datasets. This indicates that LCTNN offers new insights into the relationship between EEG signals and depression, providing a valuable reference for the future development of depression diagnosis and treatment.
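The attention-pooling step mentioned above, placed between two Transformer layers, can be sketched as follows (illustrative assumptions, not the authors' code): a small set of learnable query tokens attends over the full EEG token sequence, so the second layer only has to process the much shorter pooled sequence.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim=64, n_queries=16, n_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, n_queries, dim))  # learnable query tokens
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens):                    # tokens: (batch, seq_len, dim)
        q = self.queries.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)  # (batch, n_queries, dim)
        return pooled

layer1 = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
layer2 = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
pool = AttentionPool()
x = torch.randn(2, 512, 64)                       # long EEG token sequence
out = layer2(pool(layer1(x)))                     # second layer sees only 16 tokens
print(out.shape)                                  # torch.Size([2, 16, 64])
```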
Citations: 0
A novel lung cancer detection adopting Radiomic feature extraction with Locust assisted CS based CNN classifier
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-06 | DOI: 10.1016/j.bspc.2024.107139
P. Lavanya, K. Vidhya
Cancer is regarded as one of the most life-threatening diseases, causing a significant number of fatalities every year. Among the different cancer types, lung cancer is considered the most destructive, with the largest mortality rate. Therefore, an effective and accurate technique for detecting lung cancer is crucial for providing adequate treatment on time. This study presents a novel deep learning-based lung cancer detection method. The image processing pipeline comprises four major phases. Initially, the input images are pre-processed with an Adaptive Wiener filter, which eliminates noise in the image without any edge loss. Then, segmentation is performed using a Cascaded K-means Fuzzy C-means (KM-FCM) algorithm. Feature extraction and selection are carried out using a Radiomics approach, which extracts and selects meaningful features that facilitate cancer detection. The final stage of the pipeline is classification, which is accomplished by a novel Locust assisted Crow Search (CS) based Convolutional Neural Network (CNN) classifier. The proposed digital image processing technique displays impressive performance in detecting lung cancer, with an accuracy of 96.33%.
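The cascaded K-means / fuzzy C-means segmentation stage can be illustrated with a small sketch (assumptions throughout, not the authors' pipeline): K-means provides crisp initial intensity clusters, and a few fuzzy C-means iterations refine them into soft memberships. The cluster count and fuzzifier are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cascaded_km_fcm(image, n_clusters=3, m=2.0, n_iter=20):
    x = image.reshape(-1, 1).astype(float)                 # pixels as 1-D intensity features
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(x).cluster_centers_
    for _ in range(n_iter):                                # fuzzy C-means refinement
        dist = np.abs(x - centers.T) + 1e-9                # (n_pixels, n_clusters)
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                  # soft memberships
        centers = ((u ** m).T @ x) / (u ** m).sum(axis=0)[:, None]
    labels = u.argmax(axis=1).reshape(image.shape)
    return labels, centers

labels, centers = cascaded_km_fcm(np.random.rand(64, 64))
print(labels.shape, centers.ravel())
```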
Citations: 0
Topological feature search method for multichannel EEG: Application in ADHD classification
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-05 | DOI: 10.1016/j.bspc.2024.107153
Tianming Cai, Guoying Zhao, Junbin Zang, Chen Zong, Zhidong Zhang, Chenyang Xue
In recent years, the preliminary diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) using electroencephalography (EEG) has attracted attention from researchers. EEG, known for its expediency and efficiency, plays a pivotal role in the diagnosis and treatment of ADHD. However, the non-stationarity of EEG signals and inter-subject variability pose challenges to the diagnostic and classification processes. Topological Data Analysis (TDA) offers a novel perspective for ADHD classification, diverging from traditional time–frequency domain features. However, conventional TDA models are restricted to single-channel time series and are susceptible to noise, leading to the loss of topological features in persistence diagrams. This paper presents an enhanced TDA approach applicable to multi-channel EEG in ADHD. Initially, optimal input parameters for multi-channel EEG are determined. Subsequently, each channel’s EEG undergoes phase space reconstruction (PSR), followed by the use of k-Power Distance to Measure (k-PDTM) to approximate ideal point clouds. Then, the multi-dimensional time series are re-embedded, and TDA is applied to obtain topological feature information. Gaussian function-based Multivariate Kernel Density Estimation (MKDE) is employed on the merged persistence diagram to filter out the desired topological feature mappings. Finally, the persistence image (PI) method is employed to extract topological features, and the influence of various weighting functions on the results is discussed. The effectiveness of our method is evaluated using the IEEE ADHD dataset. Results demonstrate that the accuracy, sensitivity, and specificity reach 78.27%, 80.62%, and 75.63%, respectively. Compared with traditional TDA methods, our method achieves a clear improvement and outperforms typical nonlinear descriptors. These findings indicate that our method exhibits higher precision and robustness.
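The phase space reconstruction step can be illustrated with a short time-delay embedding sketch, which turns a single EEG channel into a point cloud that TDA can operate on; the delay and embedding dimension below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def delay_embed(signal, dim=3, tau=5):
    """Return an (n_points, dim) point cloud of delayed copies of the signal."""
    n_points = len(signal) - (dim - 1) * tau
    return np.stack([signal[i * tau: i * tau + n_points] for i in range(dim)], axis=1)

t = np.linspace(0, 4 * np.pi, 500)
cloud = delay_embed(np.sin(t) + 0.05 * np.random.randn(500))  # toy single-channel signal
print(cloud.shape)  # (490, 3)
```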
Citations: 0